All right. Hello, everybody. Welcome to multi-party computation session number one. We're going to start off with "Secure Computation from Elastic Noisy Channels," work by Dakshita Khurana, Hemanta Maji, and Amit Sahai, and Dakshita will give us the talk.

I'm Dakshita Khurana from UCLA, and this is joint work with Hemanta Maji and Amit Sahai. Thanks for the introduction. Let me start by giving you some background on secure multi-party computation. Suppose there is a set of parties, each of whom has a private input, and they would like to jointly evaluate some function on these inputs such that they all get the output of the functionality but learn nothing else about each other's inputs. Now, if they had access to a trusted third party that would do this job for them, then things would be easy: no matter how any party behaved, it still could not learn the input of any other participant. But in reality, parties might not always have access to such a trusted party, and they might have to emulate it themselves by running a protocol. The security guarantee that we would like from this protocol is that even if a single party, or a subset of the parties, gets corrupted, the inputs of the honest parties remain hidden. It turns out that this most general notion of secure computation is impossible in the plain model unless you are willing to make assumptions. In particular, in order to perform two-party or multi-party computation, we have to make either computational assumptions, assumptions on the corruption structure of the parties, assumptions about randomness already available to the parties, or some sort of physical assumption. In this paper, what we are concerned with is the physical assumption of noisy channels. In order to realize general secure computation, it suffices to realize oblivious transfer from noisy channels.
Now, oblivious transfer is a two-party primitive where a sender has inputs x0 and x1, and a receiver has as input a choice bit c. The oblivious transfer functionality outputs xc to the receiver and generates no output for the sender. The security guarantees we want are that the sender should not be able to recover c, the choice bit of the receiver, and that the message the receiver did not choose to get should remain completely hidden from the receiver. Our goal will be to implement this functionality unconditionally based on any elastic noisy channel.

Now let me tell you what an elastic noisy channel is. An elastic erasure channel considers the following functionality. Suppose a sender sent some bits to a receiver over an erasure channel, so the receiver got some of the bits accurately and got erasures at the other positions; for example, this was an erasure channel that erased every bit independently with probability 1/3. Now, instead of this receiver, there could exist a malicious receiver that places a much better antenna on the receiving end, which decreases the probability of erasure on the receiver's side: with this better antenna, the receiver gets erasures only with probability 1/6. We would like to design protocols that work correctly even if the receiver has erasure probability 1/3, but are secure if the receiver is malicious and has instead installed a better antenna that increases his reception probability, where the sender does not know which antenna the receiver is using. Moving to the more general case of elastic binary symmetric channels: in these channels, there is a transmitter that sends bits to a receiver, and some of the bits get flipped, so not all bits arrive as they were sent.
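As a point of reference before the protocols, the ideal (trusted-party) functionality just described can be sketched in a few lines of Python; the function name and interface here are my own, purely for illustration:

```python
def ideal_ot(x0, x1, c):
    """Trusted-party view of 1-out-of-2 oblivious transfer.

    The receiver learns x_c and nothing about the other input;
    the sender produces no output and learns nothing about c.
    """
    sender_output = None            # the sender gets nothing back
    receiver_output = x1 if c == 1 else x0
    return sender_output, receiver_output

print(ideal_ot(0, 1, c=1))  # → (None, 1)
```

Everything that follows is about emulating this one-line trusted party using only a noisy channel between the two parties.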
For example, this could be a BSC that flips the sender's bits with probability 1/3. A malicious receiver, similarly to the erasure case, can install a better antenna on his end, which results in him getting far fewer flips: instead of flips with probability 1/3, the receiver now gets flips only with probability 1/6. He has increased the reliability of communication on his side without the sender knowing it. So again, we would like protocols that work correctly even if the receiver has flip probability 1/3, but are secure even in the presence of receivers who make their communication much more reliable. And it turns out that protocols that realize oblivious transfer from standard binary symmetric channels actually fail in this setting.

This adversarial model is not entirely new. It was introduced by Damgård et al. in a paper on unfair noisy channels, where they considered a receiver, similar to our case, who installs this antenna and increases his reliability. But they also consider a malicious sender who installs a leaky sending device that gives him extra information about which bits the receiver obtained. In a standard BSC, the sender is not supposed to know whether a bit got flipped or not, but in an unfair noisy channel, a malicious sender can gain extra leakage that gives him this information about some of the bits he sent. So in that model, a malicious receiver can change the channel characteristic by making it more reliable, and a malicious sender can learn extra leakage about which bits got flipped. In the context of wireless channels specifically, such a sender-side adversary may not be possible unless it can get really close to the receiver. This is one of the physical motivations for studying a scenario where only a corrupted receiver places a larger antenna and the sender cannot cheat in this way.
The other reason we consider the weaker setting of elastic noisy channels is that the unfair setting suffers from strong impossibility results: it is impossible to realize oblivious transfer for a large range of parameters of unfair noisy channels, whereas that is not true for elastic noisy channels. This impossibility was also shown in the same paper. There have been 25 years of research on obtaining oblivious transfer from noisy channels, and, as I already told you, unfair noisy channels are a stronger model than elastic channels, so techniques developed for the unfair case can already be applied to the elastic setting to get feasibility results for some parameters. Those parameters are shaded here in the green region. The red region shows the range of parameters of elastic noisy channels for which we develop new techniques, specific to this setting, to get OT from a wider range of elastic channels. For example, here the x-axis denotes the malicious flip probability. If the biggest, baddest receiver on the market has a flip probability of 0.1, then in previous work the honest receiver needed a flip probability of at most 0.2, whereas after our work, it suffices for the honest receiver to use a device with a flip probability as high as 0.4. That is just some perspective on the parameters we obtain.

Now I'll move directly to our techniques. Let me start with a warm-up example for the case of elastic erasure channels. In this setting, Alice sends a bit b over an erasure channel, and Bob either receives the bit or receives an erasure. As I told you already, the security guarantee is that Alice does not know which bits got erased, and Bob, if he got an erasure, does not know the bit that was sent. So in an elastic erasure channel, if the receiver is honest, his bits get dropped with probability alpha.
And if the receiver is malicious, his bits get dropped with a probability beta that is smaller than alpha, so he gets fewer drops. The first goal will be to illustrate a protocol that obtains oblivious transfer from any elastic erasure channel modeled like this. The protocol I will show you is not entirely new; these techniques were already implicit in a lot of prior work. The main new techniques in our paper are for the setting of elastic BSCs, which I will get to after developing some intuition.

So here's the protocol. Suppose the sender has inputs x0 and x1 for the OT, and suppose the receiver's choice bit is 0; I am just doing this for simplicity, and he can obviously use any choice bit in the protocol. The sender picks a bunch of random bits and sends them over this erasure channel to the receiver. The receiver obtains some bits correctly and obtains erasures at other places. Now, the receiver reorganizes the indices of these bits into two random, equal-sized sets as follows: he creates a partition such that one set consists only of indices at which he saw the bit, and the other set contains all the erasures, possibly together with some bits that he saw. The receiver then sends the indices in both sets to the sender; say the sender obtains the index sets {0, 1, 3} and {2, 4, 5}. Now the sender XORs his first input x0 with the bits b0, b1, and b3 corresponding to the first three indices he received, obtains the masked bit r0, and sends it to the receiver. Then he XORs his other input x1 with b2, b4, and b5 and sends the result to the receiver. And it turns out that this protocol is secure.
First, correctness: since the receiver knows all three bits corresponding to at least one of the sets, he can recover that particular secret; in this case his choice bit was 0, so he can recover x0. Now for security. Since the sender does not know whether the receiver got an erasure or not, from the sender's point of view the message the receiver sends, namely these index sets, looks like a completely random partition of the indices 0 through 5. In particular, the sender does not know which of the two sets contains the erasures, and therefore he cannot predict the receiver's choice bit c. On the other hand, consider a malicious receiver who is trying to predict the other input of the sender. He cannot do that, because even if he is using an antenna that erases only very few bits, sending sufficiently many bits guarantees that at least one of them will be erased, no matter how powerful an antenna the receiver uses, as long as the channel is not completely reliable. And just one erasure is sufficient for security, because the bit corresponding to that erasure acts as a random mask on the other input of the sender.

That was a protocol for obtaining oblivious transfer from any elastic erasure channel. Now let us move on to our more interesting setting of an elastic binary symmetric channel. In a binary symmetric channel, Alice samples a random bit b and sends it over the channel to Bob. The channel samples an error bit from a Bernoulli distribution, adds this error to Alice's bit, and sends the result to Bob. The security guarantee of a BSC is that neither Alice nor Bob can tell which bits got flipped. Now, in the elastic setting, if the receiver is honest, he gets bit flips with probability alpha.
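To make the warm-up concrete, here is a minimal Python simulation of one honest run of this erasure-channel protocol. All names are my own, and the retry loop when too few bits arrive is a simplification of my own; it is a sketch of the idea, not the paper's exact protocol:

```python
import random

def xor_all(bits):
    # XOR of an iterable of bits
    acc = 0
    for b in bits:
        acc ^= b
    return acc

def erasure_ot(x0, x1, c, n=40, p_erase=1/3, rng=random):
    k = n // 2
    while True:
        # Sender picks n random bits and sends them over the erasure channel.
        bits = [rng.randrange(2) for _ in range(n)]
        recv = [None if rng.random() < p_erase else b for b in bits]
        seen = [i for i, b in enumerate(recv) if b is not None]
        erased = [i for i, b in enumerate(recv) if b is None]
        if len(seen) >= k:
            break  # retry if an equal-sized partition is impossible
    # Receiver builds two equal-sized index sets: one containing only
    # fully-known positions, the other containing every erasure plus filler.
    rng.shuffle(seen)
    good, bad = seen[:k], seen[k:] + erased
    # He orders them so that set number c is the fully-known one.
    sets = (good, bad) if c == 0 else (bad, good)
    # Sender masks each input with the XOR of his bits at those indices.
    r0 = x0 ^ xor_all(bits[i] for i in sets[0])
    r1 = x1 ^ xor_all(bits[i] for i in sets[1])
    # Receiver unmasks the message for his choice bit using the bits he saw.
    return (r0 if c == 0 else r1) ^ xor_all(recv[i] for i in good)
```

An honest run always outputs the chosen input, since the "good" set contains no erasures; security rests on the argument in the text, not on anything extra in this sketch.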
And if the receiver is adversarial, he gets bit flips with a smaller probability beta, with beta less than alpha. Again, recall that we want to design a protocol that is secure against an adversarial receiver but is correct even if the receiver is honest and is actually getting bit flips with the higher probability. Our main theorem is that oblivious transfer can be constructed from all elastic BSCs whose beta and alpha lie in a certain range, and this range is exactly the one illustrated by the red area of the graph I showed in the beginning.

Before going into the techniques, let me give some quick intuition about what the capacity of a channel means. Very intuitively, the capacity of a channel is a measure of how much information the output of the channel contains. The capacity is inversely related to the flip probability of a binary symmetric channel: the higher the flip probability, the lower the capacity, because less useful information is going across. That is the intuition. So in the case of our elastic BSC, the channel with the higher flip probability alpha has a low capacity, and the channel with the lower flip probability beta has a much higher capacity. Here is an illustration of these capacities for an example scenario where the honest flip probability is 1/3 and the malicious flip probability is smaller, 1/6. As long as the honest capacity is smaller than the malicious capacity, if you send information over these channels, a malicious receiver is always getting more than an honest receiver.
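As a quick sanity check on this intuition, the standard textbook capacity formula for a BSC, C(p) = 1 - h(p) with h the binary entropy, can be evaluated at the running example; this is basic information theory rather than anything specific to the paper:

```python
from math import log2

def h2(p):
    # binary entropy in bits
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    # capacity of a binary symmetric channel with flip probability p
    return 1 - h2(p)

honest = bsc_capacity(1 / 3)     # ≈ 0.082 bits per channel use
malicious = bsc_capacity(1 / 6)  # ≈ 0.350 bits per channel use
print(honest, malicious)
```

As expected, the malicious receiver's channel carries several times more information per use than the honest receiver's.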
And now we would like to create some sort of imbalance, to ensure that an honest receiver, at least in some special cases, is able to get more information than a malicious receiver gets overall. Here is the main idea behind that. Consider an elastic BSC with honest flip probability 1/3 and adversarial flip probability 1/6, and start with the following idea: to send a 0, the sender creates two copies of the 0 and sends them over two invocations of this channel to the receiver; to send a 1, he sends two copies of the 1 over two invocations of the channel. The receiver can get one of four possible outcomes. If the sender sent 00, the probability that the receiver obtains 00 is 2/3 times 2/3, which is 4/9; the probability that he obtains 11 is the probability that both bits got flipped, 1/3 times 1/3, which is 1/9; and the probability that he obtains 01 or 10 is also 4/9. On the other hand, if the sender sent 11, the receiver obtains 00 with probability 1/9, 11 with probability 4/9, and 01 or 10 with probability 4/9. So from the receiver's point of view, if he sees 00, it is much more likely that what was sent was also 00: the chance that the sender was actually sending 00 when the receiver got 00 is 4/5, because the ratio of the likelihoods is 4 to 1. And if the receiver saw 11, the probability that the sender was sending 00 is 1/5. So indeed, this part of the channel can be seen as a new BSC of its own, with flip probability 1/5. The other part of the channel behaves differently: when the receiver obtains 01 or 10, he cannot tell whether the sender was sending 00 or 11, because a mixed outcome is equally likely to come from 00 as from 11.
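The arithmetic in this step can be checked with exact rational arithmetic; this is a small verification script of my own, just re-deriving the numbers quoted above:

```python
from fractions import Fraction

alpha = Fraction(1, 3)           # honest flip probability per channel use
keep, flip = 1 - alpha, alpha

# Outcome probabilities when the sender transmits 00 over two uses
p_agree_same = keep * keep       # receiver sees 00: 4/9
p_agree_flip = flip * flip       # receiver sees 11 (both flipped): 1/9
p_mixed = 2 * keep * flip        # receiver sees 01 or 10: 4/9

# With a uniform prior over inputs 00 and 11, the agreeing outcomes form
# a virtual BSC whose flip probability is the posterior of the wrong input:
flip_prob_agree = p_agree_flip / (p_agree_same + p_agree_flip)  # 1/5

# A mixed outcome is equally likely under either input: a virtual erasure.
posterior_mixed = p_mixed / (p_mixed + p_mixed)                 # 1/2

print(flip_prob_agree, posterior_mixed)
```

So the 2-repetition encoding exactly reproduces the 4-to-1 odds and the uninformative mixed outcome described in the talk.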
So therefore, if you use a 2-repetition encoding, the channel behaves like a virtual combination of two channels: one is a BSC with flip probability 1/5, and the other is an erasure channel. The crux of our protocol will be to ensure that the honest receiver never needs to decode the erasure part; the only thing we will require of an honest receiver, who only wants one bit, is that he decode the BSC part. A malicious receiver, on the other hand, wants to obtain both inputs of the sender in the OT, so he will have to decode things that appear on both kinds of channels, and therefore the quantity we consider for a malicious receiver is the average of the information sent over both channels.

So if we use a repetition code of length 2 in our running example, the best honest capacity increases, because it now corresponds to flip probability 1/5, as I showed you; the average malicious capacity also increases, but a little less. If we use a 3-repetition code, the BSC part has an even smaller flip probability and an even higher capacity, so the best honest capacity increases even more, whereas the average adversarial capacity again increases, but not as much; this is what the capacities look like for our running example with a repetition code of length 3. With code length 4, again the best honest capacity increases while the average malicious capacity does not increase as much, and you can see that the gap between them is shrinking. If we keep doing these repetitions, there comes a stage, at code length 7, where the best honest capacity exceeds the average malicious capacity for this example.
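One can watch this crossover happen numerically: per repetition length n, compare the capacity of the virtual BSC the honest receiver decodes (the event that all n noisy copies agree) against the average information I(B; Y^n) that a BSC(beta) receiver extracts per block. This is my own back-of-the-envelope reconstruction of the comparison in the talk, not the paper's exact analysis:

```python
from math import comb, log2

def h2(p):
    # binary entropy in bits
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def honest_best_capacity(alpha, n):
    # Flip probability of the virtual BSC seen when all n copies agree.
    p = alpha**n / (alpha**n + (1 - alpha)**n)
    return 1 - h2(p)

def malicious_avg_info(beta, n):
    # I(B; Y^n) per n-repetition block for a BSC(beta) receiver,
    # computed as 1 minus the expected posterior entropy of the input bit.
    expected_h = 0.0
    for w in range(n + 1):                       # w = number of received 1s
        like0 = beta**w * (1 - beta)**(n - w)    # likelihood if 0 was sent
        like1 = beta**(n - w) * (1 - beta)**w    # likelihood if 1 was sent
        p_w = 0.5 * comb(n, w) * (like0 + like1)
        expected_h += p_w * h2(like0 / (like0 + like1))
    return 1 - expected_h

alpha, beta = 1 / 3, 1 / 6
n = 1
while honest_best_capacity(alpha, n) <= malicious_avg_info(beta, n):
    n += 1
print(n)  # → 7: first repetition length where honest beats malicious
```

For this running example the honest side first wins at n = 7, matching the stage described in the talk; for other (alpha, beta) pairs the loop may never terminate, which is exactly the limiting curve discussed next.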
And at this point, we are done, because we have ensured that there is some channel on which the honest receiver gets more than the malicious receiver gets on average. But interestingly, this does not go through for all possible parameters beta and alpha. In particular, for some beta and alpha, there exists no repetition parameter at which the green line exceeds the red line, no matter how long we keep applying this technique. And this is the limiting curve: it denotes the range of values for which we can obtain OT, and beyond this curve, our technique fails. So our main theorem is that OT can be obtained from any elastic BSC whose parameters lie in this range. And with that, I'd like to conclude my talk. Thank you.

Thank you very much. We have time for some questions. Do you need to know beta when you're designing the repetition length? Yes. Oh, I'm sorry, Ivan. Thank you for a nice talk. A comment: there is news about this, because I have new work with Sam Ranellucci and Ignacio Cascudo where we show that, in fact, the news is even better: you can have OT from any non-trivial parameters, so in your notation, any non-trivial choice of beta and alpha turns out to work. We also need to know alpha and beta. All I said was that our technique doesn't work there. Of course, yeah, sure, sure. You need something else, and I guess you developed that. Do you get constant rate? We can get constant rate just by using IPS-style tricks, but we don't try to optimize the constant. No, we don't, but that would be really interesting to do; we just know that we get some constant. All right. If there are no further questions, then we can move on to the next talk.