So, the next talk is by the same set of authors, Dakshita, Hemanta, and Amit, joined also by Daniel Kraschewski and Manoj Prabhakaran, and Dakshita will be giving the talk. It's on "All Complete Functionalities Are Reversible."

Thanks again for the introduction. So this is joint work with Hemanta and Amit and two other authors, Daniel Kraschewski, who is here in the audience, and Manoj Prabhakaran.

I'll start with a similar background on secure two-party computation. Suppose there are two parties who each have secret inputs, and they'd like to evaluate a function on these inputs. This function generates outputs for both parties and sends the outputs to them. If they had access to a trusted third party who did this job for them, things would be easy: even if one of them became adversarial, it would learn nothing about the input of the other party, and this would indeed be a secure realization of the function. But in the real world, parties may not have access to the trusted party, and therefore they need to algorithmically emulate it, so that even if one of the parties becomes adversarial, the other party's input still remains hidden. This turns out to be impossible in the plain model, and oblivious transfer is a necessary and sufficient assumption for secure 2PC, and for MPC in general.

So what is oblivious transfer? I already described it to most of you who attended the previous talk, but very quickly: there are two parties; the sender has inputs x0 and x1, and the receiver has a choice bit, which lets him choose which one of the sender's inputs he gets as output, and he learns nothing about the other input of the sender. So the security guarantee is that the sender doesn't learn the receiver's choice bit, and the receiver doesn't learn the sender's other message. And it is known that OT implies secure computation for all functionalities.

Now let me ask a natural question. Suppose two parties had access to a trusted party who computed for them a fixed functionality F; we will say that they both have oracle access to the ideal functionality F. But Alice and Bob are not interested in evaluating F. What they want to do is evaluate a different functionality G on a different pair of inputs, x* and y* for Alice and Bob respectively, and get a different pair of outputs. So the question we ask is: is it possible for them to securely realize G with just oracle access to F, such that they run some protocol on their inputs, make oracle calls to F in between, and send messages to each other, and at the end Alice ends up with output w* and Bob ends up with output z*? We will denote this by "F implies G."

The question of whether G can be securely realized with oracle access to F is a central question in MPC research, and it has been widely studied for the special case when G is the oblivious transfer functionality. In fact, in the previous talk as well, we were studying whether we can get OT from given functionalities. Recall that OT implies secure computation for all functionalities; therefore, if any functionality implies OT, it also implies secure computation of any functionality, and that's why this special case has been so interesting. It has been studied very widely in the literature: for deterministic functionalities, Kilian studied it first, and for channel functionalities it was studied by Crépeau and Kilian.
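To make the OT interface concrete, here is a minimal sketch of the ideal functionality as the trusted third party would compute it. The function name and types are my own illustration, not from the talk.

```python
# A minimal sketch of ideal 1-out-of-2 oblivious transfer, played by a
# trusted third party. Names and types are illustrative only.
def ideal_ot(x0: int, x1: int, c: int) -> int:
    # The sender inputs (x0, x1); the receiver inputs a choice bit c.
    # The trusted party reveals the return value to the receiver alone,
    # so the receiver learns x_c and nothing about the other message,
    # and the sender learns nothing about c.
    assert c in (0, 1)
    return (x0, x1)[c]
```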
And then there were lots of follow-up works. For the setting of general randomized functionalities, it was studied in a sequence of works, the most relevant of which is the recent work of Kraschewski, Maji, Prabhakaran, and Sahai, who show that any randomized functionality that fulfills certain properties can be used to perform OT, and that all functionalities which do not fulfill these criteria do not imply OT.

The focus of our work is a related question about realizing one functionality from another, and it's as follows. Suppose Alice and Bob have oracle access to the ideal functionality F. Can they use it to implement the reversed functionality, where Alice is the one who sends input y, Bob is the one who sends input x, Bob gets output w, and Alice gets output z? This sounds like a really natural question: when can you reverse a functionality securely?

If we pause for a minute: I just told you that there are lots of functionalities that we already know imply OT, and we also know that OT implies the secure computation of any functionality. So in particular, if you have OT, you can definitely realize the reverse of F. Doesn't that make this question trivial? Actually, it does not, because most prior works that realize OT from F make use of F in both directions; that is, they need both F and the reverse of F. So they don't answer our question at all. And for this work, obviously, we cannot assume that parties who have access to F also have access to the reverse of F. For this talk, we will say that a functionality is fixed-role when the roles of both parties are determined, and having access to F does not mean that you can access the reverse of F.

This question has been studied for the special case of oblivious transfer: Crépeau and Sántha, and later Wolf and Wullschleger, showed how to reverse oblivious transfer. But beyond oblivious transfer, it has hardly been studied. And it becomes more intriguing for highly asymmetric functionalities, where, for example, Alice has a powerful transmitter and Bob has a very weak receiver, and they use these to set up an erasure channel between themselves: Alice transmits bits to Bob, some of the bits get erased, Alice doesn't know which bits got erased, and when Bob gets an erasure, he doesn't know what Alice sent. Now it's a very interesting question whether it is possible to establish an erasure channel in the reverse direction, where Bob is the one sending the bits, Alice is the one whose received bits get dropped, Bob doesn't know which bits got dropped, and Alice doesn't learn the value of a bit that got dropped. And we want to do this using only the transmitter that Alice has and the weak receiver that Bob has. In fact, our results show that even in the setting of erasure channels, it is possible to reverse them.

To do that, the crux of the problem lies in using a functionality in a fixed direction to realize commitments both from Alice to Bob and from Bob to Alice. Indeed, if we are able to do this, then by prior work, if F implies commitments from A to B and from B to A, then F actually implies OT, without the need to reverse F; and this is in the malicious setting. Therefore F can be securely reversed.
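To make the fixed-role setting concrete, the sketch below writes the erasure channel as an ideal functionality usable in only one direction, together with the reversed interface we would like to realize. All names and types here are my own illustration.

```python
import secrets
from typing import List, Optional

Bit = int
Symbol = Optional[Bit]  # None stands for an erasure

def erasure_channel_A_to_B(alice_bits: List[Bit]) -> List[Symbol]:
    # Fixed-role ideal functionality F: Alice sends, Bob receives.
    # Each bit is erased independently with probability 1/3. Alice gets
    # no output, so she never learns which positions were erased, and
    # Bob sees only the possibly-erased symbols.
    return [None if secrets.randbelow(3) == 0 else b for b in alice_bits]

def erasure_channel_B_to_A(bob_bits: List[Bit]) -> List[Symbol]:
    # The reversed functionality: the goal is to securely realize this
    # interface using only oracle calls to erasure_channel_A_to_B plus
    # plain messages, never a direct channel from Bob to Alice.
    raise NotImplementedError
```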
So our goal will be to show that F implies these commitments for a large class of functionalities that we call non-simple. The actual technical definition of non-simple is slightly complicated, so I will just explain it with examples. A coin flip is a simple functionality: it samples a random bit b and outputs it to both parties. This is simple because all the information the parties have after the coin has been flipped is public; there is no hidden correlated randomness. On the other hand, our favorite functionality, a noisy channel, is a non-simple functionality: Alice sends some input m into the channel, it gets flipped with some probability, and neither Alice nor Bob has any clue whether the bit got flipped. So there is implicit hidden information in the channel, and it relates the views that both parties obtain at the end of the protocol. Kraschewski et al. showed that simple functionalities do not imply OT in the malicious setting, whereas non-simple functionalities do imply OT.

Again, recall that our main goal is to show that for all non-simple functionalities, one can obtain commitments in both directions. I'll illustrate our techniques for obtaining these commitments with the help of some examples.

In the first example, Alice and Bob have our favorite erasure channel, which erases Alice's bits with probability 1/3. Suppose Alice wants to use it to commit to a message m. This protocol is implicit in a lot of prior work, but I'm just building intuition. She picks a random vector D, applies an error-correcting code to it, and gets a codeword, which is a bunch of bits; she sends these over the erasure channel to the receiver. The receiver gets erasures in some places and receives the remaining bits reliably. Now, to commit to her message m, Alice computes a hash of the vector D, uses it to mask her message, and obtains a string c, which she sends to Bob in the clear as her commitment. That is the end of the commitment phase. During decommitment, Alice reveals both her message and the vector D to the receiver.

It turns out that this is already a secure protocol. The reason is that we choose an error-correcting code with minimum distance n/10, where n is the total number of bits, proportional to the security parameter. Bob got about n/3 erasures, whereas the code's redundancy can only account for about n/10 positions, so there is no way it can cover up all the erasures that Bob got. In fact, we can show that there is sufficient entropy in Bob's view about what Alice sent, so that using a good hash this entropy can be extracted, and we get statistical hiding.

What about binding? Suppose Alice wants to change her message m at the decommitment phase. To do that, she needs to change D, and in order to change D, she needs to change the codeword obtained by applying the error-correcting code to D. But the codeword bits she sent cannot be changed: the moment she tries to change even one bit, she gets caught with probability 2/3, because she doesn't know whether Bob received that bit or got an erasure, and Bob checks during decommitment whether she is claiming the correct bit. So for every bit that Alice lies about, she gets caught with constant probability.
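Here is a minimal sketch of that commit/decommit flow, under stated assumptions: a random linear code stands in for an error-correcting code with minimum distance around n/10, SHA-256 stands in for the hash, and the message is a single bit. None of these concrete choices come from the talk.

```python
import hashlib
import numpy as np

N, K = 300, 100  # codeword length n and length of the vector D (toy sizes)

# Random linear code over GF(2): a stand-in for a code whose minimum
# distance is around N/10 (a random code has good distance only with
# high probability; the real protocol would fix an explicit code).
G = np.random.randint(0, 2, size=(K, N))

def ecc_encode(d):
    return (d @ G) % 2

def erasure_channel(bits):
    # Ideal functionality, fixed direction Alice -> Bob: each bit is
    # erased (None) independently with probability 1/3.
    mask = np.random.randint(0, 3, size=len(bits))
    return [None if mask[i] == 0 else int(b) for i, b in enumerate(bits)]

def mask_bit(d, m):
    # Hash of D masks the one-bit message m.
    h = hashlib.sha256(bytes(d.tolist())).digest()
    return (h[0] & 1) ^ m

# Commit phase: Alice sends the codeword of a random D through the
# channel, then sends c = hash(D) XOR m in the clear.
m = 1                                  # Alice's secret bit
d = np.random.randint(0, 2, size=K)    # the random vector D
bob_view = erasure_channel(ecc_encode(d))
c = mask_bit(d, m)

# Decommit phase: Alice reveals (m, D); Bob re-encodes D, checks every
# unerased position against what he received, and checks the mask.
cw = ecc_encode(d)
ok = all(v is None or v == int(cw[i]) for i, v in enumerate(bob_view))
ok = ok and mask_bit(d, m) == c
print("accept" if ok else "reject")
```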
And a simple Chernoff bound tells us that if she tries to lie about more than n/10 bits, she gets caught with overwhelming probability, while if she lies about fewer than n/10 bits, the minimum distance of the error-correcting code ensures that she cannot actually change the vector D. Therefore Alice is committed to her message, and this is a secure commitment protocol.

Let's try to abstract out a property of this protocol, because we want to generalize it to other functionalities. Look at the matrix of this channel, where the rows are Alice's inputs and the columns are Bob's outputs. When Alice sends input 0, Bob obtains output 0 with probability 2/3 and an erasure with probability 1/3. When Alice sends input 1, Bob obtains an erasure with probability 1/3 and output 1 with probability 2/3. So we just write this down in matrix form. Before stating the property, note that we can replace Alice's inputs with her views: instead of writing Alice's inputs, we write her entire view, which is just the input she gave and the output the erasure channel gave her. Since Alice obtains no output from the channel, her views are just (0, ⊥) and (1, ⊥). Similarly we can write Bob's views; since Bob has no input, his views are (⊥, 0), (⊥, e), and (⊥, 1), where e denotes an erasure.

The property we abstract out is that there are two views of Alice, corresponding to input 0 and input 1, that intersect with the same view of Bob. Whenever Bob has that view, the erasure, he has no clue which of the two inputs Alice had, and this in particular is what gives hiding in the scheme.

So it seems that if we look for this property in any functionality, namely two Alice views that intersect with the same Bob view, we should be able to get commitments from general functionalities. But this is not true. Remember, we want commitments both from Alice to Bob and from Bob to Alice, and we want to use the erasure channel in the same direction, from Alice to Bob, to get a commitment from Bob to Alice. So let's look for the property I just mentioned, but now in order to get a commitment from Bob to Alice, we look for two views of Bob that intersect with the same view of Alice. In this example, when Bob got output 0 and when he got an erasure, Alice's view in both cases could have been input 0.

So it seems we can try to build a protocol: we ask Bob to restrict himself to these two views, output 0 and output erasure. He selects the indices with these views and tells Alice, hey, these are the views we'll work with, let's throw away everything else. Alice then restricts herself to her values at the same indices that Bob picked out. But there is already a problem with this protocol, and in fact, just saying that two Bob views intersect with the same Alice view does not work directly. Because Bob wants to violate binding eventually, instead of honestly keeping his erasures, he could pick only indices where he got 0s or 1s and never pick any erased index. Then it is easy for him to have gotten a 1 and later claim that he actually obtained an erasure, because a 1 output is much more informative than an erasure is.
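Written out explicitly, with e denoting an erasure and ⊥ an empty input or output (the notation is mine), the view matrix of this channel is:

```latex
\[
\begin{array}{c|ccc}
          & (\bot, 0) & (\bot, e) & (\bot, 1) \\ \hline
(0, \bot) & 2/3       & 1/3       & 0         \\
(1, \bot) & 0         & 1/3       & 2/3
\end{array}
\]
```

The middle column is the single Bob view that both of Alice's views intersect, which is exactly what the hiding argument uses; the first row is the single Alice view that two of Bob's views, (⊥, 0) and (⊥, e), both intersect, which is what the reverse-direction commitment tries to exploit.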
So if we let Bob throw away indices this way, there is a problem: he can fake an erasure by instead picking indices where he actually received a 0 or a 1, with the appropriate probabilities. I won't go into too much detail here; the point is that this translates into an actual attack on any commitment scheme we would devise this way. Therefore, to fix this, we will not let Bob throw away any of his views. Bob has to use all the views that he obtained, the 0s, the 1s, and the erasures, and Alice similarly has to work with all of her views. Moreover, Alice is going to check during decommitment that when Bob says he got some 1s, some 0s, and some erasures, these are in approximately the correct ratio. For example, if the erasure channel erases 1/3 of the indices, Bob should not be claiming that nearly all of his indices got erased, because then Alice knows something is wrong. So we add these kinds of checks.

Now, suppose Bob actually obtained an output 1 and tries to say that he obtained an erasure. The previous attack fails, because if he claims a received 1 as an erasure, he ends up claiming too many erasures, unless he also claims some of his actual erasures to not be erasures. So suppose he obtained an erasure and tries to claim that it was instead a 0 or a 1. Then he gets caught, because by the property of the erasure channel, when he has an erasure he has no clue whether Alice sent a 0 or a 1, so he can only guess. By this weight-balancing argument, it is possible to ensure both binding and hiding in this situation.

It turns out there are many more issues. For example, in this easy example Bob has no inputs, so he had no control over which views he obtained; if Bob did have an input, there turn out to be more complications, which we handle in the paper, and please refer to it for details. There is also another, more sophisticated example scenario with more problems, which needs an entirely different kind of protocol to handle. In the paper, we show that there are basically three main cases into which functionalities can fall, and for each of these cases we devise protocols. Please refer to our paper for the detailed protocols.

Now I'd like to conclude by summarizing our results. We show that given a fixed-role, non-simple functionality F, in the malicious setting, it is possible to obtain unconditionally secure commitments both from Alice to Bob and from Bob to Alice. It is also possible to obtain unconditional UC-secure computation without the need to use the reverse of the functionality. And finally, we show that any non-simple F, or basically any F that implies OT, can be reversed at constant rate.

Finally, the open problems. We show that any non-simple F can be reversed, but it is not clear whether all simple functionalities are reversible. So it is an open problem to study which simple functionalities are reversible, and thereby obtain a complete characterization of the functionalities that can be reversed. Moreover, as I mentioned, we achieve constant-rate reversal, but it would be interesting to see whether one copy of a functionality, that is, a single oracle access, suffices to get the reverse functionality.
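As one illustration of those ratio checks, here is a sketch of the test Alice might run on Bob's claimed views at decommitment. The square-root slack is an illustrative Chernoff-style tolerance, not the paper's exact bound.

```python
from math import sqrt

def views_plausible(claimed_views, n, erase_prob=1/3):
    # Bob must account for every one of the n indices; throwing views
    # away was exactly the attack, so a short list is rejected.
    if len(claimed_views) != n:
        return False
    # The number of claimed erasures must be close to the channel's
    # expected erasure count (None stands for a claimed erasure).
    claimed_erasures = sum(1 for v in claimed_views if v is None)
    return abs(claimed_erasures - erase_prob * n) <= 4 * sqrt(n)
```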
Indeed, for the special case of OT, this was shown to be possible by Wolf and Wullschleger, but we don't know whether it can be done for general functionalities, and that would be interesting. So thanks again, and let me know if you have any questions.

Thank you very much, two lovely talks. We have time for one or two questions. [Inaudible question from the audience.] Yes, yes, yes: randomized, non-reactive functionalities. Yeah, all right. Well, let's thank the speaker one more time.