Hello, everyone. Welcome to this talk. Today I'm going to talk about the computational hardness of optimal fair computation beyond Minicrypt. This is joint work with Hemanta Maji. So in this talk, we will be talking about coin-tossing protocols. A coin-tossing protocol is a two-party interactive protocol between Alice and Bob. We assume that the parties exchange a total of R messages, and we assume that the parties always agree on the output when the protocol ends. Fair coin tossing considers the following setting, where a malicious party may prematurely abort the execution of the protocol. Fairness guarantees that the honest party should always receive the output of the protocol. Therefore, when the malicious party aborts, the honest party should still output a default value as its output of the protocol. The unfairness is defined as how much a malicious party can deviate the expected output of the honest party. So let me first summarize the state-of-the-art results through the lens of Impagliazzo's five worlds. Impagliazzo proposed his famous five worlds based on which computational hardness assumptions are true and which are false. For example, Pessiland is a world where even one-way functions do not exist. Minicrypt is a world where one-way functions, and hence all the symmetric primitives such as commitment schemes, exist, but public-key primitives do not exist. And finally, Cryptomania is the world where all the primitives, such as public-key encryption and oblivious transfer, exist. Firstly, a two-party coin-tossing protocol can be thought of as a two-party zero-sum game. Hence, information-theoretically, there exists an attacker that imposes constant unfairness. And in his celebrated work, Papadimitriou showed that finding such attacks is PSPACE-complete.
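To make the notion of unfairness precise, here is one standard way to write it down (the talk does not fix notation, so take these symbols as illustrative):

```latex
% Unfairness of a coin-tossing protocol \pi: how far any adversary A
% can pull the honest party's expected output away from 1/2.
\[
  \mathrm{Unfairness}(\pi)
  \;=\;
  \max_{\mathcal{A}}
  \Bigl|\,
    \mathbb{E}\bigl[\text{honest party's output in } \langle \mathcal{A}, \pi \rangle\bigr]
    \;-\; \tfrac{1}{2}
  \,\Bigr| .
\]
```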
However, Haitner and Omri showed that if one-way functions do not exist, then you can simulate such attacks reasonably well and still achieve constant unfairness. Now consider a type of adversary called fail-stop adversaries: such adversaries behave honestly during the execution of the protocol, and their only malicious behavior is to prematurely abort. In a groundbreaking work, Cleve and Impagliazzo showed that even a fail-stop adversary can impose 1/√R unfairness. Intuitively, their result can be thought of as a converse to Azuma's inequality: if you have a martingale that goes from one-half to zero or one, there must exist a step where the jump in the expected output is at least 1/√R. In the 1980s, after a sequence of important works, we obtained the so-called majority protocol, which we know is 1/√R-unfair against fail-stop adversaries. So the majority protocol matches the lower bound proven by Cleve and Impagliazzo. And if you assume one-way functions exist, then you can use commitment schemes to upgrade such protocols to be secure against fully fledged adversaries. In another celebrated work, Cleve showed that any coin-tossing protocol is at least 1/R-unfair. Cleve's result is very strong in the sense that, regardless of what computational assumption you make, any coin-tossing protocol is at least 1/R-unfair. A 1/R-unfair coin-tossing protocol is called optimally fair coin tossing. For a long time, we did not know whether optimally fair coin tossing exists or not. And it was surprising when Moran, Naor, and Segev finally showed that, by relying on oblivious transfer, you can indeed construct optimally fair coin tossing. And finally, very recently, our CRYPTO 2020 work showed that any coin-tossing protocol using one-way functions in a black-box manner is at least 1/√R-unfair.
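The martingale statement behind the Cleve–Impagliazzo bound can be written roughly as follows (constants suppressed; this is a paraphrase of the intuition above, not the exact theorem statement):

```latex
% Let (X_0, X_1, ..., X_R) be a martingale with X_0 = 1/2 and X_R in {0,1},
% where X_i is the expected output conditioned on the first i messages.
% Then, with constant probability, some step jumps by Omega(1/sqrt(R)):
\[
  \Pr\Bigl[\,\exists\, i \in \{1,\dots,R\} :\;
    \bigl| X_i - X_{i-1} \bigr| \;\geq\; \Omega\!\bigl(1/\sqrt{R}\,\bigr)
  \Bigr] \;\geq\; \Omega(1).
\]
```

A fail-stop adversary who aborts exactly at such a big-jump step deviates the honest party's expected output by 1/√R, which is where the attack comes from.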
So let me now color-code this table into green cells and red cells. Here, the constructions in the green cells are protocols that are 1/√R-unfair, and the adversarial attacks in green are those that impose 1/√R unfairness. On the other hand, in the red cells, the MNS protocol is a protocol that achieves 1/R unfairness, and Cleve's attack is an attack that imposes 1/R unfairness. Given this state of the art, it is natural to ask whether oblivious transfer is necessary for optimally fair coin tossing. For example, can we construct 1/R-unfair coin tossing by relying on public-key encryption? It is also natural to ask whether there exists fair coin tossing with intermediate unfairness. For example, maybe by relying on public-key encryption, we can find a coin-tossing protocol that achieves 1/R^(3/4) unfairness, which is strictly between 1/R and 1/√R. In this work, we rule out such possibilities. Firstly, we show that any coin-tossing protocol that uses public-key encryption in a black-box manner is at least 1/√R-unfair. Additionally, we consider the setting where the parties not only have public-key encryption, but also have access to a trusted party realizing some, possibly randomized, functionality F; this is the so-called F-hybrid model. We show that as long as this F-hybrid does not facilitate oblivious transfer, any coin-tossing protocol in this model is still at least 1/√R-unfair. So we completely rule out the possibility of constructing a fairer coin-tossing protocol by relying on such assumptions. Note that this F-hybrid could potentially be useful for achieving various other tasks; here we are showing that the F-hybrid is completely useless for the task of fair coin tossing.
What I have shown you here might trick you into believing that we have resolved everything. Well, we haven't. So before I go further, let me stress what we did not prove. For example, one thing we did not prove is a set of oracles relative to which a secure protocol for F exists, but an optimally fair coin-tossing protocol does not. In other words, we did not prove a black-box separation between securely realizing F and optimal fair coin tossing; we only give the parties access to a trusted party realizing F. The difference between these two settings is that when the parties are given access to a set of oracles that facilitates the functionality F, they might not use these oracles only to evaluate F; they might use them in other ways. This is why the problem is very challenging, and we did not prove it. Let me stress that if one does prove such a black-box separation, then it implies a black-box separation between securely realizing an incomplete functionality F and oblivious transfer. This is one of the major open problems in the field, and it is incredibly challenging. What we prove in this work can be seen as partial progress towards this ultimate goal. Finally, let me compare our work with a relevant work by Haitner, Makriyannis, and Omri. They prove that for any constant R, the existence of an R-round coin-tossing protocol with unfairness less than 1/√R implies the existence of key-agreement protocols. So their result is incomparable to ours, as they prove a stronger consequence, but only for constant-round protocols. Now let me set up our model. In our work, we define a set of oracles O that facilitates public-key encryption.
That is, Alice and Bob have access to this oracle O, and additionally, they have access to a trusted party realizing F: they can send their respective inputs X and Y to the trusted party, and then receive the evaluation from the trusted party. It is important to note that this functionality F is realized unfairly. That means the adversary gets to receive the output first, and he may abort the protocol after receiving the output; by doing so, he blocks the output delivery to the honest party. The reason why we assume F is realized unfairly is that if you are given access to a fair functionality F, then perfectly fair coin tossing is possible. For example, suppose F is the XOR functionality, and consider this very simple protocol: Alice samples her input bit randomly, Bob also samples his input bit randomly, they send their inputs X and Y to the trusted party, receive the output from the trusted party, and output it as the output of the protocol. You can prove that this protocol is perfectly fair. Therefore, we always assume F is realized unfairly. Given this model, what we prove is that there exists a fail-stop adversary who can deviate the expected output of the honest party by 1/√R, and this adversary asks at most polynomially many additional queries to the oracle O. Our proof follows largely from our prior work, Maji and Wang 2020, where we presented a fail-stop attacker that deviates the expected output of the honest party by 1/√R for any coin-tossing protocol in the random oracle model. What we observe in this work is that their attacker generalizes to other settings, as long as one can ensure the following invariant: Alice's and Bob's private views are always close to independent, conditioned on the partial transcript.
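As a sanity check on the fair-versus-unfair point, here is a small Python sketch (my own illustration, not code from the paper) comparing the XOR protocol under fair delivery with the same protocol when F is realized unfairly: a malicious Bob sees z first and aborts whenever he dislikes it, and Alice then falls back to a fresh default coin.

```python
import random

def fair_xor_toss(trials=100_000, seed=0):
    """Fair F-hybrid: the trusted party delivers z = x XOR y to both parties.
    Returns the honest party's average output (~1/2, i.e., perfectly fair)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x, y = rng.randrange(2), rng.randrange(2)  # uniform inputs
        total += x ^ y                             # both parties output z
    return total / trials

def unfair_xor_toss(trials=100_000, seed=0):
    """Unfair F-hybrid: malicious Bob, who wants output 0, receives z first
    and aborts whenever z = 1, blocking delivery; Alice then outputs a fresh
    default coin. Bob drags the expected output from 1/2 down to ~1/4."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x, y = rng.randrange(2), rng.randrange(2)
        z = x ^ y
        if z == 0:
            total += z                  # Bob likes the output: deliver it
        else:
            total += rng.randrange(2)   # Bob aborts: Alice's default coin
    return total / trials
```

Under fair delivery the average output stays about 0.5 no matter what either party does, while the unfair variant shows a constant bias (about 0.25 here), which is exactly why the unfair realization of F is the setting where fairness is actually at stake.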
As long as this invariant holds throughout the execution of the protocol, their fail-stop attacker will work. For example, in the random oracle model, they ensure this invariant by relying on the well-known heavy-querier technique for the random oracle. Our separation from public-key encryption relies on a result of Mahmoody, Maji, and Prabhakaran from TCC 2014. In that work, they define a set of oracles that facilitates PKE, and they show that for any two-party interactive protocol between Alice and Bob, there is what they call a common-information learner that asks polynomially many queries to ensure that Alice's and Bob's private views are close to independent. Their result, together with the attacker from Maji and Wang 2020, gives the result that any coin-tossing protocol that uses public-key encryption in a black-box manner is at least 1/√R-unfair. Finally, we prove a dichotomy result for the F-hybrid model. Given any randomized functionality F, one can ask the following question: does the F-hybrid facilitate oblivious transfer or not? If the answer is yes, then such functionalities are called complete functionalities. Since you can implement OT in the F-hybrid, you can implement the MNS protocol. Recall that the MNS protocol is optimally fair by relying on oblivious transfer; so with access to an F-hybrid that facilitates OT, one can achieve optimally fair coin tossing in the F-hybrid model. On the other hand, if the answer to this question is no, that means the functionality F is incomplete. Then, a beautiful work of Kilian gives a characterization showing that for any such incomplete functionality F, conditioned on the partial transcript, Alice's and Bob's private views are always independent. So the invariant always holds, and hence the Maji–Wang attacker works.
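The invariant that the attack needs can be stated in terms of statistical distance (again in illustrative notation of my own, since the talk does not write it out):

```latex
% V_A, V_B: Alice's and Bob's private views; tau: the partial transcript.
% Invariant: conditioned on tau, the joint view is epsilon-close to the
% product of its marginals, i.e., the views are close to independent.
\[
  \mathrm{SD}\Bigl(
    \bigl(V_A, V_B\bigr)\big|_{\tau}\,,\;
    \bigl(V_A\big|_{\tau}\bigr) \otimes \bigl(V_B\big|_{\tau}\bigr)
  \Bigr) \;\leq\; \epsilon .
\]
```

In the random oracle model the heavy-querier technique enforces this with some small epsilon; for incomplete functionalities, Kilian's characterization gives it with epsilon equal to zero.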
So any coin-tossing protocol in the F-hybrid, where F is incomplete, is at least 1/√R-unfair. Therefore, in the F-hybrid model there are only two possibilities: if F is complete, then an optimally fair coin-tossing protocol exists; and if F is incomplete, then the F-hybrid is completely useless for the task of fair coin tossing. Let me provide some additional perspective on why this result could be technically challenging. For an incomplete functionality F, there are two possibilities. First, there might exist a T-round secure protocol for F. Then you might want to replace the F-hybrid with this T-round protocol that realizes F. However, once you replace the one-round access to the F-hybrid with a T-round secure protocol, the round complexity of the protocol blows up by a factor of T. Hence, just ruling out optimally fair coin tossing in the plain model is not sufficient to prove that optimal fair coin tossing is impossible in the F-hybrid model, where F might have a T-round secure protocol. On the other hand, for an incomplete functionality F, there might not exist a secure protocol for F at all, and such an F-hybrid has been shown to be useful for other tasks. For example, Rosulek and Shirley show that the F-hybrid could be useful for securely realizing some other functionality G. And here, for the task of fair coin tossing, we show that the F-hybrid is completely useless. So with that, I'd like to conclude my talk here, and I will refer you to the full version of our paper for more details. Thank you.