Hello everyone, I'm Susumu Kiyoshima from NTT Research. I'll talk about my work on black-box impossibilities of two-round weak zero knowledge and strong witness indistinguishability.

The focus of this work is the prover privacy of interactive proofs and arguments. That is, we consider interactive proofs and arguments that satisfy not only completeness and soundness but also some kind of security against cheating verifiers. The most important prover privacy notion in cryptography is of course zero knowledge, which guarantees that for any malicious verifier there exists a simulator such that the view computed by the simulator is indistinguishable from the view that the verifier obtains when it interacts with the prover in a real interaction. A good point about zero knowledge is that it guarantees really strong security, and because of this powerful security, zero knowledge has been used in a huge number of applications in cryptography. However, a problem with zero knowledge is that it is too strong to achieve in two rounds. In particular, it is known that two-round zero-knowledge proofs and arguments cannot exist for any non-trivial language, and this impossibility holds even when we consider non-black-box zero knowledge.

To avoid this impossibility, previous works have considered several weak prover privacy notions that are weak enough to avoid the impossibility but strong enough to be useful in many applications. The most well-known example of such a weak prover privacy notion is witness indistinguishability (WI), which guarantees that for any statement and any two witnesses, a proof generated with the first witness is indistinguishable from a proof generated with the second witness. WI is indeed much weaker than zero knowledge; for example, it does not guarantee any security for languages with unique witnesses. Still, WI is known to be useful as a building block, and in addition, it is known that WI can be achieved in two rounds under standard assumptions. Besides WI, there are several more examples of weak prover privacy notions; basically, all of them are stronger than WI but weaker than zero knowledge. In this work, we focus on two of them: strong WI and weak zero knowledge.

First, what is weak zero knowledge? Roughly speaking, weak zero knowledge is obtained by switching the order of quantifiers in the definition of zero knowledge. Recall that standard zero knowledge requires that for any verifier there exists a simulator such that for any distinguisher, the real view and the simulated view are indistinguishable. In contrast, weak zero knowledge only requires that for any verifier and any distinguisher, there exists a simulator such that the real and simulated views are indistinguishable. So the difference from standard zero knowledge is that the simulator is now allowed to depend on the distinguisher. In applications, weak zero knowledge is useful when, for example, it is used as a building block of some larger protocol that has an indistinguishability-based security definition. This is because in such applications we typically only need to show that the output of a certain intermediate hybrid does not change when we switch the real zero-knowledge execution to a simulation, and by considering such a hybrid as the distinguisher, we can replace standard zero knowledge with weak zero knowledge.
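To make the quantifier switch concrete, here is a schematic side-by-side of the two definitions; this is my own informal rendering, with auxiliary inputs and exact asymptotics omitted.

```latex
% Standard zero knowledge: one simulator must fool every distinguisher.
\forall V^{*}\ \exists S\ \forall D:\quad
  \bigl|\Pr[D(\mathsf{view}_{V^{*}}\langle P(w),V^{*}\rangle(x))=1]
       -\Pr[D(S(x))=1]\bigr| \le \mathsf{negl}(|x|)

% Weak zero knowledge: the simulator may depend on the distinguisher.
\forall V^{*}\ \forall D\ \exists S:\quad
  \bigl|\Pr[D(\mathsf{view}_{V^{*}}\langle P(w),V^{*}\rangle(x))=1]
       -\Pr[D(S(x))=1]\bigr| \le \mathsf{negl}(|x|)
```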
Next, what is strong WI? Roughly speaking, strong WI guarantees that proofs about two indistinguishable statements are also indistinguishable. Specifically, it guarantees that for any two distributions over statement-witness pairs, if the statements sampled from these two distributions are indistinguishable, then the proofs generated for these statements are also indistinguishable. A key difference from standard WI is that strong WI is meaningful even when we consider relations with unique witnesses, so strong WI can be used, for example, to prove statements about statistically binding commitments.

In previous works, it has been shown that we can obtain two-round constructions for both weak zero knowledge and strong WI. In particular, two-round constructions are obtained both in the delayed-input setting and in the standard non-delayed-input setting, where in the standard setting the statement is fixed at the beginning of the protocol, whereas in the delayed-input setting the statement is chosen by the prover in the last round of the protocol. What is important about these two settings is that the delayed-input setting is not strictly stronger than the standard non-delayed-input setting. This is because, although the delayed-input setting considers soundness against a stronger cheating prover, who chooses the statement to prove adaptively in the last round, it considers prover privacy against a weaker cheating verifier, who needs to choose its first-round message without knowing the statement. So when we prove results in these two settings, we need to consider them separately.

Although these existing positive results on two-round constructions are really great, a weak point about them is that all of them are based on superpolynomial hardness assumptions, such as assumptions against subexponential-time adversaries. This is in contrast to the case of standard WI, because for standard WI we have two-round protocols from polynomially hard assumptions, such as trapdoor permutations. So in this work, we study whether the use of superpolynomial hardness is necessary for two-round weak zero knowledge and strong WI.

Now let me explain our results. At a high level, we show that it is impossible to obtain two-round weak zero knowledge and strong WI using standard techniques and standard polynomially hard assumptions. To state the results more formally, we first need to formalize what "standard techniques" and "standard assumptions" mean, and in this work we formalize them by using the notions of black-box reductions and falsifiable assumptions, respectively. So what is a falsifiable assumption? Roughly speaking, an assumption is called falsifiable if it is modeled as an interactive game between a polynomial-time challenger and a polynomial-time adversary. You can easily check that essentially any assumption that is considered standard in cryptography is indeed falsifiable; examples include one-way functions, collision-resistant hash functions, and the RSA, DDH, and LWE assumptions. Next, what is a black-box reduction? First, consider the case of soundness, and consider the setting where we are trying to prove the soundness of some interactive argument based on some assumption. Then a black-box reduction for soundness is a polynomial-time oracle machine such that, for any cheating prover that breaks the soundness of the protocol, the reduction can break the underlying assumption by using the cheating prover as an oracle. So essentially, the reduction is black box in the sense that it uses the cheating prover just as a black box.
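As a rough illustration of these two formalizations, here is a minimal interface sketch in Python; the names (`FalsifiableAssumption`, `play`, `run`) are hypothetical and only meant to show the shape of the definitions, not any actual interface from the paper.

```python
from typing import Callable, Protocol

# Messages exchanged in the interactive games are modeled simply as bytes.
Adversary = Callable[[bytes], bytes]   # maps the next incoming message to a reply

class FalsifiableAssumption(Protocol):
    def play(self, adversary: Adversary) -> bool:
        """A polynomial-time challenger interacts with `adversary` and finally
        decides whether the adversary has won, i.e., broken the assumption."""
        ...

class BlackBoxSoundnessReduction(Protocol):
    def run(self, assumption: FalsifiableAssumption,
            cheating_prover: Adversary) -> bool:
        """A polynomial-time oracle machine that may only feed messages to
        `cheating_prover` and observe its replies (possibly rewinding it).
        Whenever `cheating_prover` breaks soundness of the protocol, this call
        should win the assumption's game with noticeable probability."""
        ...
```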
Of course, black-box reductions can be defined for other security notions as well; in particular, black-box reductions for prover privacy notions such as weak zero knowledge and strong WI can be defined similarly. Note that black-box reductions are used very widely in cryptography; actually, almost all security proofs in cryptography use black-box reductions, if only implicitly.

Now I'm ready to explain our results, and let me start with the case of weak zero knowledge. First, note that the impossibility of three-round black-box zero knowledge also holds for weak zero knowledge in the standard setting, so we already have a black-box impossibility for two-round weak zero knowledge in the standard non-delayed-input setting. In this work, we give a black-box impossibility in the delayed-input setting. In particular, we show that two-round weak zero knowledge is impossible if soundness is proved by a black-box reduction based on a falsifiable assumption. Note that this result holds even when non-black-box techniques are used in the proof of weak zero knowledge.

Next, let me explain our results on strong WI. First, we consider the standard setting and show that two-round strong WI in the standard non-delayed-input setting is impossible if strong WI is proved by a black-box reduction based on a falsifiable assumption; this result holds even when non-black-box reductions are used in the proof of soundness. Next, we consider the delayed-input setting and show that two-round strong WI in the delayed-input setting is impossible if either (i) strong WI is proved by a black-box reduction based on a falsifiable assumption and the protocol is publicly verifiable, or (ii) both soundness and strong WI are proved by black-box reductions based on falsifiable assumptions.

So in this work, we show impossibility for the case where strong WI is proved by a black-box reduction, but I need to note that these results actually require that the black-box reductions for strong WI be black box in a strong sense. In particular, we require that they satisfy a property called obliviousness, which is defined as follows. Recall that in strong WI, we compare two proofs where the statements are sampled from two different distributions. So when we consider a malicious verifier against strong WI, we also need to specify the two distributions from which the statement-witness pairs are sampled. Then, roughly speaking, a black-box reduction is called oblivious if it is black box not only with respect to the verifier but also with respect to the distributions. In particular, we require that for any two distributions over statement-witness pairs and for any verifier against strong WI, if the verifier breaks strong WI with respect to these two distributions, then the reduction breaks the underlying assumption or distinguishes the distributions over statements. So the only additional restriction that we impose here is that a single reduction works for all distributions. This restriction is actually pretty natural; in particular, all the reductions that we know of for strong WI satisfy this restriction, because we currently don't know any techniques that use non-black-box access to the distributions in a non-trivial way. So even with this restriction, we believe that our impossibility results are still meaningful.
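Schematically, the obliviousness requirement just described can be written as follows (my own informal notation; "breaks" and "distinguishes" mean with noticeable advantage). The key point is that the single reduction R is quantified before the distributions.

```latex
\exists R\ \ \forall (D_0, D_1)\ \ \forall V^{*}:\quad
  V^{*}\ \text{breaks strong WI w.r.t.}\ (D_0, D_1)
  \;\Longrightarrow\;
  R^{V^{*}}\ \text{breaks the assumption}
  \;\lor\;
  R^{V^{*}}\ \text{distinguishes}\ \{x : (x,w)\leftarrow D_0\}\ \text{from}\ \{x : (x,w)\leftarrow D_1\}
```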
Okay, so these are our results; next, let me explain what they imply. First, the bad news is that to obtain two-round protocols under polynomially hard assumptions, non-black-box techniques are necessary, both for weak zero knowledge and for strong WI, and in both the delayed-input setting and the standard non-delayed-input setting. Next, the good news is that two-round weak zero knowledge and strong WI under polynomially hard assumptions are still not ruled out. This is because it might be possible to avoid our impossibility by using non-black-box techniques. In particular, the positive result by Bitansky, Khurana, and Paneth uses non-black-box techniques for prover privacy in the standard setting, and even though their techniques are currently based on superpolynomial hardness, we do not know whether the use of superpolynomial hardness in such a setting is essential. So it might be possible to obtain positive results in such a setting by improving the techniques.

All right, in the rest of this talk I'll explain our techniques; let me first explain the techniques for the result on weak zero knowledge and then the techniques for the result on strong WI. For weak zero knowledge, we obtain our impossibility in two steps. In the first step, we use the result by Chung, Lui, and Pass to observe that weak zero knowledge implies another weak form of zero knowledge. The precise definition of this intermediate notion is not important for this talk; what is important is that it is defined with the same order of quantifiers as standard zero knowledge, namely in the form "for any verifier, there exists a simulator such that for any distinguisher, ...". Then, in the second step, we observe that since this intermediate notion is defined with the same order of quantifiers as zero knowledge, we can obtain a black-box impossibility of its two-round delayed-input version relatively easily by using techniques from previous black-box impossibility results for other two-round protocols, such as SNARGs. So actually the proof of this result does not use many new techniques; essentially, the main point is that it is already known that the order of quantifiers in the definition of weak zero knowledge can be switched back to the normal order.

So in this talk, I won't explain the techniques for this result in more detail; rather, in the rest of this talk I will focus on explaining our technique for proving the black-box impossibility of two-round strong WI. First, for simplicity, let us consider the case of non-interactive strong WI. For non-interactive strong WI, the formal statement that we prove is the following: assume the existence of CCA-secure public-key encryption; then there exists a language L in NP that satisfies the following: if the strong WI of a non-interactive argument for L is proved by using an oblivious black-box reduction based on a falsifiable assumption, then that assumption must be false. In other words, we show that unless we use false assumptions, we cannot obtain non-interactive strong WI for L using oblivious black-box reductions and falsifiable assumptions.
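In schematic form, the statement just described reads as follows (my own compact rendering):

```latex
\text{CCA-secure PKE exists}
\;\Longrightarrow\;
\exists L \in \mathsf{NP}\ \text{s.t. for every falsifiable assumption } A:\;
\Bigl[\text{strong WI of a non-interactive argument for } L
      \text{ is proved via an oblivious black-box reduction to } A\Bigr]
\;\Longrightarrow\;
A\ \text{is false}
```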
Now, to prove this statement, we use the well-known meta-reduction technique. Recall that our goal is to show that if there exists a black-box reduction for non-interactive strong WI, then the underlying assumption must be false. Toward showing this, we show that any such black-box reduction can be used to efficiently break the assumption, which implies that the assumption is false, as desired.

Let me explain this approach in a little more detail. First, recall that an oblivious black-box reduction for strong WI guarantees that for any two distributions over statement-witness pairs and for any verifier against strong WI, if the verifier successfully breaks strong WI, then either the reduction breaks the underlying assumption or it distinguishes the distributions over statements. Then, in the first step of our proof, we design a specific language, distributions over statement-witness pairs, and a verifier against strong WI such that, even though this particular verifier successfully breaks strong WI, the reduction cannot distinguish the distributions over statements when it is given access to this particular verifier. Once we show this, it follows from the obliviousness of the reduction that, when the reduction is given access to this particular verifier, it must break the underlying assumption, since it cannot distinguish the distributions over statements. Then, in the second step of our proof, we show that the reduction and the verifier that we have designed can actually be executed in polynomial time. This means that we can break the underlying assumption in polynomial time, so we can conclude that the assumption must be false, as desired.

Now let me explain each step in more detail, starting with the first step. Recall that in this step our goal is to design a language, distributions, and a verifier such that the verifier successfully breaks strong WI but the reduction cannot distinguish the distributions over statements. We consider the setting where the reduction is given a statement from the challenger and tries to guess from which distribution the statement was sampled, and where the reduction is given access to the verifier as an oracle. As a warm-up, let us first consider a simple case where, after receiving the statement from the challenger, the reduction simply forwards the statement to the verifier along with some proof that it created by itself. Now, the first observation is that, without loss of generality, we can assume that the verifier aborts if the reduction sends a proof that is not accepting. This is because a cheating verifier against strong WI is supposed to interact with the honest prover, which always gives an accepting proof, so a verifier can break strong WI even if it aborts whenever it receives a proof that is not accepting. This observation implies that, for the reduction, access to the verifier is useless unless it sends an accepting proof to the verifier. Now, the key point is that if the reduction can indeed generate an accepting proof for the statement that it received from the challenger, then it can be used to break soundness. This is because, if we consider a language such that true and false statements are indistinguishable, then it follows that even when we give a false statement to the reduction, the reduction still sends an accepting proof to the verifier, and this clearly contradicts soundness. So in the simple case where the reduction simply forwards the statement from the challenger to the verifier, we can show that access to the verifier is useless. However, the reduction does not necessarily simply forward the statement to the verifier; in general, it can send any statement to the verifier.
This statement might be somehow correlated with the statement that the reduction receives from the challenger. To handle this case, we design a language and distributions that have some kind of non-malleability by using CCA-secure encryption. Specifically, we consider the language L that consists of public-key/ciphertext pairs of a CCA-secure public-key encryption scheme such that either 0 or 1 is encrypted in the ciphertext. Also, for each public key pk and binary value b, we consider the distribution D_{pk,b} that outputs a random encryption of b under the public key pk. Note that it is easy to see that D_{pk,b} is a distribution over true statements of the language L.

Now, given this language and these distributions, the question we consider is whether, for a random public key pk, the reduction can distinguish the distributions D_{pk,0} and D_{pk,1} when it is given access to a successful cheating verifier for these distributions. The answer to this question is no. First, we can design a specific cheating verifier that breaks strong WI by using the decryption oracle of the CCA-secure encryption scheme. Indeed, for the particular language and distributions that we have designed, a verifier against strong WI is required to distinguish a proof generated for an encryption of 0 from a proof generated for an encryption of 1, and by using the decryption oracle, the verifier can easily distinguish these two cases by just querying the statement to the decryption oracle. Then, since this means that making a query to this particular cheating verifier is essentially equivalent to making a query to the decryption oracle, the security of the CCA-secure encryption scheme directly implies that access to this particular cheating verifier is useless for distinguishing the distributions D_{pk,0} and D_{pk,1}, as desired.

So now let's go to step two. From step one, we know that for a random public key pk, we have a successful cheating verifier for the distributions D_{pk,0} and D_{pk,1} such that the reduction, given access to this verifier, must break the assumption rather than distinguish the distributions over statements. Now, the key observation is that, for any public key pk, the verifier that we have designed can be executed in polynomial time by using the corresponding secret key sk; this is because the verifier that we have designed only uses the decryption oracle. So we can conclude that we can efficiently break the assumption by combining the reduction with our verifier, and hence that the assumption must be false, as desired. This completes the proof for the case of non-interactive strong WI.

From now on, I will briefly explain how we can extend our technique from non-interactive strong WI to two-round strong WI. Let me first explain the case of the standard non-delayed-input setting. It turns out that the only problem we have in this setting is that, in the case of two-round strong WI, the reduction might rewind the verifier; in particular, the reduction might request the verifier to reuse its randomness for different statements. Fortunately, in the standard non-delayed-input setting this problem can be solved very easily. The key point is that in this setting the statement is determined before the first verifier message, so by having the verifier compute its randomness by applying a PRF to the statement, we can effectively prevent the reuse of randomness across different statements. With this modification, we can prove the impossibility of two-round strong WI in the standard setting just like for non-interactive strong WI.
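To tie the pieces together, here is a toy sketch in Python; it is my own illustration with placeholder names (`PKE`, `sample_D`, `cheating_verifier`, `meta_reduction`), not code from the paper, and the `PKE` interface stands in for an arbitrary CCA-secure public-key encryption scheme.

```python
import hashlib
import hmac
from typing import Callable, Protocol, Tuple

Statement = Tuple[bytes, bytes]  # (pk, ciphertext): "the ciphertext encrypts 0 or 1 under pk"

class PKE(Protocol):
    """Placeholder interface for an arbitrary CCA-secure public-key encryption scheme."""
    def keygen(self) -> Tuple[bytes, bytes]: ...        # returns (pk, sk)
    def encrypt(self, pk: bytes, bit: int) -> bytes: ...
    def decrypt(self, sk: bytes, ct: bytes) -> int: ...

def sample_D(pke: PKE, pk: bytes, b: int) -> Statement:
    """The distribution D_{pk,b}: a fresh encryption of the bit b, i.e., a true statement of L."""
    return (pk, pke.encrypt(pk, b))

def cheating_verifier(statement: Statement, proof: bytes,
                      accepts: Callable[[Statement, bytes], bool],
                      decrypt_oracle: Callable[[bytes], int]) -> int:
    """Breaks strong WI w.r.t. (D_{pk,0}, D_{pk,1}): on an accepting proof it outputs the
    encrypted bit, learned via a single decryption query; otherwise it aborts, which is
    without loss of generality because the honest prover always convinces it."""
    _pk, ct = statement
    if not accepts(statement, proof):
        return -1  # abort
    return decrypt_oracle(ct)

def meta_reduction(pke: PKE, reduction, assumption_challenger, accepts) -> bool:
    """Step 2: the whole experiment runs in polynomial time, because knowing sk lets us
    emulate the decryption oracle, and hence the cheating verifier, efficiently."""
    pk, sk = pke.keygen()  # pk determines the pair of distributions D_{pk,0}, D_{pk,1}
    oracle = lambda statement, proof: cheating_verifier(
        statement, proof, accepts, lambda ct: pke.decrypt(sk, ct))
    # By CCA security, oracle access to this verifier does not help the reduction
    # distinguish D_{pk,0} from D_{pk,1}, so obliviousness forces it to break the assumption.
    return reduction(assumption_challenger, oracle)

def verifier_first_round_randomness(prf_key: bytes, statement: bytes) -> bytes:
    """In the two-round standard (non-delayed-input) setting, the verifier derives its
    randomness by applying a PRF to the statement (HMAC-SHA256 as a stand-in), which
    prevents a rewinding reduction from reusing randomness across different statements."""
    return hmac.new(prf_key, statement, hashlib.sha256).digest()
```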
Next, let's consider two-round strong WI in the delayed-input setting. The problem is the same as before, namely that the reduction might request the verifier to reuse randomness for different statements, but since in the delayed-input setting the statement is chosen after the verifier message, we cannot solve this problem by using a PRF anymore. In this setting, we use a slightly different approach from the case of non-interactive strong WI; in particular, we show that strong WI implies a weak form of weak zero knowledge, and then we reuse our impossibility result for weak zero knowledge. The details of this approach are a bit involved, so I won't explain them in this talk; if you are interested, they can be found in the paper.

Finally, let me give a conclusion. In this work, we give black-box impossibilities for obtaining two-round weak zero knowledge and strong WI from polynomially hard assumptions. Our takeaway message is that if you want to obtain two-round weak zero knowledge or strong WI from polynomial hardness, you have to use non-black-box techniques at least either for soundness or for prover privacy. This concludes my talk. Thank you for listening.