Welcome to the session on zero-knowledge protocols. The first talk is about the existence of three-round zero-knowledge proofs, by Nils Fleischhacker, Vipul Goyal, and Abhishek Jain, and Nils will be giving the talk.

Okay, thanks for the introduction. So, the round complexity of zero-knowledge proofs has seen a lot of research over the years. I want to emphasize, and don't worry, it happens every talk, that I'm talking about proofs here, not arguments. So we're talking about protocols that are sound even against unbounded provers. We've known a couple of things about the round complexity for a while. Goldreich and Oren showed that two-round zero-knowledge proofs basically cannot exist for interesting languages: the only languages with two-round proofs are in BPP, which are languages where you can verify membership on your own anyway, so you don't need a prover. On the other end of the spectrum, we have the result of Goldreich and Kahan, which showed that five-round proofs do exist for all of NP. In the middle, we have the question: do three- or four-round proofs exist for all of NP? Katz showed in 2008 that if you only look at black-box simulation, then four-round proofs, and thereby also three-round proofs, cannot exist for all of NP; they can only exist for languages in coMA. That still left open the question of non-black-box simulation. And there's a very recent result from last year's Crypto by Kalai et al. that showed that for public-coin protocols, under certain strong assumptions on obfuscation, namely that sub-exponentially secure IO exists and that a special kind of obfuscation called input-hiding obfuscation for multi-bit point functions exists, one can rule out public-coin protocols for any constant number of rounds. In the sense that, again, these protocols can only exist for languages in BPP. This still leaves the question: what about non-black-box, private-coin protocols? Those are not ruled out by any of this. And this is where our result comes in. We rule out three-round protocols even in the private-coin, non-black-box simulation setting. Specifically, assuming sub-exponentially secure IO, sub-exponentially secure puncturable PRFs, and exponentially secure input-hiding obfuscation for multi-bit point functions, we show that private-coin three-round zero-knowledge proofs can only exist for languages in BPP.

This, of course, leads to the natural question: what about four rounds? We do not expect that our techniques can be extended to four rounds. The reason is that there exists a weaker notion of zero knowledge, called epsilon zero-knowledge, in which the simulator is no longer required to simulate negligibly closely; instead, the simulator's output may be distinguishable from the real protocol's output with advantage up to epsilon. Our result extends to this weaker notion, which means we also rule out three-round epsilon zero-knowledge protocols. However, four-round epsilon zero-knowledge can be instantiated by a protocol due to Bitansky et al. from multi-collision-resistant hash functions. Now, as far as I know, we don't know whether multi-collision-resistant hash functions can actually be instantiated from standard assumptions. But nevertheless, it's a technical hurdle that would need to be overcome to extend our result, so that seems unlikely at the moment.
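As a rough aside (my notation, not taken verbatim from the talk or the paper, and the exact quantifiers may differ in detail), epsilon zero-knowledge relaxes the usual requirement to something like the following: for every PPT verifier V* there is a simulator S, possibly depending on epsilon, such that for every PPT distinguisher D, every x in the language with witness w, and every auxiliary input z,

```latex
\Bigl|\Pr\bigl[D\bigl(\mathsf{view}_{V^*}\langle P(x,w),\,V^*(x,z)\rangle\bigr)=1\bigr]
     -\Pr\bigl[D\bigl(S(x,z)\bigr)=1\bigr]\Bigr|
\;\le\; \epsilon + \mathrm{negl}(\lambda),
```

whereas standard zero-knowledge is the special case where the bound is just negligible.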
So how do the proofs, both our proof and the proof of Kalai et al., actually work? The basic idea is round compression: we take a three-round protocol and compress it down to a protocol with fewer rounds. Why do we want to do that? The first idea is that ruling out a protocol with fewer rounds should be easier, because such protocols are harder to construct. So if we can rule out two-round protocols, and we can compress a three-round protocol into a two-round protocol, then we've also ruled out three-round protocols. One might now think that we're done: we can compress the three-round proof, and two-round proofs are already ruled out by Goldreich and Oren. The problem is that life's not that simple. In all of these compressions, you usually lose in the soundness of the protocol, meaning that what you get is no longer a proof; it's actually an argument. It's no longer sound against unbounded attackers, only computationally sound. So we need to take a different path to get a contradiction. The path that we take is to show that if pi, the original three-round proof, is sound, then pi prime is a sound argument; and pi prime being a sound argument implies that the original protocol, pi, is not zero-knowledge. Thereby the soundness of pi itself implies that pi cannot be zero-knowledge, unless the language is in BPP, that is.

Which leads to the question: how do we actually compress protocols? To compress a protocol, we somehow have to remove interaction from it. Basically, the only way to do that is to move computation around. We could try to move computation from the prover to the verifier to eliminate the first communication step, but that seems hard, because the prover already uses her witness there. So all we can do is the other way around: move computation from the verifier to the prover. And we actually know a compression technique of this kind: Fiat-Shamir works exactly like that. If we look at public-coin protocols, where beta is not chosen in some arbitrary way but is just uniformly random coins, then we can apply Fiat-Shamir, which basically means we drop the first message. Instead of sampling beta, we sample a hash function from a hash function family and send this hash function over to the prover. Now the prover can compute the whole proof on her own: she computes alpha, derives beta by applying the hash function, computes the response gamma, and sends both alpha and gamma over to the verifier, who can then verify the proof. In the random oracle model, we know that this preserves soundness. But crucially, Kalai et al. showed that this is not only possible in the random oracle model: you can actually instantiate this hash function using obfuscation and puncturable PRFs. Namely, what you do is simply obfuscate a puncturable PRF.
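To make this concrete, here is a minimal Python sketch of the compressed two-message protocol in the public-coin case. Everything named here is illustrative: `prover` and `verify` are hypothetical interfaces to the underlying three-round protocol, HMAC-SHA256 merely stands in for the puncturable PRF, and the obfuscation step is omitted entirely (the actual transformation sends an obfuscated evaluation circuit, never the raw key, which is exactly what makes it secure).

```python
import hashlib
import hmac
import secrets

PRF_KEY_LEN = 32


def prf(key: bytes, alpha: bytes) -> bytes:
    """Stand-in for the puncturable PRF; HMAC-SHA256 is purely illustrative."""
    return hmac.new(key, alpha, hashlib.sha256).digest()


def verifier_first_message() -> bytes:
    """The verifier's single message in the compressed protocol.

    In the real transformation the verifier sends an *obfuscated* circuit
    computing alpha -> PRF(key, alpha); sending the raw key, as done here,
    is of course insecure and only keeps the sketch short.
    """
    return secrets.token_bytes(PRF_KEY_LEN)


def prover_message(key: bytes, prover) -> tuple[bytes, bytes]:
    """The prover now runs all three original rounds locally."""
    alpha = prover.first_message()   # original first message
    beta = prf(key, alpha)           # verifier's challenge, derandomized
    gamma = prover.respond(beta)     # original third message
    return alpha, gamma


def verifier_decision(key: bytes, alpha: bytes, gamma: bytes, verify) -> bool:
    """Recompute the challenge and run the original verification check."""
    beta = prf(key, alpha)
    return verify(alpha, beta, gamma)
```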
Of course, this works in the public-coin case. In the private-coin case, it's trickier, because the verifier does not simply sample random coins; it performs some arbitrary computation. However, look at what this instantiation of the hash function actually does. The verifier would usually sample its coins at random; the hash function just samples those coins pseudo-randomly. So basically, this is just a de-randomized version of an honest verifier. Once we notice that, we can apply the same thing to a private-coin protocol. We just look at a de-randomized circuit of the verifier, which samples a random tape pseudo-randomly using a hard-coded PRF key and then honestly computes whatever the verifier would normally compute. Then we do the same transformation: drop the first message, obfuscate this circuit, send the circuit over, and now the prover can evaluate it herself. So this is the compression.
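A hedged sketch of the circuit being obfuscated in this private-coin case, in the same illustrative style as above (`next_message` is a hypothetical interface to the honest verifier's second-message function, and HMAC-SHA256 again stands in for the puncturable PRF):

```python
import hashlib
import hmac


class DerandomizedVerifier:
    """Sketch of the circuit the verifier obfuscates in the private-coin case.

    A real instantiation hard-codes a puncturable PRF key and wraps the
    whole circuit in indistinguishability obfuscation before sending it.
    """

    def __init__(self, prf_key: bytes, next_message):
        self.prf_key = prf_key        # hard-coded PRF key
        # Hypothetical interface: (alpha, random_tape) -> beta, the honest
        # verifier's second message given its random tape.
        self.next_message = next_message

    def __call__(self, alpha: bytes) -> bytes:
        # Sample the verifier's random tape pseudo-randomly from alpha...
        random_tape = hmac.new(self.prf_key, alpha, hashlib.sha256).digest()
        # ...then honestly compute whatever the verifier would compute.
        return self.next_message(alpha, random_tape)
```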
The question, of course, is how we prove, using this compression, that the three-round proof cannot be zero-knowledge. We need to prove two things. Given our original proof pi and the compressed argument pi prime, we need to prove, first, that the soundness of pi prime implies that pi is not zero-knowledge, and second, that the compression preserves soundness, in the sense of computational soundness.

The first part is actually relatively simple. It follows the same strategy as Goldreich and Oren, with some slight modifications. Assume, towards a contradiction, that the protocol is zero-knowledge, and consider a particular malicious verifier. This verifier takes an auxiliary input and interprets it as a circuit; once it receives the first message, it applies the circuit to the message and sends the result back as its response, and at the end it outputs the full transcript. By the zero-knowledge property, there exists a simulator that, given only this auxiliary input, outputs a transcript computationally indistinguishable from a real one. We can use this simulator to construct a cheating prover against the argument. It's pretty simple: after receiving the obfuscated circuit, the cheating prover runs the simulator with the obfuscated circuit as the auxiliary input, obtains a transcript, and sends back alpha and gamma. By the zero-knowledge property of the three-round protocol, this is indistinguishable to the verifier; therefore, for an x in the language, the verifier must accept. And if, for an x not in the language, the verifier's behavior changes (which the soundness of pi prime would force), then this procedure distinguishes elements in the language from elements not in the language, meaning the language must be in BPP. Therefore, if pi prime is actually sound, either the language is in BPP or the three-round protocol is not zero-knowledge.

All of this, of course, hinges on the assumption that the compressed argument is actually sound. So is it, and how can we prove that? To see how, we look at how a prover might be able to cheat. The first thing a cheating prover needs to do is choose an alpha, a first message, on which it wants to cheat. What we will do is define a small subset of alphas that we call bad, and show that any cheating prover must, with high probability, choose a bad alpha as the one it's going to cheat on. Then we will prove that the bad alphas actually remain hidden by the obfuscation. The question, of course, is how to define the bad alphas.

Again, in the public-coin case, which is basically the proof of Kalai et al., defining the bad alphas is relatively easy. If the protocol is public coin, we can simply say that any alpha that maps, via the PRF, to a beta for which there exists some gamma the verifier would accept is a bad alpha. It's clear that in this case a cheating prover must use a bad alpha, because for all other alphas there is simply no accepting gamma. In the private-coin case, however, this is more complicated, because for any beta there might always be accepting gammas; it's just that which gammas are accepting might depend on which consistent random tape was actually used to compute that beta. You can imagine a protocol where, in addition to my normal random tape, I simply choose an additional random value, and if you send me that random value back as your gamma, then I accept. That's an accepting gamma, but you of course have no way of finding it, because I never reveal anything about that random value.

So we need a more complicated definition of bad alphas, but this example gives us a hint, because it tells us what the only way to cheat can look like. The security of the IO and the puncturable PRF actually hides which random tape was used: the prover knows its alpha and sees the beta that the obfuscated circuit spits out, but we can show that which random tape was used to compute this beta remains hidden. What that means is that we can define our bad alphas as those that map, via the PRF, to a random tape leading to a beta for which there exists a gamma that the verifier accepts with high probability over all consistent random tapes. Why does this make sense? Because if the prover has no way of knowing which consistent random tape was used to compute beta, then its only chance of cheating is to find an alpha that maps to a beta such that, for a large fraction of the consistent random tapes, some gamma would be accepted. This means that a cheating prover will output such a bad alpha with very high probability.
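In symbols, this private-coin bad set can be sketched roughly as follows; this is my notation, not verbatim from the paper. Here V(alpha; r) denotes the verifier's second message on first message alpha and random tape r, R(alpha, beta) denotes the set of random tapes consistent with the pair (alpha, beta), k is the hard-coded PRF key, and delta is some suitable acceptance threshold:

```latex
\mathsf{Bad}_k \;=\; \Bigl\{\, \alpha \;:\; \exists\, \gamma \text{ such that }
\Pr_{r \leftarrow R(\alpha,\beta)}\bigl[\, V \text{ accepts } (\alpha,\beta,\gamma;\, r) \,\bigr] \ge \delta,
\;\text{ where } \beta = V\bigl(\alpha;\, \mathsf{PRF}_k(\alpha)\bigr) \,\Bigr\}
```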
We could try to turn this directly into a contradiction with the soundness of the original three-round proof. The problem is that at some point we would need to puncture on an unknown point and thereby incur an exponential loss. We could still do that, but the result would be much weaker, because we would then need a three-round proof whose soundness error is exponentially small; only then would the argument go through. So instead, we follow the same path as Kalai et al. and transfer the loss to a separate primitive. This separate primitive is what I mentioned before: input-hiding obfuscation for multi-bit point functions. So what is that? It is an obfuscator that gets as input a point function, described by the point and the output on that point, and it outputs an obfuscated circuit. You want it to be correct, meaning that on the point the circuit returns the specified output, and on all other points it returns bot. And you want it to be secure, meaning that for any polynomial-time algorithm (and it is important that it is a polynomial-time algorithm), given the obfuscated circuit, the probability of actually outputting the point of the point function is exponentially small. This can actually be instantiated, at least in the generic group model, by a construction of Canetti and Dakdouk. So it's not a completely unreasonable assumption, and I would argue that it's probably a more reasonable assumption than IO.

So how do we actually use this to transfer the loss? Just a very short sketch of how this works. Consider a punctured version of the verifier circuit, punctured on some alpha star. If we give this punctured circuit to a cheating prover, then, conditioned on alpha star being bad, the cheating prover will output alpha star with probability slightly better than random chance. This allows us to shift the loss to the input-hiding obfuscation, by constructing another circuit that takes the challenge of an input-hiding obfuscator as a hard-coded input. We can use IO to switch from the punctured circuit to this circuit, because the two are functionally equivalent, and the slight bias over random chance is preserved under that switch. What we get is an adversary that breaks the input-hiding obfuscator. Therefore such a cheating prover cannot exist, and the compressed protocol must be sound. Combining those two things, we get our main result: namely, that three-round zero-knowledge proofs can only exist for languages in BPP, under the stated assumptions. And with that, I'd like to thank you. If there are any questions...

Yeah, we have time for questions. Okay, so I have a question. Just to make sure: there's also a positive result here, in the sense that you show a general round-compression technique. You could take any three-message proof and compress it into a two-message argument in a privacy-preserving way. So that, in particular, say I now want to construct two-message witness-hiding arguments, it would be enough to construct a three-message witness-hiding proof, for example. Right, the zero-knowledge or witness-hiding property does not come into play in the compression at all; we only need the soundness of the original protocol. Right, so this is a general transformation, and privacy-preserving? Yes. Okay, any more questions? Then let's thank Nils again.