So my name is Cody Freitag. Today, I'll be talking about non-uniformly sound certificates and their applications to concurrent zero-knowledge. To motivate this notion of non-uniformly sound certificates, I want to start with the problem of computation delegation. This is the setting where we have some programmer, Bob, who wants to compute some function, but this function is an expensive polynomial-time computation, so he wants to outsource it to the cloud. What he does is ask the cloud to run this machine M on some input x. The cloud computes some output y, which it returns to the programmer, along with a proof pi certifying the correctness of the computation. The idea here is that the proof should be short and efficiently verifiable. Otherwise, the programmer could have just computed the function on his own computer and figured out the output himself. So we want the proof to be verifiable in much less time than it takes to compute the function itself. You can ask the same question for NP as well. Here you have some non-deterministic computation, and we want to know: does there exist some witness that causes the verification circuit for the language to output 1? In the same setting, you want a much more efficient proof that there exists some witness causing the machine to accept. And here the witness can even be as large as the running time of the machine, because each step of the machine could make a different non-deterministic choice. To satisfy the efficiency requirements of this protocol, we can use succinct arguments, which were developed by Kilian and later by Barak and Goldreich in 2002, where a prover talks to a verifier over a number of rounds of interaction to establish whether or not some statement is in a language. We require the three standard properties. First is completeness.
If the statement is in the language, then an efficient prover should be able to convince the verifier. Second, we require a computational notion of soundness: if the statement is not in the language, then no non-uniform efficient attacker should be able to convince the verifier. And lastly, succinctness: the communication in the protocol should be polylogarithmic, so almost independent of the size of the statement and the size of the witness. Kilian, and later Barak and Goldreich in 2002, showed that assuming collision resistance, there exist succinct universal arguments for all of NP. The universality means that there's one protocol that works for any NP language. So what does this look like for the delegation problem? Now the server and the programmer can interact over a number of rounds, and the programmer will be convinced that the statement is correct. The problem with this setting is that if another person comes along who also wants to be convinced of this statement, say Alice asks the programmer Bob what the result is, Bob is not able to just forward the proof over. She would have to ask the server to redo the computation, to redo this interaction with the server. So ideally what we want are succinct non-interactive arguments, where the prover's communication to the verifier is just a single message, which we'll call the proof pi, which should convince the verifier while still satisfying the same notions of completeness, soundness, and succinctness. In this setting, the programmer is able to just send the proof over to Alice and she'll be convinced. So the question we ask in this work is: can these ideal objects, succinct non-interactive arguments, actually exist?
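To make the efficiency requirement concrete, here is a toy Python sketch of the delegation setting. The names and the stand-in function are illustrative, not from the talk; the point is that a "proof" whose verification just recomputes M(x), as below, defeats the purpose, which is exactly why succinct arguments are needed.

```python
# Toy delegation setting: Bob sends (M, x) to the server, which returns
# y plus a proof pi. For the proof to be useful, verifying pi must be
# much cheaper than recomputing M(x). The naive "verification" below
# reruns the whole computation, which is exactly what Bob wanted to avoid.

def server(M, x):
    y = M(x)       # the expensive polynomial-time computation
    pi = None      # a real succinct argument would go here
    return y, pi

def naive_verify(M, x, y, pi):
    # Recomputes M(x) from scratch: correct, but not succinct.
    return M(x) == y

# Hypothetical stand-in for an expensive function.
M = lambda x: sum(i * i for i in range(x))

y, pi = server(M, 10_000)
assert naive_verify(M, 10_000, y, pi)
```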
The problem is that this notion of computational soundness that we want to achieve actually collapses to statistical soundness, where we ask whether any computationally unbounded prover can find a false proof that convinces the verifier of a statement not in the language. This notion of statistical soundness is just the question: does there exist such a false proof? If such a false proof exists, so statistical soundness is broken, then a non-uniform adversary can simply receive this accepting proof for a false statement hard-coded into its advice. And because it can receive this cheating proof, that breaks computational soundness. So the question is: can statistically sound protocols of this form actually exist? Unfortunately, for NP languages the answer is no, assuming at least that NP can't be decided in subexponential time. And even for P, we have similar negative results: given standard derandomization assumptions, this notion is impossible to achieve. One way around this is to introduce a trusted setup, which could be a structured setup with a common reference string or an unstructured setup with a common random string; this notion is usually referred to as SNARGs. Micali, in 1994, showed with his computationally sound (CS) proofs that this notion is actually possible in the random oracle model, assuming just an unstructured common random string. And because this notion is so useful and has applications in so many areas, there are many more practical constructions assuming either structured or unstructured setup. But the problem in all of these scenarios is that the setup needs to be trusted in some way; otherwise it could have been adversarially generated in a way that introduces a trapdoor. So what we really want is some way to get a guarantee without any setup at all.
Recall, we want soundness even against non-uniform attackers, but from what I said before, it seems we must be out of luck, because this is equivalent to statistical soundness, which we think is impossible. Another way around this was considered by Chung, Lin, and Pass in 2013 under the name of P-certificates, which are SNARGs without any setup. They realized that while we can't hope for non-uniform soundness, this can at least be achieved for uniform soundness, which is the same notion except that for a statement not in the language, no uniform polynomial-time attacker can convince the verifier. The idea here is that because such an attacker is a constant-size Turing machine, it doesn't have the ability to hard-code these false proofs even if they exist. And they observe in that work that Micali's CS proofs actually do satisfy this notion in the random oracle model. But there's a reason that non-uniform security has become the de facto standard in our community: it captures a targeted pre-processing phase, any unknown future attack, or arbitrary side information you may have before the protocol starts. One such example is rainbow tables, where you can spend a long time computing a large pre-computed table to attack specific instances of a hash function. In the random oracle model, we can consider a similar notion of non-uniform security, where you allow unbounded pre-processing of the random oracle, as long as the resulting advice you use when performing the attack has bounded size. This model is known as the auxiliary-input random oracle model and was introduced and formalized by Unruh in 2007. So in this work, we ask: in this setting where we want succinct non-interactive arguments without any setup, what's the best possible soundness we could hope for against non-uniform attackers?
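As a concrete illustration of the kind of pre-processing that auxiliary-input security captures, here is a toy Python sketch in the spirit of rainbow tables. Real rainbow tables use hash chains to trade time for space; this sketch just stores a full inversion table over a tiny domain, so everything here is purely illustrative.

```python
import hashlib

# Toy hash precomputation: spend a long offline phase building a table,
# keep a bounded-size digest -> preimage map as non-uniform "advice",
# then invert any hash of the small domain instantly in the online phase.
# (Real rainbow tables compress this with hash chains; omitted here.)

def H(x: bytes) -> str:
    return hashlib.sha256(x).hexdigest()

# Offline / preprocessing phase: tabulate all 2-byte inputs.
domain = [bytes([a, b]) for a in range(256) for b in range(256)]
advice = {H(x): x for x in domain}   # the attacker's precomputed advice

# Online phase: inversion is a single table lookup.
target = H(b"\x17\x2a")
assert advice[target] == b"\x17\x2a"
```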
The first thing to note is that there's a trivial attack, similar to what I described before. If your proofs have length ell, then with S bits of advice you can just hard-code S/ell different accepting proofs for false statements directly into your advice. So what we want is to say that no adversary can do much better: an attacker with some fixed amount S of bits of advice can't find more than poly(S) accepting proofs for false statements. This is very similar to the recent notion of keyless multi-collision resistance, which asks the same kind of question: can you guarantee that you can't find more collisions than the size of your advice allows? A similar soundness notion was also used in the work of Bitansky and Lin in 2018 on one-message zero-knowledge. So just to recap, for this notion of non-uniformly sound certificates, we require completeness and succinctness, and also this best-possible soundness notion against non-uniform attackers. Before moving on, I want to look at this problem through another lens: the question of whether we can compress NP witnesses. Completeness says that valid statements can still be verified even with a very short proof, only polylogarithmic in the size of the witness. And the best-possible soundness notion translates into a question about the computational Kolmogorov complexity of large sets of cheating proofs. In other words, if a large set of cheating proofs could be compressed in a way that could be easily and efficiently decompressed, then a non-uniform adversary could receive this compressed version of a large set of cheating proofs in its advice and efficiently decompress it to break best-possible soundness. So this notion of non-uniformly sound certificates can really be seen in the complexity-theoretic language of whether we can compress NP witnesses in a meaningful way.
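The trivial attack and the counting behind best-possible soundness can be made concrete with a short sketch. Proofs are modeled here as bit strings (one character per bit), and the only point is the arithmetic: S bits of advice pack at most S/ell proofs of length ell.

```python
# Toy illustration of the trivial non-uniform attack: hard-code as many
# accepting proofs for false statements as fit into S bits of advice.
# Best-possible soundness then asks that no attacker with S bits of
# advice finds more than poly(S) such proofs.

def pack_advice(cheating_proofs, S):
    """Pack as many fixed-length proofs as fit into S bits of advice.
    Each proof is a string of '0'/'1' characters, one per bit."""
    advice = []
    used = 0
    for pi in cheating_proofs:
        if used + len(pi) > S:
            break
        advice.append(pi)
        used += len(pi)
    return advice

# Hypothetical cheating proofs of length ell = 128 bits each.
proofs = [format(i, "0128b") for i in range(100)]
advice = pack_advice(proofs, 1024)
assert len(advice) == 1024 // 128   # 8 proofs fit into 1024 bits
```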
So what do we achieve in this work? Well, first we observe that the known candidates for SNARGs without setup, for example Micali's CS proofs, do not achieve best-possible soundness, in the case of Micali's CS proofs in the auxiliary-input random oracle model, where the advice is allowed to depend on the random oracle. Specifically, we show that just a small amount of non-uniform advice can be used to generate an exponential number of false proofs. However, all is not lost, because we show that in the auxiliary-input random oracle model, there actually do exist non-uniformly sound certificates for NP. And we show that these are actually useful: as our main application, we show how to construct constant-round concurrent zero-knowledge from these primitives. So for the next bit, I'll talk about this application to zero knowledge. Let's quickly recall: in zero knowledge, you have two parties interacting, a prover and a verifier, and the prover should be able to convince the verifier that the statement is in the language while revealing nothing else. Even if the verifier is adversarial and trying to extract information from the prover, it should fail. As for the state of affairs, zero knowledge has been well understood, at least from a theoretical perspective, for a long time: just from one-way functions, we've known how to construct constant-round zero-knowledge arguments for all of NP for almost 30 years now. However, the situation is messier in the concurrent setting, where you may have many verifiers interacting with many provers in a distributed, complicated fashion. We want to guarantee that even if all of the verifiers are colluding, trying to extract information from possibly many independent provers, they will still not be able to learn anything extra. And the holy grail problem in this area has been to construct constant-round zero-knowledge protocols with this additional concurrent security.
Up until very recently, not much was known. Then in 2015, there were constructions based on obfuscation, specifically indistinguishability obfuscation. In this work, we show that given non-uniformly sound certificates for P, as well as collision resistance, we can construct constant-round concurrent zero-knowledge arguments for all of NP. This is a different flavor of assumption from the obfuscation-type assumptions used in 2015. In particular, this is a random-oracle-model-type assumption, at least given our current construction of non-uniformly sound certificates. Just to contrast these two assumptions: obfuscation is still being studied as a relatively new primitive, and current candidates are based on assumptions that are not completely well understood, whereas the random oracle model gives a more heuristically sound argument that can perhaps be more practically instantiated. I also want to emphasize the public-coin aspect of our protocol, which wasn't known before under any assumptions, even in the random oracle model. Public-coin protocols, where the verifier has no private state and just sends random messages to the prover, have many benefits: they have applications to leakage resilience, public verifiability, and transformations to other protocols. The last thing I'll say about our protocol is that, at a high level, it follows the 2013 work of Chung, Lin, and Pass, which achieves a uniform notion of soundness because it uses the certificates I talked about before, which were only capable of achieving uniform soundness. We show that with the stronger notion of non-uniformly sound certificates, we can improve this to get the full notion of soundness for a concurrent zero-knowledge protocol.
For the rest of the talk, I want to focus on our construction of non-uniformly sound certificates and give a high-level idea of how it works and what we have to go through to construct them. First, I want to talk about Micali's CS proofs in general and why they don't satisfy best-possible soundness; you can think of Micali's CS proofs as applying a Fiat-Shamir-type transformation to Kilian's succinct argument. Then I'll show how to modify this Fiat-Shamir transformation so that when you apply it to Kilian's protocol, it actually satisfies best-possible soundness. Just to recap, let me quickly go over Kilian's succinct argument. The verifier first sends a description of a hash function. The prover computes a PCP proof for the statement, computes a digest using a Merkle tree with this hash function H, and sends the digest over. The verifier responds with a random challenge. Using this random challenge, the prover opens up the Merkle tree in certain locations, sending the authentication paths over to the verifier, who verifies these openings as well as the PCP proof. To get a certificate system out of this, you can just apply Fiat-Shamir, which is exactly what Micali's CS proofs look like. Here, you first use the random oracle to compute the Merkle tree, and then you also use the random oracle to generate the random challenge yourself. The verifier then checks everything as it would in the interactive case, as well as checking the correctness of the random challenge. So how do we actually cheat in this protocol when you have advice that may depend on the random oracle? For the first step, just as we saw in general, you can hard-code an accepting proof for any invalid statement you want. So let's start with that.
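The Fiat-Shamir step can be sketched in a few lines of Python. The "PCP" below is just a stand-in list of blocks, SHA-256 plays the role of the random oracle, and authentication paths are omitted; this is a sketch of the structure, not a real instantiation.

```python
import hashlib

# Sketch of the Fiat-Shamir step in Micali-style CS proofs: the prover
# Merkle-hashes the PCP string, then derives the verifier's random
# challenge from the digest itself instead of waiting for a message.

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def merkle_root(leaves):
    # Assumes a power-of-two number of leaves for simplicity.
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

pcp = [b"block%d" % i for i in range(8)]   # toy stand-in for a PCP proof
digest = merkle_root(pcp)

# Fiat-Shamir: the challenge is the random oracle applied to the digest.
challenge = int.from_bytes(H(digest), "big") % len(pcp)

# The prover would now open the Merkle tree at the challenged location(s);
# the verifier recomputes the challenge from the digest and checks the
# openings against the PCP verifier.
```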
Next, we can look at the Merkle tree that's used, and we can actually hard-code collisions at each leaf of the Merkle tree that lead to the same digest. Then we can mix and match these collisions to get an exponential number of openings. What this looks like: we have our PCP proof at the leaves of the Merkle tree. If we can open up the first block in several different ways, and then with a few more bits of advice open up the second block in several different ways, we can mix and match them and already get a quadratic number of options. If we continue this throughout the whole PCP proof, we get an exponential number of ways to open up the same digest, which we can then use to cheat on many different statements. So what's a first step to get around this attack? Well, we observe that if the statement is short enough, we can actually just index the random oracle by the statement. Now that the random oracle is indexed by the statement, at a high level this forces the adversary to use some fresh knowledge for each new statement it cheats on. We formalize this idea with a compression argument and show that the resulting protocol actually satisfies the best-possible soundness notion I put forth before. The details of this compression argument are a bit messy, but I want to mention that the reason we're able to do this is that we indexed the random oracle by the statement, so whenever the adversary queries the random oracle, we can use the fact that it must have known the statement at hand. So how do we extend this to deal with long statements? As usual, one idea is to just index the random oracle with a short commitment to the statement. But then we need a way to guarantee that the short commitment can't be opened in too many ways, because if the adversary can cheat on the short commitment and open it in many different ways, then it could find cheating proofs for many different statements.
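The mix-and-match attack is easy to demonstrate once collisions are planted. Below, a contrived leaf hash ignores the last byte of every block, so each leaf can be "opened" in many ways; mixing and matching across n blocks multiplies the options. This toy hash is an assumption made purely to simulate the hard-coded collisions an auxiliary-input adversary would carry in its advice.

```python
import hashlib
from itertools import product

# Demonstration of the mix-and-match attack on a Merkle commitment when
# the adversary knows collisions at each leaf. The contrived leaf hash
# below ignores the last byte of each block, so every block collides
# with 255 others; with n blocks that gives 256**n consistent openings
# of the SAME digest. Here we only mix 2 variants per block.

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def leaf_hash(block: bytes) -> bytes:
    return H(block[:-1])   # contrived: the last byte never matters

def merkle_root(blocks):
    level = [leaf_hash(b) for b in blocks]
    while len(level) > 1:
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

base = [b"block0_x", b"block1_x", b"block2_x", b"block3_x"]
root = merkle_root(base)

# Mix and match: vary the last byte of every block independently; each
# combination is a distinct opening of the same root.
variants = [[b[:-1] + bytes([t]) for t in (0, 1)] for b in base]
openings = set()
for combo in product(*variants):
    assert merkle_root(list(combo)) == root
    openings.add(combo)
assert len(openings) == 2 ** len(base)   # 16 distinct openings here
```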
For example, if we just use a Merkle tree for the short commitment, then we're back to square one, because we have the same mix-and-match attack from before. Our solution is to first apply a suitable encoding and then apply the Merkle tree, where the encoding we use is what's called a list-recoverable code, similar to the work of Komargodski et al. on multi-collision resistance in 2018. So our commitment looks like applying a list-recoverable code to the statement and then a Merkle tree. What this code guarantees is that, viewing the high-level blocks of the encoding as the leaves of the Merkle tree, if the adversary can only open up each of these blocks in a bounded number of ways, say L different ways, then the number of valid encodings is only polynomial in this list size L. Furthermore, we can bound this list size by arguments about multi-collision resistance, because all of the blocks of the encoding will eventually hash down, through the Merkle tree, to a single digest. Now, as I said before, the compression argument required us to actually know what the statement was whenever the adversary queries the random oracle. This introduces a new problem, because now we're only querying the random oracle indexed by the commitment, so we may not be able to extract the statements from these queries. What we do is add a way to tie the commitment to the statement x, and we do this with another Merkle tree, where the hash function in this Merkle tree is the random oracle indexed by the commitment. At a very high level, by forcing a cheating prover to do this, we're able to either extract the statements by tying them together, or, in the compression argument, compress at a different point. That was all quick, but I just want to give a high-level idea of what our proposed transformation looks like.
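The commitment-indexed oracle can be sketched as follows. The `encode` function below is a placeholder standing in for the actual list-recoverable code from the paper; a real instantiation must guarantee that only a small list of codewords is consistent with any bounded set of opened blocks, which this placeholder does not attempt.

```python
import hashlib

# Sketch of indexing the random oracle by a commitment to the statement:
# commit(x) = MerkleRoot(Encode(x)), and every oracle call is prefixed
# by the commitment, tying the adversary's oracle knowledge to x.

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def encode(x: bytes, blocks: int = 8) -> list:
    # PLACEHOLDER for a list-recoverable encoding of the statement x.
    return [H(x, bytes([i])) for i in range(blocks)]

def merkle_root(leaves):
    # Assumes a power-of-two number of leaves for simplicity.
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def commit(x: bytes) -> bytes:
    return merkle_root(encode(x))

def indexed_oracle(x: bytes, query: bytes) -> bytes:
    # Each query is bound to the committed statement, so cheating on a
    # new statement requires fresh oracle knowledge.
    return H(commit(x), query)
```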
First, there's a commitment phase where both the prover and the verifier are able to commit to the statement x: the first part with a Merkle tree applied to a list-recoverable encoding of x, and the second part by applying a Merkle tree to x using this previous commitment. Then in the proof phase, you can think of us as just running Micali's CS proofs, or whatever protocol you want, but replacing the random oracle by indexing it with these two commitments from the commitment phase. At a high level, what we show in this paper is that this modified Fiat-Shamir transformation, when applied to Kilian's computationally sound argument, satisfies best-possible soundness. In general, this isn't going to work for all arguments, but in the paper we actually show that if you restrict to three-round proofs, you can use the statistical properties of the proofs to show that the transformation satisfies best-possible soundness. In fact, it's the statistical properties of the underlying PCP in Micali's CS proofs that allow us to prove this result. To conclude: we gave a new notion of non-uniformly sound certificates, the best possible thing we could achieve against realistic attackers, which can be seen in the language of Kolmogorov complexity. We show that these certificates exist, that we can construct them, and furthermore that they're useful and can be used to construct constant-round concurrent zero-knowledge protocols. So with that, thank you for listening.

[Session chair] Thank you, Cody. Any questions?

[Audience] I was wondering whether you said this already at the end, but why don't these techniques apply, or do you think it's impossible to apply them, to, say, logarithmic-round protocols like IOPs, or other protocols where we use a Fiat-Shamir transform to make them non-interactive?

[Cody] So you're saying...
[Audience] Like, apply the same adapted Fiat-Shamir heuristic to them to get this notion of soundness.

[Cody] So if you had a stronger soundness guarantee per round, there's no reason necessarily why this wouldn't work.

[Audience] Okay, but you haven't looked at it.

[Cody] For arguments in general, where you maybe don't have these statistical properties, we don't have any evidence that this kind of transformation would work. But for proofs it probably would.

[Audience] I see, okay, thank you.

[Session chair] Yes?

[Audience] Thanks, great work. Isn't concurrent zero-knowledge in the random oracle model trivial? Is it an application? Just a NIZK?

[Cody] So in a NIZK you have some... oh.

[Audience] I don't believe so. The rest is great, but just this application doesn't make sense.

[Cody] Yeah, I'm not sure. We can maybe talk about it later offline.

[Session chair] All right, let's thank Cody again.

[Cody] Thank you.