Hello and welcome to the presentation of the paper Single to Multi-Theorem Transformations for Non-Interactive Statistical Zero-Knowledge. My name is Felix Rohrbach and this is joint work with Marc Fischlin. I will be presenting this paper in a live session on May 13th as part of PKC 2021, so if you have any questions, feel free to ask them there. First, let me briefly remind you about non-interactive zero-knowledge arguments. A non-interactive argument consists of a verifier and a prover, where the prover has to convince the verifier that a statement x is in a specific language L. As the argument is non-interactive, the prover may only send one message to the verifier. Additionally, we give both parties access to a random string, the so-called common random string, which cannot be influenced by either party. If such a common random string exists, we say that we are in the common random string model. We want this protocol to fulfill three properties: completeness, soundness and zero-knowledge. Completeness requires that for a statement x in the language L, the prover generates a proof from x and a witness omega which is then accepted by the verifier with high probability. Soundness, on the other hand, requires that no malicious prover may convince the verifier to accept any statement x not in the language L with considerable probability. Finally, zero-knowledge means that the verifier should not be able to learn anything besides whether x is in the language L or not. This is modeled by the existence of a simulator without access to the witness omega which nevertheless creates a protocol message indistinguishable from the one produced by the prover. Soundness can hold against either a bounded or an unbounded malicious prover, and for zero-knowledge, the indistinguishability between the simulator and the real prover can be computational, statistical or even perfect. 
However, if we want to be able to run arguments for all languages in NP, we can only have either unbounded provers or statistical (or perfect) zero-knowledge. In the first case, we speak of non-interactive zero-knowledge proofs; in the second case, of non-interactive statistical (or perfect) zero-knowledge arguments. Our focus is on the second one. As mentioned previously, non-interactive zero-knowledge arguments require a common random string which must not be influenced by any party. Unfortunately, many constructions of non-interactive zero-knowledge arguments cannot reuse this common random string, or only for a fixed number of arguments. We call these constructions single-theorem. In contrast, a multi-theorem non-interactive zero-knowledge argument can be used for any polynomial number of arguments. For non-interactive zero-knowledge proofs, that is, if we only require computational zero-knowledge, Feige, Lapidot and Shamir showed a transformation which turns any single-theorem non-interactive zero-knowledge proof into a multi-theorem variant, assuming only the existence of a pseudo-random generator. There exists a folklore transformation based on this FLS transformation for non-interactive statistical zero-knowledge arguments, which, however, does not work with common random strings and instead requires a string which is the image of a pseudo-random generator. Such structured strings are also called common reference strings. However, it is arguably a lot harder to generate such structured strings which still cannot be influenced by any party. Therefore, a natural question is whether such a transformation can be given in the common random string model, and under which assumptions. Our contributions in this paper are twofold. First, we analyze, in a more fine-grained way, different soundness properties for non-interactive statistical zero-knowledge arguments. 
Second, we give two transformations from single-theorem to multi-theorem zero-knowledge using common random strings, one assuming the existence of one-way permutations and the other assuming the hardness of the learning-with-errors problem. I will start with soundness. As mentioned in the introduction, soundness refers to the probability that a malicious prover succeeds in convincing the verifier to accept a statement x not in the language L. Commonly, two types of soundness are distinguished, depending on when the malicious prover decides which statement x it wants to use. For non-adaptive soundness, the prover has to decide on the statement x before seeing the common random string. For adaptive soundness, the prover decides on x after seeing the common random string. However, there is another dimension of soundness which often goes unnoticed, namely in which way we measure the success of the malicious prover. Clearly, the prover should not be able to make the verifier accept a statement x not in the language L. However, there are two ways to capture this non-membership requirement. One possibility is to disallow the malicious prover from outputting a statement x in the language. The other possibility is to allow the prover to choose a statement x in the language, but to let it lose the game in this case, independent of whether the verifier accepts or not. Similar to the definitions for IND-CCA security by Bellare et al., we call the first case exclusive soundness, as we exclude all malicious provers that output x in L, and we call the second case penalizing soundness, as we penalize the malicious prover for outputting x in L. Note that in a concurrent work, Arte and Bellare made a similar distinction and incidentally came up with the same names for both variants. The difference between exclusive and penalizing soundness may appear insignificant at first. 
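The distinction between the two soundness flavors can be made concrete with a tiny sketch of the two win conditions. This is purely my own illustrative modelling, not a definition from the paper: each game run is reduced to two booleans, whether the chosen statement is in the language and whether the verifier accepts.

```python
def wins_penalizing(x_in_L, verifier_accepts):
    # Penalizing soundness: choosing a statement x in L loses the game
    # outright, even if the verifier would accept the proof.
    return verifier_accepts and not x_in_L

def wins_exclusive(x_in_L, verifier_accepts):
    # Exclusive soundness: provers that output x in L are excluded from
    # the game altogether, so such a run simply does not count (None),
    # rather than being scored as a loss.
    if x_in_L:
        return None
    return verifier_accepts
```

The subtlety is that a bounded prover may not itself know which branch it is in, so quantifying only over "excluded" provers (exclusive) is a genuinely different requirement from penalizing every prover for landing in the wrong branch.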
Indeed, for non-interactive proofs, that is, when soundness holds against unbounded adversaries, all the soundness properties presented here are equivalent. However, for non-interactive arguments this is, as far as we know, not the case, as a malicious but bounded prover might itself not know whether its statement x is in the language L or not. In total, we end up with five definitions: all combinations of adaptive and non-adaptive as well as exclusive and penalizing, plus a non-uniform variant that only exists for non-adaptive soundness. Adaptive soundness implies non-adaptive soundness, and penalizing soundness always implies exclusive soundness. Therefore, adaptive penalizing soundness is the strongest definition. However, Pass showed in 2016 that adaptive penalizing soundness cannot be reached in a black-box way from hard primitives. Further, we show that for non-uniform provers, all non-adaptive notions are equivalent. This leaves adaptive exclusive soundness as probably the strongest soundness property achievable in a black-box way. As adaptive exclusive soundness implies the slightly weaker notion of adaptive culpable soundness, which was introduced by Groth, Ostrovsky and Sahai in 2012, and adaptive culpable soundness was shown to suffice for many applications, we think that reaching adaptive exclusive soundness is indeed meaningful. As our second contribution, we give two constructions of single-to-multi-theorem transformations for non-interactive statistical zero-knowledge arguments in the common random string model that indeed retain adaptive exclusive soundness. Our first transformation requires the existence of one-way permutations. Further, we can extend this transformation to even retain perfect zero-knowledge; however, we require the simulator to run in expected polynomial time for this. Our second transformation is based on the learning-with-errors assumption. 
This construction fits in nicely with the recent construction of single-theorem statistical zero-knowledge arguments based on plain learning-with-errors by Peikert and Shiehian. I am showing here a comparison of our work to a selection of other multi-theorem constructions and transformations. Our transformations are the first to provide a form of adaptive soundness from standard cryptographic assumptions. Further, together with the recent construction by Libert et al., our transformations are among the few that work with common random strings, as opposed to structured common reference strings, and still achieve statistical zero-knowledge. The general idea for both transformations is a dual version of the transformation based on pseudo-random generators by Feige, Lapidot and Shamir. In their construction, they extend the common random string by an auxiliary string of the length of the output of a pseudo-random generator. Now, instead of just proving that the statement x is in the language L, the prover is supposed to prove that either x is in L or the auxiliary part of the common random string is an output of the pseudo-random generator. For an honestly generated common random string, the probability of the auxiliary part being an output of the pseudo-random generator is small; therefore, soundness still holds. The simulator, however, can generate a common random string whose auxiliary part is indeed in the image of the pseudo-random generator, and use this to convince the verifier without having to show that x is in L. Feige, Lapidot and Shamir showed that they can use witness indistinguishability to generate a transcript indistinguishable from a true execution. This, together with the hardness of the pseudo-random generator, ensures that the distinguisher for zero-knowledge cannot tell whether a transcript stems from the simulator or from a true execution. Further, they show that this holds even if the common random string is reused for any polynomial number of arguments. 
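To get a feeling for why soundness survives this OR-trick, here is a small, purely illustrative Python sketch; the toy PRG, its hash-based construction and its parameters are my own stand-ins, not primitives from the paper. It counts how sparse the image of a length-doubling PRG is in its output space, which bounds the probability that an honestly random auxiliary string accidentally activates the trapdoor branch.

```python
import hashlib

SEED_BITS, OUT_BITS = 8, 16  # toy stretch: 8-bit seeds, 16-bit outputs

def toy_prg(seed):
    # Illustrative "PRG": hash the seed and truncate to OUT_BITS.
    # This is NOT a secure PRG; it only has the right input/output shape.
    digest = hashlib.sha256(seed.to_bytes(1, "big")).digest()
    return int.from_bytes(digest[:2], "big")

image = {toy_prg(s) for s in range(2 ** SEED_BITS)}

# A uniformly random auxiliary string lies in the PRG image with
# probability at most |image| / 2^OUT_BITS <= 2^(SEED_BITS - OUT_BITS).
fraction = len(image) / 2 ** OUT_BITS
print(f"PRG image covers {fraction:.4%} of the output space")
```

With a real PRG that stretches n bits to 2n bits, the same counting argument gives probability at most 2^-n, so an honest common random string almost never makes the trapdoor branch provable.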
Intuitively, this is the case because nothing is revealed about the artificial common random string generated by the simulator. However, this transformation does not work for statistical zero-knowledge, as the output of the pseudo-random generator is not statistically close to uniform randomness, so an unbounded distinguisher would notice that the common random string must have been created by the simulator with overwhelming probability. In our transformations, the prover instead proves that either x is in L or the auxiliary part of the common random string is not pseudo-random, that is, not an output of the pseudo-random generator. For soundness, we let the malicious prover run on a pseudo-random common random string, using that the malicious prover is bounded and therefore unable to distinguish the two cases. I will now focus on our second construction. The lattice-based construction uses a dual-mode commitment scheme based on learning with errors by Gorbunov et al. A commitment scheme lets a party commit to a value without revealing the value itself at first. Later, in the opening phase, the party may reveal to which value it committed. This is captured in two properties: the hiding property, which guarantees that the commitment reveals no information about the committed value, and the binding property, which guarantees that the committing party cannot claim to have committed to some other value afterwards. Both hiding and binding can be defined against bounded or unbounded adversaries. However, it is a well-known fact that for each commitment scheme, only one of these properties can hold against an unbounded adversary: either a commitment scheme is statistically hiding and computationally binding, or it is computationally hiding and perfectly binding. Now, a dual-mode commitment scheme is a commitment scheme that can be both perfectly binding and statistically hiding, but not at the same time. It contains a public key that can be generated in a hiding or a binding mode, making commitments under that key either statistically hiding or perfectly binding. 
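The dual-mode interface can be sketched with a classic DDH-style toy commitment over a tiny group. To be clear about the assumptions: the paper uses an LWE-based scheme by Gorbunov et al., whereas this sketch is a different, well-known construction with deliberately insecure toy parameters, chosen only because it makes both modes fit in a few lines.

```python
# Toy dual-mode commitment in the group Z_23^* (insecure toy parameters).
# Commit(m; r) = (g^r * u^m, h^r * v^m) with public key (u, v).
P = 23                 # small prime; exponents live mod P - 1 = 22
G = 5                  # generator of Z_23^*
X = 3                  # secret relating the generators: H = G^X
H = pow(G, X, P)

def keygen(mode, a=7, b=12):
    # Hiding mode: (u, v) = (g^a, h^a) is a DDH tuple; commitments then
    # depend only on r + a*m, so they are perfectly value-independent.
    # Binding mode: (u, v) = (g^a, h^b) with a != b; the committed value
    # is then uniquely determined by the commitment.
    if mode == "hiding":
        return pow(G, a, P), pow(H, a, P), a
    return pow(G, a, P), pow(H, b, P), None

def commit(pk, m, r):
    u, v, _ = pk
    return (pow(G, r, P) * pow(u, m, P) % P,
            pow(H, r, P) * pow(v, m, P) % P)
```

In the hiding mode, a commitment to 0 with randomness t equals a commitment to 1 with randomness t - a, so the simulator's trick of preparing a commitment that is "a commitment to one" costs nothing statistically; in the binding mode, no two different messages share a commitment. Distinguishing the two key modes in this toy corresponds to DDH, while in the scheme used in the paper it corresponds to learning with errors.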
Further, a public key created in the hiding mode is computationally indistinguishable from a public key created in the binding mode. I will now explain how we use this dual-mode commitment scheme to build a single-to-multi-theorem transformation for non-interactive statistical zero-knowledge arguments. For the auxiliary string in the common random string, we add enough random bits for a public key pk of the dual-mode commitment scheme as well as for a commitment c. By the design of our dual-mode commitment scheme, a random public key is a public key for the hiding mode with overwhelming probability. Further, we modify the language such that either the statement x must be in the language L or the commitment in the common random string must be a commitment to one. Now, a random string is not a commitment to one with high probability. However, similar to the construction by Feige, Lapidot and Shamir, the simulator can set the commitment to be a commitment to one, using the statistically hiding property of the dual-mode commitment scheme. Note that here we need the public key for the hiding mode of the dual-mode commitment to be statistically close to uniform randomness, because then even an unbounded distinguisher cannot distinguish between the random public key in the true common random string and the public key generated by the simulator. Because the scheme we transform is single-theorem statistical zero-knowledge, it is already multi-theorem statistically witness indistinguishable. This, together with the fact that the common random string generated by the simulator is statistically close to uniform randomness, gives us that the transformed scheme is indeed multi-theorem statistical zero-knowledge. Let me now show you why this construction is adaptively exclusively sound if the underlying single-theorem construction is already adaptively exclusively sound. To prove soundness, we change the common random string given to the malicious prover. 
First, we regenerate the public key pk for the dual-mode commitment scheme in hiding mode and compute a commitment to zero under the new public key. As both are statistically close to uniform randomness, the malicious prover has only a negligible advantage in detecting the new common random string. Next, we replace the public key of the commitment scheme by a binding-mode public key and recompute the commitment. If this changed the success probability by more than a negligible amount, we would have a distinguisher against the public-key modes of the dual-mode commitment scheme, which in turn would break learning with errors. Therefore, the malicious prover loses only a negligible amount of success probability when we switch to the modified common random string. However, due to the perfect binding of the dual-mode commitment scheme, the or-branch of our modified language is now always false, and the language for this specific common random string is therefore identical to the original language. Hence, a successful malicious prover would also break the underlying scheme, which we assume to be sound. All that is now missing is completeness, which, however, is not influenced at all by our construction. Therefore, we have now shown that our transformed scheme is indeed a multi-theorem non-interactive statistical zero-knowledge argument. To conclude, we have analyzed soundness notions for non-interactive arguments and argued that adaptive exclusive soundness is probably the most promising soundness variant to achieve. Further, we have given two single-to-multi-theorem transformations for statistical zero-knowledge that work in the common random string model and retain adaptive exclusive soundness. Thank you very much for your attention. You can find our full paper on ePrint, and if you have any questions, join our session on May 13th or just write us an email.