Hello, my name is Thomas Attema and I will be presenting our paper, A Compressed Sigma-Protocol Theory for Lattices. This is joint work with my co-authors, Ronald Cramer and Lisa Kohl. In this work, we consider the following zero-knowledge scenario. A prover is committed to a secret input vector x, and it wishes to convince a verifier that it knows an opening to this commitment, and that this opening satisfies an arbitrary public constraint. Moreover, it wishes to do this without releasing any additional information besides the veracity of the claim; in other words, the prover should be able to do this in a zero-knowledge manner. Our goal is to construct protocols for precisely this task, and our protocols should have low communication complexity. In particular, we aim for polylogarithmic communication complexity. Moreover, we want the underlying hardness assumptions to be lattice-based. Finally, we aim for a commit-and-prove functionality, where the prover first commits to the secret input vector, and at some later point in time decides to prove statements about the committed input vector. The starting point of our work is compressed Sigma-protocol theory, as introduced at Crypto 2020. The high-level paradigm of this theory is to handle linear instances first, and then to linearize non-linearities. So basically, first construct a protocol for proving linear relations, that is, for proving that secret input vectors satisfy a linear relation, and then construct a technique to linearize non-linearities. The starting point of this theory is a natural Sigma protocol for linear constraints; on the right-hand side, we see such a Sigma protocol. The Sigma protocol allows a prover to prove that it knows an opening to a commitment, here denoted by square brackets, such that the opening of this commitment satisfies a linear constraint, captured by a linear form L. Sigma-protocol theory is a well-established and widely used basis for zero-knowledge proofs.
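To make this concrete, here is a small runnable sketch of such a Sigma protocol for a linear constraint. It is my own toy instantiation with a Pedersen vector commitment over a tiny prime-order subgroup (discrete-log based and wildly undersized, not the lattice commitment of our work); all parameter choices are illustrative only.

```python
import random

# Toy parameters (insecure): the subgroup of order q = 101 inside Z_607^*.
p, q = 607, 101  # q divides p - 1 = 606 = 2 * 3 * 101

def subgroup_gen(seed):
    # Derive an order-q subgroup element by raising to the cofactor.
    g = pow(seed, (p - 1) // q, p)
    assert g != 1
    return g

n = 4                                          # dimension of the secret x
gs = [subgroup_gen(s) for s in (2, 3, 5, 7)]   # generators g_1..g_n
h = subgroup_gen(11)                           # generator for the randomness

def commit(v, gamma):
    """Pedersen vector commitment [v] = h^gamma * prod_i g_i^{v_i} mod p."""
    c = pow(h, gamma, p)
    for gi, vi in zip(gs, v):
        c = c * pow(gi, vi, p) % p
    return c

# Statement: commitment C to x, linear form a, claimed value y = <a, x> mod q.
rng = random.Random(0)
x = [rng.randrange(q) for _ in range(n)]
gamma = rng.randrange(q)
a = [rng.randrange(q) for _ in range(n)]
C = commit(x, gamma)
y = sum(ai * xi for ai, xi in zip(a, x)) % q

# Move 1 (prover): commit to a random mask r and reveal t = <a, r>.
r = [rng.randrange(q) for _ in range(n)]
rho = rng.randrange(q)
A = commit(r, rho)
t = sum(ai * ri for ai, ri in zip(a, r)) % q

# Move 2 (verifier): a random nonzero challenge c.
c = rng.randrange(1, q)

# Move 3 (prover): response z = c*x + r and phi = c*gamma + rho (mod q).
z = [(c * xi + ri) % q for xi, ri in zip(x, r)]
phi = (c * gamma + rho) % q

# Verification: both checks follow from the homomorphism of the commitment.
ok = (commit(z, phi) == pow(C, c, p) * A % p
      and sum(ai * zi for ai, zi in zip(a, z)) % q == (c * y + t) % q)
```

Note that the response z has the same length n as the secret vector x, which is exactly the linear communication cost of this natural Sigma protocol.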
In particular, we know how to construct zero-knowledge protocols for general constraints based on Sigma protocols. However, these protocols have a linear communication complexity, and in this Sigma protocol we can already see where this linear communication complexity comes from: the final response z of this Sigma protocol is of exactly the same size as the secret input x. So this Sigma protocol has a communication complexity that is linear in the size of the input x. In compressed Sigma-protocol theory, the communication complexity of this natural Sigma protocol is compressed from linear down to logarithmic by applying an adaptation of the Bulletproofs proof of knowledge. The Bulletproofs protocol is a recursive proof of knowledge for a certain quadratic relation. This proof of knowledge is adapted in compressed Sigma-protocol theory and repurposed as a black-box compression mechanism for the Sigma protocol for proving linear relations. The resulting compressed Sigma protocol still only allows a prover to prove that the committed vector satisfies a linear constraint. To be able to handle non-linearities, that is, to prove that the committed vector satisfies some non-linear constraint, an arithmetic secret-sharing scheme is used to linearize these non-linearities. After the linearization, the compressed Sigma protocol for linear constraints can be used in a black-box manner. Compressed Sigma-protocol theory has been instantiated for a variety of hardness assumptions. We can achieve logarithmic communication by instantiating this theory from the discrete logarithm or strong RSA assumption, but we can also achieve a constant communication complexity by instantiating this theory from the knowledge-of-exponent assumption. Also, in a follow-up work, this theory has been instantiated for pairing-based languages, that is, the bilinear group arithmetic circuit model. So a very natural question that arises is whether this theory also works for lattice-based hardness assumptions.
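The effect of the folding can be illustrated on the linear-form part alone; the commitment part folds analogously via the homomorphism. The following is a simplified Bulletproofs-style sketch that I wrote for illustration, not the exact protocol of the paper: each round the prover sends two cross terms, the claim is halved under a random challenge, and the proof shrinks from n scalars to 2 log n + 1.

```python
import random

q = 101  # toy prime modulus (illustrative only)

def inner(a, z):
    return sum(x * y for x, y in zip(a, z)) % q

def prove(a, z, rng):
    """Prover side: fold z, appending two cross terms per round to the proof."""
    proof = []
    while len(a) > 1:
        h = len(a) // 2
        aL, aR, zL, zR = a[:h], a[h:], z[:h], z[h:]
        proof.append((inner(aL, zR), inner(aR, zL)))   # cross terms (u, v)
        c = rng.randrange(1, q)                        # verifier's challenge
        a = [(x + c * y) % q for x, y in zip(aL, aR)]  # a' = a_L + c*a_R
        z = [(c * x + y) % q for x, y in zip(zL, zR)]  # z' = c*z_L + z_R
    proof.append(z[0])                                 # final folded scalar
    return proof

def verify(a, y, proof, rng):
    """Verifier side: replay challenges, update the target y, check the base."""
    i = 0
    while len(a) > 1:
        u, v = proof[i]; i += 1
        c = rng.randrange(1, q)
        # <a', z'> = c*<a, z> + u + c^2*v, so the claimed value updates to:
        y = (c * y + u + c * c * v) % q
        h = len(a) // 2
        a = [(x + c * yy) % q for x, yy in zip(a[:h], a[h:])]
    return a[0] * proof[i] % q == y

# Shared challenge stream stands in for the interactive verifier's challenges.
rng_p, rng_v = random.Random(42), random.Random(42)
n = 64
data = random.Random(1)
a = [data.randrange(q) for _ in range(n)]
z = [data.randrange(q) for _ in range(n)]
y = inner(a, z)
proof = prove(a, list(z), rng_p)
accepted = verify(a, y, proof, rng_v)
# Communication: 6 folding rounds * 2 scalars + 1 final scalar = 13, vs n = 64.
```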
We answer this natural question in the affirmative by constructing a lattice-based instantiation of compressed Sigma-protocol theory. On a very high level, we do this by replacing, for example, the Pedersen vector commitment scheme by a homomorphic vector commitment scheme that is based on a lattice assumption. In principle, this should give us a circuit zero-knowledge protocol based on lattice assumptions with polylogarithmic communication complexity. However, when going through the motions, we encounter a number of challenges. For example, we have to deal with soundness slack, approximation factors, rejection sampling, and no-abort special honest-verifier zero-knowledge. These challenges are not unique to compressed Sigma-protocol theory; they are also encountered when instantiating standard Sigma protocols from lattice assumptions. The difference, however, is that in contrast to standard Sigma protocols, compressed Sigma protocols have logarithmically many rounds, and these aspects propagate through the rounds. For example, the soundness slack accumulates through the logarithmically many rounds of a compressed Sigma protocol. This requires us to be very careful in the analysis and instantiation of the protocols. Our contribution in this regard is that we develop an abstract framework. This abstract framework captures various design choices and can be instantiated from a variety of lattice assumptions and lattice instantiations. In contrast, many other works are tailored to very specific lattice instantiations. The second challenge that we encounter is the extractor analysis. Lattice instantiations of Sigma protocols and compressed Sigma protocols typically have much smaller challenge sets, and this leads to larger knowledge errors.
Our contribution in this regard is the first tight extractor analysis for multi-round protocols of this specific form. Not only is this extractor analysis applicable to lattice instantiations, but it also leads to better parameters for non-lattice instantiations. For a similar reason, namely because we have these larger knowledge errors, even with our tight extractor analysis we need to repeat the protocol in order to reduce the knowledge error, and we have to do this by using parallel repetition. Our contribution here is a novel parallel repetition theorem for arbitrary proofs of knowledge that shows that parallel repetition indeed reduces the knowledge error. Finally, we have to adapt the linearization strategy. Compressed Sigma-protocol theory shows how to linearize non-linear instances using an arithmetic secret-sharing scheme. When we instantiate this theory from a lattice assumption, we have to define the secret-sharing scheme over a ring instead of a field. This requires some adaptation and, again, a careful analysis. A number of prior works on sublinear lattice-based circuit zero-knowledge protocols have appeared in the literature. The first circuit zero-knowledge protocol from lattice assumptions with a sublinear communication complexity appeared in 2018. However, its communication complexity is not polylogarithmic and therefore does not achieve our goal. Last year, in 2020, a lattice-based instantiation of Bulletproofs was presented. However, this protocol is restricted to proving knowledge of an SIS preimage, it is not zero-knowledge, and it is tailored to a specific lattice instantiation, in particular to power-of-two cyclotomic number fields. We also see a number of related works presented at this conference, Crypto 2021. Bootle et al. present an alternative abstract framework for efficient circuit zero-knowledge protocols based on sum-check arguments.
Given our extractor analysis, their protocols achieve a communication complexity similar to ours. Moreover, Albrecht and Lai present a careful analysis of so-called subtractive sets, resulting in upper and lower bounds for certain lattice-based circuit zero-knowledge protocols. They also present an extractor analysis; however, unlike ours, their extractor analysis is non-tight. For the remainder of this presentation, we will focus on the technical details of our extractor analysis. For the other topics, we refer to our paper. Our extractor analysis is not only applicable to compressed Sigma protocols, but to certain multi-round protocols in general. These protocols, in principle, take the following form. As input to the protocol, we have a public statement x, known to both prover and verifier, and the prover claims to know a secret witness w associated to this statement x. The prover and the verifier interact: the prover sends messages and the verifier sends challenges. At the end of the protocol, so after these 2 mu + 1 rounds, the verifier decides whether to accept or reject the prover's claim of knowing a witness w. In our extractor analysis, we basically aim to construct a knowledge extractor. A knowledge extractor is an algorithm that is given an input statement x and rewindable black-box access to some prover P, and its goal is to compute a witness w for the statement x. We basically say that a protocol is knowledge sound if there exists an extractor with certain properties; for example, this extractor should be efficient. Informally, this means that a prover can only convince the verifier if it knows a witness: if a prover can convince a verifier that it knows a witness, it can also run the extractor and compute the witness.
So the knowledge extractor should have certain properties. Let us now formalize these properties; before we do that, we need to introduce some notation. Epsilon(x) will denote the success probability of the prover on public input x, so this is the success probability of the prover to which the extractor is given rewindable black-box access. Moreover, we let kappa(x) be the knowledge error of the protocol. Intuitively, a protocol has knowledge error kappa(x) if a dishonest prover succeeds in convincing the verifier with probability at most kappa(x). Given this notation, we have the following standard definition of knowledge soundness, which requires a knowledge extractor to have expected runtime poly(|x|)/(epsilon(x) - kappa(x)). However, we also have the following alternative definition of knowledge soundness, and this alternative definition turns out to be equivalent to the previous one. In some scenarios, this alternative definition is more convenient to work with; for this reason, we will actually be using the second definition to prove that our protocols are knowledge sound. This second definition requires an extractor to have expected polynomial runtime. Note that the extractor of the first definition does not necessarily run in expected polynomial time, because epsilon(x) - kappa(x) might be negligible. However, in the second definition, the extractor has success probability (epsilon(x) - kappa(x))/poly(|x|). So basically, there is a trade-off in these two definitions between expected runtime and success probability. In the first definition, the knowledge extractor can have a larger expected runtime, but it always has success probability one; in the second definition, the success probability can be a bit smaller, but the runtime is polynomial in expectation.
To prove that a protocol is knowledge sound, we have to show that there exists a knowledge extractor with the above-mentioned properties. This can be a challenging task. To this end, we also have another notion of soundness that is much easier to handle, namely special soundness. It is typically much easier to prove that a protocol is special sound than to prove that it is knowledge sound. So what is special soundness? Let us consider a three-move protocol where the verifier sends only one challenge, in the second move of the protocol. Such a protocol is called special sound if there exists an efficient algorithm that, on input two accepting transcripts (a, c, z) and (a, c', z'), outputs a witness w for the statement x. In these transcripts, a is the first message from the prover to the verifier, c is the challenge sent from the verifier to the prover, and z is the final response from the prover to the verifier. The special-soundness definition requires these two transcripts to have a coinciding first message a, while the challenges c and c' must be different. It is typically much easier to prove that such an efficient algorithm exists than to prove the existence of a knowledge extractor with the desired properties. However, we also have the very generic result that special soundness implies knowledge soundness with knowledge error 1/N, where N is the size of the challenge set. This basically means that, using this generic result, we only have to prove that a protocol is special sound, because knowledge soundness then follows automatically. So this is a very convenient result that allows us to deal with special soundness rather than with knowledge soundness directly.
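To see why two accepting transcripts with the same first message suffice, consider the textbook example of Schnorr's protocol (a discrete-log toy with insecure parameters, not the lattice setting): subtracting the two verification equations lets the extractor solve for the witness with a single division by c - c'.

```python
# Toy Schnorr protocol in the order-q = 101 subgroup of Z_607^* (insecure).
p, q = 607, 101
g = pow(2, (p - 1) // q, p)   # generator of the order-q subgroup

w = 37                        # prover's secret witness
h = pow(g, w, p)              # public statement h = g^w

# Two accepting transcripts sharing the same first message a = g^r.
r = 55
a = pow(g, r, p)
c1, c2 = 17, 63               # two distinct challenges
z1 = (r + c1 * w) % q         # accepting response: g^z1 == a * h^c1
z2 = (r + c2 * w) % q

assert pow(g, z1, p) == a * pow(h, c1, p) % p
assert pow(g, z2, p) == a * pow(h, c2, p) % p

# Special-soundness extractor: w = (z1 - z2) / (c1 - c2) mod q.
w_extracted = (z1 - z2) * pow(c1 - c2, -1, q) % q
```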
This definition of special soundness, where we require only two accepting transcripts, has a natural generalization to k-special soundness, where the efficient algorithm requires k different transcripts. Also the result that special soundness implies knowledge soundness can be generalized to k-special soundness: more precisely, it is known that k-special soundness implies knowledge soundness with knowledge error (k - 1)/N. So far we have only treated special soundness for three-move protocols. However, this property has a natural generalization to multi-round protocols. The precise definition of this generalization is not important for this talk, but informally, we say that a protocol is (k_1, ..., k_mu)-special sound if the protocol is k_i-special sound with respect to the i-th challenge. A bit more precisely, a protocol satisfies this generalized notion of special soundness if there exists an algorithm that, given a certain structured set of transcripts, extracts a witness. Our result now shows that (k_1, ..., k_mu)-special soundness tightly implies knowledge soundness. Prior works have studied the special soundness of multi-round protocols. For example, an asymptotic analysis has been done, showing that an exponentially large challenge set implies a negligible knowledge error. However, this result does not give a concrete knowledge error. Moreover, it is not applicable to the lattice setting, because in the lattice setting challenge sets are typically not exponentially large. There also exists a concrete analysis of this asymptotic approach; however, the resulting knowledge error is not tight. On a high level, we achieve our result by applying the following techniques. First, we use the alternative but equivalent definition of knowledge soundness. Second, we simplify the extractor for three-move protocols: we let it sample challenges without replacement instead of with replacement.
Because of these two ideas, our three-move extractor can be applied recursively in the multi-round case; this is in contrast to prior works. Let us now describe our knowledge extractor. We consider special sound three-move protocols, so the only thing the extractor has to do is to extract two different accepting transcripts, because the protocol is special sound. To do this, it proceeds as follows. In the first step, the extractor queries the prover on a random challenge c: it sends a random challenge to the prover and sees whether the prover returns an accepting transcript or not. If the prover fails to return an accepting transcript, the extractor aborts. If the prover succeeds, the extractor keeps rewinding, so it keeps sending new challenges c until it has found a second accepting transcript. Every time it rewinds, the extractor fixes the prover's randomness, so the first message of the transcripts stays the same. Moreover, the extractor samples challenges without replacement: once it has tried a challenge, it will never try the same challenge again. It continues in this manner until it has found a second accepting transcript or until it has exhausted all challenges. This is already how our extractor works; it is a very simple algorithm, and we can prove the following two lemmas. The first lemma shows that the expected number of prover queries of this extractor is at most two; this basically shows that the extractor runs efficiently. The second lemma shows that the success probability of this extractor is at least epsilon - 1/N, where N is the size of the challenge set and epsilon is the success probability of the prover. Together, these two lemmas prove that this extractor satisfies the desired properties and that our protocol is knowledge sound, so basically that special soundness implies knowledge soundness with knowledge error 1/N.
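A minimal sketch of this extraction strategy, with the prover modeled as a deterministic accept/reject oracle whose randomness is fixed across rewinds (the modeling and all names are mine, not the paper's formal extractor):

```python
import random

def extract(prover_accepts, challenges, rng):
    """Extractor for a 2-special-sound three-move protocol.

    prover_accepts(c) -> bool models a deterministic prover with fixed
    randomness, so its first message is the same on every rewind.
    Returns two distinct accepting challenges, or None if the extractor
    aborts or exhausts the challenge set.
    """
    c0 = rng.choice(challenges)             # step 1: one random challenge
    if not prover_accepts(c0):
        return None                         # abort on a failing first query
    remaining = [c for c in challenges if c != c0]
    rng.shuffle(remaining)                  # sample WITHOUT replacement
    for c in remaining:                     # rewind until a second success
        if prover_accepts(c):
            return (c0, c)                  # two accepting transcripts found
    return None                             # all challenges exhausted

# Demo: a prover that succeeds on 30 out of N = 100 challenges.
N = 100
challenges = list(range(N))
good = set(range(30))
pair = extract(lambda c: c in good, challenges, random.Random(7))
```

Here the rewinding phase always succeeds once the first query lands in the accepting set (it holds more than one challenge), so this prover is extracted with probability exactly epsilon = 0.3, consistent with the epsilon - 1/N bound.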
Let us now prove the two lemmas describing the properties of our extractor. To do this, we define the random variable A that indicates the prover's randomness. First, we prove that our extractor is efficient. Intuitively, we can see this by considering the following two cases: if the success probability of the prover is large, then the extractor will quickly find two accepting transcripts; but if the success probability of the prover is small, then with high probability the extractor will abort after only one query. So in both cases, the extractor runs efficiently. For the formal proof of this lemma, we first condition on the prover's randomness being equal to a. In this case, the first step of the extractor succeeds with probability epsilon_a, where epsilon_a is this conditional success probability. If it does, the extractor starts a negative hypergeometric experiment, in which it samples challenges without replacement until it has found a second accepting transcript or until it has exhausted all challenges. The expected number of trials of this negative hypergeometric experiment is at most 1/epsilon_a. Together, these two observations show that the expected number of prover queries is at most two, which finishes the proof of this lemma. Let us now show that our extractor succeeds with probability at least epsilon - 1/N, where N is the size of the challenge set. The intuition behind this result is that in the first step the prover succeeds with probability exactly epsilon, and in the second step, this negative hypergeometric experiment, the extractor succeeds if and only if there exists a second accepting challenge, because it keeps trying until it has either found a second accepting transcript or exhausted all challenges. For the formal proof of this lemma, we again condition on the prover's randomness being equal to a.
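The bound of 1/epsilon_a expected trials can be checked exactly. The snippet below computes the exact expected number of draws, sampling without replacement, until the first accepting challenge is found; it matches the standard closed form (N + 1)/(K + 1) for this negative hypergeometric-type experiment, which is indeed at most N/K = 1/epsilon_a. (The closed form is a general fact about this distribution, not something specific to our paper.)

```python
from fractions import Fraction

def expected_draws(N, K):
    """Exact expected number of draws, sampling without replacement from N
    challenges of which K are accepting, until the first accepting one."""
    total = Fraction(0)
    p_no_success_yet = Fraction(1)          # P(first t-1 draws all failed)
    for t in range(1, N - K + 2):           # a success occurs by draw N-K+1
        p_success_now = Fraction(K, N - t + 1)
        total += t * p_no_success_yet * p_success_now
        p_no_success_yet *= Fraction(N - K - (t - 1), N - t + 1)
    return total

N, K = 100, 7
E = expected_draws(N, K)    # exact expectation as a fraction
```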
In this case, we can easily see that the extractor succeeds if the first step is successful and if epsilon_a is larger than 1/N. Namely, if epsilon_a is larger than 1/N, there exists a second challenge for which the prover can produce an accepting transcript. Hence, the success probability of the extractor equals this precise expression, and this sum can be split into two parts: the first part is easily seen to be equal to epsilon, and the second part is easily seen to be at most 1/N. Hence, the success probability of the extractor is at least epsilon - 1/N, which completes the proof of this lemma. Thus far, we have described the extractor and its analysis for three-move special sound protocols. This approach immediately generalizes to k-special sound protocols, so still three-move protocols, but now k-special sound instead of 2-special sound. Moreover, we can apply this extractor recursively in the multi-round scenario. We do have to be a bit careful with the analysis, but it turns out that everything works out, and we obtain the following theorem, showing that (k_1, ..., k_mu)-special soundness implies knowledge soundness with a precisely given knowledge error kappa. Moreover, this expression for the knowledge error is tight: there exists a cheating strategy for (k_1, ..., k_mu)-special sound protocols that succeeds with probability kappa, so we cannot hope to obtain a knowledge error smaller than kappa for this generic class of protocols. This brings me to the conclusion of this presentation and a short summary of our contributions. We developed the first lattice-based circuit zero-knowledge protocol with polylogarithmic communication. Moreover, it supports commit-and-prove functionality, and it does not require a trusted setup. Also, we developed a general and tight extractor analysis for special sound protocols.
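For concreteness, the knowledge error in this theorem takes the form kappa = 1 - prod_i (1 - (k_i - 1)/N_i), where N_i is the size of the i-th challenge set. The helper below (the function names are mine) computes it and compares it against the naive union bound that a non-tight analysis would give:

```python
from fractions import Fraction

def tight_knowledge_error(ks, Ns):
    """kappa = 1 - prod_i (1 - (k_i - 1)/N_i) for a (k_1,...,k_mu)-special
    sound protocol with challenge sets of sizes N_1,...,N_mu."""
    prod = Fraction(1)
    for k, N in zip(ks, Ns):
        prod *= 1 - Fraction(k - 1, N)
    return 1 - prod

def union_bound_error(ks, Ns):
    """The weaker sum bound sum_i (k_i - 1)/N_i, capped at 1."""
    return min(Fraction(1), sum(Fraction(k - 1, N) for k, N in zip(ks, Ns)))

# A single round recovers the classic (k - 1)/N:
single = tight_knowledge_error([3], [128])
# A 6-round example with small, lattice-style challenge sets:
multi = tight_knowledge_error([2] * 6, [256] * 6)
```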
This extractor analysis also has applications and practical benefits for non-lattice instantiations of compressed Sigma protocols and for other protocols such as Bulletproofs. Moreover, we developed a novel parallel repetition theorem for proofs of knowledge. Thanks for your attention; we'd be happy to answer your questions during the live presentation on August 17th.