The next talk in this session is Linear-Time Zero-Knowledge Proofs for Arithmetic Circuit Satisfiability, by Jonathan Bootle, Andrea Cerulli, Essam Ghadafi, Jens Groth, Mohammad Hajiabadi, and Sune Jakobsen.

Thanks for the introduction. So, zero-knowledge proofs, as we've seen, are two-party protocols where a prover tries to convince a verifier that some statement is true, and the verifier learns nothing at all except for the truth of the statement. Our zero-knowledge protocols have the three important properties that we've already introduced: completeness, soundness, and zero-knowledge. Now, our protocols are also going to have three more properties. They'll be proofs of knowledge, meaning that the prover actually has to know a secret witness in order to convince the verifier to accept. They'll be interactive protocols, so the prover and verifier will exchange several messages. And they'll be public-coin protocols, meaning that all of the verifier's messages are chosen uniformly at random from some set.

You can measure the efficiency of zero-knowledge protocols relative to the size of the statement that the prover wants to prove, and you can measure the prover's computation, the verifier's computation, the amount of interaction between the prover and the verifier, the size of the messages they have to communicate, and the cost of setting up the protocol. But in previous work, there aren't any examples of zero-knowledge protocols with only constant computational overhead for the prover, where the cost of producing a proof is just directly proportional to the cost of checking the statement itself. So that's what we provide in this work: we give such zero-knowledge protocols for arithmetic circuits.

An arithmetic circuit is a circuit made up of gates labelled with addition or multiplication. Every gate has two input wires and one output wire, and all the wires take values in some field. To compute the circuit, you just take the two values on the input wires of a gate, apply the operation of the gate, and you get the value for the output wire of that gate. The statement for a zero-knowledge proof is a description of some arithmetic circuit together with a collection of outputs for the arithmetic circuit. Given this statement, it's an NP-complete problem to decide whether there exist valid inputs which satisfy the circuit and give those outputs. So the prover's witness is going to be a set of valid input wire values for the arithmetic circuit.

Skipping ahead to our results: for a large finite field F and an arithmetic circuit with n gates, we give zero-knowledge arguments and proofs with constant computational overhead for the prover. We've got O(n) field multiplications for the prover, we've got sublinear verification costs, that is, o(n) field multiplications for the verifier, and we've got O(log log n) rounds of interaction between the prover and the verifier. For our zero-knowledge arguments, we actually get a sublinear proof size, that's O(√n) field elements, and security relies only on collision-resistant hash functions computable in linear time. That's quite impressive even without considering the zero-knowledge requirement: these are the first interactive arguments with constant computational overhead for the prover and a succinct, or sublinear, proof size. When considering zero-knowledge proofs with statistical soundness, we get a linear communication cost of O(n) field elements, and then security relies on linear-time computable one-way functions.
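As a rough illustration of the arithmetic-circuit statements and witnesses just described, here is a minimal Python sketch. The gate encoding, the choice of prime field, and all function names are illustrative assumptions, not notation from the talk or the paper.

```python
# Illustration only: evaluating an arithmetic circuit over a prime field and
# checking the NP statement "these inputs produce the claimed outputs".
P = 2**61 - 1  # a large prime field, as the protocols assume

def evaluate(circuit, inputs):
    """circuit: list of gates (op, l, r). The first len(inputs) wires are the
    circuit inputs; each gate appends one new output wire."""
    wires = [x % P for x in inputs]
    for op, l, r in circuit:
        a, b = wires[l], wires[r]
        wires.append((a + b) % P if op == 'add' else (a * b) % P)
    return wires

def satisfies(circuit, inputs, claimed_outputs, output_wires):
    """The statement: do these witness inputs make the named output wires
    take the claimed values?"""
    wires = evaluate(circuit, inputs)
    return all(wires[w] == v % P for w, v in zip(output_wires, claimed_outputs))

# Toy example: prove knowledge of x, y with (x + y) * x = 21 without revealing x, y.
circuit = [('add', 0, 1), ('mul', 2, 0)]   # wire 2 = x + y, wire 3 = (x + y) * x
print(satisfies(circuit, inputs=[3, 4], claimed_outputs=[21], output_wires=[3]))  # True
```

A zero-knowledge proof for this statement convinces the verifier that such inputs exist without revealing them.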
We construct our protocols by starting with arithmetic circuits, converting these into systems of matrix equations, which we then convert into collections of polynomials. Next, we give a proof system in an idealized model called the ideal linear commitment (ILC) model. Our next step is to construct efficient, linear-time computable commitments, and finally we use our commitment scheme to convert our ideal protocol into real zero-knowledge proofs and arguments.

The prover begins by arranging all of the wire values for the arithmetic circuit into six matrices, depending on whether the wire value is a left input, a right input, or an output of a particular gate, and whether the gate is an addition gate or a multiplication gate. In order to verify that all the multiplication gates in the circuit were computed correctly, we have to check that the entry-wise product of the first two matrices in the top row is equal to the third matrix, and in order to check that all of the addition gates in the circuit were computed correctly, we have to verify that the sum of the two matrices in the bottom row is equal to the third matrix. Now, there are also some output wires from gates which feed into the inputs of other gates, and some wire values which are duplicated, so we also have to check that certain values in the matrices are equal. We do this by showing that if we swap certain pairs of values in the matrices, then the matrices we end up with are exactly the same, and this reduces to doing computations with a public permutation matrix. That's a small example of what we would do for a small circuit. For larger circuits, we just use larger matrices, and in order to get optimal communication efficiency in our protocol, we choose matrices of dimensions roughly √n by √n, where n is the number of gates in the arithmetic circuit.
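To make the three checks concrete, here is a toy sketch of what the verifier wants to hold, written naively in the clear; in the actual protocol these checks are of course carried out on committed, encoded values rather than on the plain matrices. The matrix names and the small field are illustrative assumptions only.

```python
# Illustration only: the product, sum, and wiring checks on the wire-value matrices.
import numpy as np

P = 101  # toy prime field

def mul_gates_ok(A, B, C):
    # multiplication gates: entry-wise (Hadamard) product of left and right inputs
    return np.array_equal((A * B) % P, C % P)

def add_gates_ok(D, E, F):
    # addition gates: entry-wise sum of left and right inputs
    return np.array_equal((D + E) % P, F % P)

def wiring_ok(values, perm):
    # wiring/duplication check: applying a public permutation that swaps paired
    # positions must leave the flattened matrix of wire values unchanged
    v = values.flatten() % P
    return np.array_equal(v, v[perm])

# toy example: one multiplication gate 3 * 4 = 12, and a wire duplicated in two slots
A, B, C = np.array([[3]]), np.array([[4]]), np.array([[12]])
print(mul_gates_ok(A, B, C))                       # True
vals = np.array([[5, 7, 5, 9]])
print(wiring_ok(vals, np.array([2, 1, 0, 3])))     # True: positions 0 and 2 hold the same wire
```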
Previous protocols work by having the prover commit to collections of row vectors using some kind of homomorphic commitment scheme. The verifier then sends a random challenge x to the prover, the prover opens some linear combination of these committed row vectors, and the verifier checks that these openings are correct. If you have a homomorphic commitment scheme, as in previous works, this is very easy: you can use the homomorphic property to compute a commitment to the linear combination, against which the opening can be checked. The verifier uses these linear combinations to compute various polynomials with circuit satisfiability embedded into the coefficients, and that's how the verifier checks that the circuit is satisfiable. We're going to abstract away some of the properties of this kind of protocol using a model called the ideal linear commitment model. In this model, we provide the prover and verifier with an additional functionality, the ILC, which allows the prover to commit to row vectors and the verifier to query linear combinations of these row vectors in a trusted manner.

There are some parts of the protocol, for example the computations with this public permutation matrix, where the verifier would have to compute some very complicated expressions in publicly known values, and if we asked the verifier to do this by themselves, as in previous works, this would lead to linear or even super-linear computational costs for the verifier. So instead of doing this, we outsource as much of the work as possible to the prover. We ask the prover to commit to various public matrices, and then the verifier simply requests linear combinations of the rows of these matrices in order to get the expressions that they want, rather than computing them by themselves. And this leads to sub-linear verification times.

In order to construct our commitment schemes, our first ingredient is a linear error-correcting code. To achieve the efficiency results that we want, with constant computational overhead, we need a linear error-correcting code which is encodable in linear time, and to get good soundness for arguments, we need an error-correcting code with linear minimum distance. One example of such a code was given by Druk and Ishai, which satisfies all of our requirements. Actually, if we were to use a different linear error-correcting code which wasn't linear-time encodable, it would still work; we would still get a secure construction, we just wouldn't get this constant computational overhead for the prover. We don't use the codes exactly as given by Druk and Ishai: we actually add some randomness and produce a randomised encoding scheme in order to get the zero-knowledge property. We will also need a collision-resistant hash function computable in linear time if we want to get hiding commitments, and in this case we use the construction given by Applebaum et al. On the other hand, if we want to get perfectly binding commitments, then we use a construction based on linear-time computable one-way functions given by Ishai et al. We can achieve both flavours of commitments.

In order to actually commit to the wire values in the circuit, the prover starts off with the matrix of wire values that we saw before, applies the error-correcting code to every row of the matrix separately, and then applies one of the commitment schemes that we just saw to every column of the new encoded matrix. So that's one commitment for every column in the matrix on the right-hand side. Later, when the prover wants to open these commitments to different linear combinations, the prover computes the linear combination and sends it to the verifier. The verifier then checks this linear combination by applying the error-correcting code to the value sent by the prover, and then using openings of the column commitments sent earlier to perform spot checks on this linear combination. If our error-correcting code has a large enough minimum distance, then the verifier is very likely to catch a cheating prover. Even when we compile our ideal protocol into a real protocol using this method, we still have sublinear verification costs. In fact, the linear costs all sit with the prover, in computing these linear combinations, and the verifier's costs are encoding and performing spot checks, which are still sublinear. So we can use this technique with the hiding commitments of Applebaum et al. to get zero-knowledge arguments with perfect completeness, computational soundness, and statistical special honest-verifier zero-knowledge, or we can use the perfectly binding commitments of Ishai et al. to get zero-knowledge proofs with perfect completeness, statistical soundness, and computational special honest-verifier zero-knowledge.
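Here is a small, self-contained sketch of those commitment mechanics: encode each row, hash each column, then spot-check a claimed linear combination. A random linear code over a toy field stands in for the linear-time Druk-Ishai code, and plain SHA-256 hashes stand in for the linear-time hash-based commitments, so this is only the shape of the construction, not the paper's concrete instantiation.

```python
# Illustration only: "encode every row, commit to every column, spot-check openings".
import hashlib
import numpy as np

rng = np.random.default_rng(0)
P = 101                               # toy field
K, M = 4, 12                          # message length and codeword length of the toy code
G = rng.integers(0, P, size=(K, M))   # generator matrix of a random (toy) linear code

def encode(row):
    return (row @ G) % P

def commit(rows):
    """Encode every row, then hash every column of the encoded matrix."""
    E = np.array([encode(r) for r in rows]) % P
    col_hashes = [hashlib.sha256(E[:, j].tobytes()).hexdigest() for j in range(M)]
    return E, col_hashes

def open_linear_combination(rows, challenge):
    """Prover: the claimed linear combination of the (unencoded) rows."""
    return (challenge @ np.array(rows)) % P

def spot_check(col_hashes, opened_cols, challenge, claimed, checks):
    """Verifier: re-encode the claimed combination, then test it at a few
    random columns against the columns opened from the commitment."""
    w = encode(claimed)
    for j in checks:
        col = opened_cols[j]
        if hashlib.sha256(col.tobytes()).hexdigest() != col_hashes[j]:
            return False
        if (challenge @ col) % P != w[j]:
            return False
    return True

rows = [rng.integers(0, P, size=K) for _ in range(3)]
E, col_hashes = commit(rows)
x = rng.integers(0, P, size=3)                 # verifier's random challenge
claimed = open_linear_combination(rows, x)
checks = rng.choice(M, size=4, replace=False)  # random spot-check positions
opened = {j: E[:, j] for j in checks}
print(spot_check(col_hashes, opened, x, claimed, checks))   # True for an honest prover
```

The point is linearity: encoding commutes with taking linear combinations of rows, so the verifier can check the claimed combination at a few random columns without ever seeing the full matrix.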
Here's how the protocol actually looks between the prover and the verifier. The prover starts off by committing to a matrix of all of these wire values; this is roughly √n row vectors, and they send all of these commitments to the verifier. The verifier sends back a random challenge, and the prover uses this random challenge to compress the row vectors into a new collection of √n / 2 row vectors, using techniques similar to those in previous work, like Groth 2009. The prover and verifier then repeat this process for log log n rounds, with log log n random challenges, and the prover uses the same compression technique to bring the number of row vectors down from √n to √n / log n. The reason for this compression is that if we tried to give arguments directly with √n row vectors, we would get super-linear computational costs for the prover, whereas if we apply already-known techniques to √n / log n vectors, then we can get linear computational costs for the prover, because we're using fewer vectors. At this point, after a lot of compression, the prover and verifier engage in arguments similar to those in Groth 2009 in order to verify that the committed values are part of matrices which satisfy the product condition, the addition condition, and the correct permutation condition too. The prover sends a constant number of linear combinations to the verifier to open all of these committed values. Finally, the verifier randomly selects a set of indices which tell the prover which columns they should open; the prover opens the commitments corresponding to all of the columns specified in this random set I and sends them to the verifier; and finally the verifier uses all of these openings to do the spot checks on the columns of all of the codewords.
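To see why the compression step gives the round count quoted above, here is a toy sketch: halving the number of committed rows once per round means that after about log log n rounds, roughly √n / log n rows remain. The folding rule used here (new row = even row + challenge times odd row) is a generic stand-in chosen for illustration, not the paper's exact compression step.

```python
# Illustration only: the shape of the round structure -- start with about sqrt(n)
# committed rows and halve their number once per round.
import math
import numpy as np

P = 101

def fold_rows(rows, x):
    """Combine consecutive pairs of rows into one using the challenge x."""
    it = iter(rows)
    folded = []
    for even in it:
        odd = next(it, None)
        folded.append(even if odd is None else (even + x * odd) % P)
    return folded

n = 2**16                                       # number of gates (toy value)
rows = [np.zeros(8, dtype=np.int64) for _ in range(math.isqrt(n))]
rounds = math.ceil(math.log2(math.log2(n)))     # ~ log log n rounds
for _ in range(rounds):
    x = np.random.default_rng().integers(1, P)  # verifier's public-coin challenge
    rows = fold_rows(rows, x)

print(len(rows), "rows remain; target is about",
      int(math.isqrt(n) / math.log2(n)))        # 256 rows fold down to 16 = sqrt(n)/log n
```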
Here's a comparison of our zero-knowledge arguments relative to previous zero-knowledge arguments. At the top, we have some classic work by Cramer and Damgård, which is based on the discrete logarithm assumption. Since this protocol relies on Pedersen commitments and exponentiations in finite groups, straight away we have a computational overhead of a security parameter in the prover's computation. The same applies to later discrete-logarithm-based works like Groth 2009, and in fact SNARKs also use exponentiations in finite groups, so they have the same computational overhead of the security parameter. Further down, we've got PCP-based constructions of Ben-Sasson et al. and some concurrent work appearing at CCS this year called Ligero. These also rely on collision-resistant hash functions, but importantly they both fail to achieve constant computational overhead for the prover, and neither of them actually achieves sub-linear verification costs like our work does.

So to sum up, we have the first zero-knowledge circuit-satisfiability arguments with constant computational overhead for the prover and sub-linear verification time. Our arguments have sub-linear communication costs, and security is based solely on either collision-resistant hash functions, for our arguments, or one-way functions, for our zero-knowledge proofs. Thanks very much.

Time for some questions or comments. Thanks for the great talk. Just a clarification about the sub-linear part in the verification: what is the trade-off? The way you kind of left it open, if you set it to square root, do you get square-root communication? If you set it to a third root, do you get something larger in communication? Is it a trade-off like that? I'm sorry, could you repeat the question? Just by saying sub-linear verification, can you specify exactly how tunable that is? Is it a trade-off between setting it to something and paying for it somewhere else? Oh, okay. The sub-linear verification cost is not tunable. So, let's go back. The sub-linear verification cost comes from the verifier actually applying the error-correcting code to many different vectors, and since the error-correcting code has linear-time encoding, those costs are fixed by the protocol, so the verification cost isn't really tunable.

Can I ask a question? You mentioned the concurrent work Ligero. Could you explain the comparison with it? Okay, so Ligero was concurrent work, which was presented at CCS this year. Ligero actually uses some very similar techniques, using error-correcting codes and collision-resistant hash functions, but there are two big differences between Ligero and our work. Firstly, they don't achieve this constant computational overhead. That's because the error-correcting codes that they use are Reed-Solomon codes, which are more complicated to encode. So you could ask the question: why not use these linear-time encodable codes in their work, could you not get the same results as us? Well, the other important difference with the Ligero protocol is that they use the multiplicative property of Reed-Solomon codes. We don't use any special property like that of our error-correcting codes, so as a result our techniques are slightly more general. We could, in fact, use Reed-Solomon codes in our construction to get the same complexity as Ligero, but since they rely on this special property, they couldn't simply instantiate their protocol with our error-correcting codes to get the same results as us. Thank you.

Let me ask one more. Can we apply Fiat-Shamir to obtain a non-interactive version of your proof? Yes, you can apply the Fiat-Shamir transformation and get a non-interactive protocol. Thank you.

Any other questions or comments? Let me ask one more. What kind of assumptions do you need for the linear-time collision-resistant hash function or the one-way function? Let's see. I forget the assumption that the linear-time one-way function is based on, but as for the linear-time collision-resistant hash function, that's based on, if I'm not mistaken, an assumption that looks like a lattice assumption, or something close to an assumption from coding theory. Sorry. Yeah, it looks like an assumption that comes from coding theory. It's not like LWE or LPN? No, it's not LWE or LPN, it's a different assumption. If there are no more questions, then let's thank the speaker and all the speakers in this session.