I'm Patrick Towa from ETH Zurich, and this presentation is on Succinct Diophantine-Satisfiability Arguments. It is based on joint work with Damien Vergnaud. A Diophantine equation is a multivariate polynomial equation with integer coefficients, for which the solutions are also sought in Z. Matiyasevich proved in 1970 that the problem of deciding whether any given polynomial equation has a solution is undecidable, thereby giving a negative answer to Hilbert's tenth problem. But it may still be possible to prove, or argue, knowledge of a solution if one is known to a party. Diophantine equations are relevant in cryptography and in computer science in general, and several problems, especially several NP-complete problems, can be encoded as polynomial equations. These include, for instance, circuit satisfiability, 3-SAT, the graph-colouring problem, the Hamiltonian-cycle problem, and the integer linear-programming problem. More specific to cryptography, among other problems, the problems of proving knowledge of an RSA signature or of an ECDSA signature can also be encoded as Diophantine equations. Even the problem of proving that two lists of committed values are permutations of one another can easily be represented by such an equation, and this problem has applications to voting schemes and mix-nets, for example. But circuit-satisfiability arguments already exist and are given in groups of prime order, so one may wonder why it would be useful to additionally have arguments for Diophantine satisfiability. The issue that may occur with arguments in groups of prime order is that, for certain problems, there may a priori be no upper bound on the size of the witness, as is the case, for example, for the integer linear-programming problem. It means that if the parameters are generated before the problem instance is known and the group order is too small, then one may not be able to use the arguments, as the witness will be reduced modulo the group order.
Compiling a problem into a circuit-satisfiability instance can also incur a significant overhead for certain problems, and it may actually be difficult to even write down an appropriate circuit for the problem. On the other hand, most problems can naturally be written as polynomial equations. So if one could directly argue satisfiability of such equations, one would not have to worry about the compilation at all and could instead directly use the argument. Giving an argument for Diophantine satisfiability is the problem that this paper tries to solve. To argue over the integers instead of residue classes modulo a prime, the solution is often to use hidden-order groups. These are, for instance, Z_N^* for an RSA modulus N, and ideal class groups. Some assumptions over these groups were formulated by Damgård and Fujisaki; they are essentially a generalization of the strong-RSA assumption, together with the assumption that it is difficult to compute elements of small order, except for elements of order 2, since minus 1, for example, is an element of order 2 but is also easy to compute. There are two other assumptions that I did not mention, but they are less critical for the understanding of this presentation. As arguing over the integers may require being able to commit to them, the first step is to construct an integer commitment scheme. Damgård and Fujisaki already proposed one which is similar to the Pedersen scheme, but which has crucial differences. The first one is that it is over a group of hidden order, which may not even be cyclic, and only an upper bound on the group order is known. In the RSA-group case, an easy upper bound is just the RSA modulus N itself. Then, to generate parameters, one samples a group element h, chooses an exponent alpha from a set of integers of size at least 2^lambda times the group-order bound, so that h^alpha is statistically close to uniform in the subgroup generated by h, and then computes g as h^alpha.
It is paramount to have g in the subgroup generated by h to guarantee that the scheme is hiding; without it, hiding cannot be ensured. Of course, this problem does not occur in the case of prime-order groups, as they are cyclic, but that is not the case for hidden-order groups. Now, to commit to an integer x instead of a residue class modulo p, one first chooses randomness r from a set of integers of size at least 2^lambda times the group-order bound. This is to make sure that the statistical distance between any two commitments is at most 2^(-lambda). The rest of the computation is just as for the Pedersen scheme. To open a commitment, it suffices to check that c^2 is equal to (g^x h^r)^2. The squaring is simply an artifact to later allow for efficient arguments of knowledge of openings. The scheme would still be binding without the squaring, but it would then not be possible to efficiently argue knowledge of openings under the assumptions on the group. The main underlying reason is that elements of order 2 may be easily computable, and therefore one must relax the opening equation. As mentioned before, for the scheme to be hiding, it is actually crucial that g be in the subgroup generated by h. It means that if the party which computes g is not trusted, then it should also output a proof that g is indeed in the subgroup generated by h, as there is no efficient way to test that directly. Damgård and Fujisaki gave a proof of knowledge of the discrete logarithm of g in the subgroup generated by h; it is simply an adaptation of Schnorr's protocol with {0,1} as challenge space. The first problem with this proof is that it must be repeated log^2(lambda) times to reach a soundness error of lambda^(-log lambda). But there is actually no reason for the parameters to be frequently refreshed.
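To make the scheme concrete, here is a minimal Python sketch of the setup, commitment and relaxed opening just described. The toy RSA modulus with known, far-too-small factors, the tiny security parameter and the simplified ranges are all illustrative stand-ins, not the paper's actual parameters.

```python
import secrets

# Toy (insecure) hidden-order group Z_N^*: in practice N would be a large
# RSA modulus of unknown factorization, so the group order stays hidden.
N = 1009 * 1013   # hypothetical small primes, for illustration only
LAMBDA = 16       # toy security parameter
ORDER_BOUND = N   # an easy upper bound on the group order is N itself

def setup():
    """Sample h, then g = h^alpha with alpha drawn from a range of size
    2^lambda * ORDER_BOUND, so that g is statistically close to uniform
    in the subgroup generated by h."""
    h = secrets.randbelow(N - 2) + 2
    alpha = secrets.randbelow((1 << LAMBDA) * ORDER_BOUND)
    g = pow(h, alpha, N)
    return g, h

def commit(g, h, x):
    """Commit to an arbitrary integer x; the randomness range again has
    size 2^lambda * ORDER_BOUND so commitments are statistically hiding."""
    r = secrets.randbelow((1 << LAMBDA) * ORDER_BOUND)
    c = (pow(g, x, N) * pow(h, r, N)) % N
    return c, r

def open_check(g, h, c, x, r):
    """Relaxed opening: check c^2 = (g^x h^r)^2, which tolerates an
    easily computable factor of order 2 such as -1."""
    lhs = pow(c, 2, N)
    rhs = pow((pow(g, x, N) * pow(h, r, N)) % N, 2, N)
    return lhs == rhs
```

Note that the committed value x is never reduced: it can be any integer, which is exactly what makes the scheme useful for arguing over Z.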
The most serious problem is that the parameters are large because of the proof, especially if one were to commit to vectors of integers instead of a single one, as it would then be necessary to do the proof for each of the bases. Here comes our first contribution: a new integer commitment scheme. One first computes g as before, but now argues that g^2 is in the subgroup generated by h^2 instead of proving that g is in the subgroup generated by h. The benefit is that one can use a much larger challenge space, achieve the same soundness error with a single protocol run, and have much smaller proofs. The complexity is here measured in bits and not in terms of group elements or number of integers, as the integers in the proof could be arbitrarily large since there is no modular reduction. The difference with the Damgård–Fujisaki scheme is even more pronounced when there are several integers to commit to, say n, as the argument is now of size O(b_G + log n) instead of Omega(n * b_G * log^2 lambda), where b_G denotes the bit length of group elements. The technique used will be shown later. Since the argument in the parameters only guarantees that g^2 is in the subgroup generated by h^2, the computation of the commitment must now be squared, and the opening equation must be relaxed to a higher power, to again later permit efficient arguments of knowledge of openings. To understand why the parameters of the new scheme are much smaller than for Damgård–Fujisaki, consider the problem of arguing knowledge of a discrete logarithm in the subgroup generated by an element h. The general outline of the protocol is the same as Schnorr's protocol, but the challenge space and the range of the randomness are not yet specified. Damgård and Fujisaki used {0,1} as challenge space, and the reason is that in the extractability proof, when one gets such an equation for distinct challenges c1 and c2, c1 minus c2 is either minus 1 or 1.
So one can extract an integer alpha such that g is equal to h^alpha. The challenge space of the argument for the new scheme is instead of size lambda^(log lambda). The consequence is that when one gets such an equation in the proof of extractability, since c1 minus c2 cannot be inverted modulo the unknown order of h unless it is minus 1 or 1, all one can say is that if c1 minus c2 divides r2 minus r1 over the integers, then this equation must be satisfied. Under the assumption that elements of small order are hard to compute except for elements of order 2, one can conclude that the expression in parentheses is of order 2, and that is the best that can be said. This expression then yields a discrete logarithm of g^2 to the base h^2. It then suffices to prove that c1 minus c2 divides r2 minus r1 with non-negligible probability under the assumptions on the group. Now, for the zero-knowledge property, it is important to choose the range of k large enough to hide the witness, but not unnecessarily large, as it impacts the size of the response. First, recall that alpha is at most 2^lambda times the group-order bound. Then c times alpha is at most 2^lambda times the group-order bound times the maximum value of the challenge. Now, if k is chosen from the set here represented in green, and r falls in the areas represented in red, then the response leaks some information about the witness alpha. In the simulation, k is chosen uniformly at random from the green set, t is set to h^k times g^(-c), and r is set to k. The statistical distance between the simulated transcript and a real one then depends on the size of the red area compared to the green one. If k is chosen from a range of size 2^lambda times the maximum value of c times alpha, then the statistical distance between the two distributions is at most 2^(-lambda+1). To argue knowledge of several discrete logarithms at once, the idea is simply to compute a linear combination of the elements.
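As an illustration, here is a sketch in Python of one honest run of the adapted Schnorr-style protocol over a toy hidden-order group. The modulus, the challenge bound C_MAX and the range sizes are hypothetical stand-ins for the actual parameters; the point is only that the response is computed over the integers, with no modular reduction, and that the mask k is drawn from a range 2^lambda times larger than the largest possible value of c times alpha.

```python
import secrets

# Toy hidden-order group: Z_N^* for a small (insecure) RSA modulus.
N = 1009 * 1013
LAMBDA = 16
ORDER_BOUND = N
C_MAX = 1 << 20                          # stand-in for the enlarged challenge space
ALPHA_MAX = (1 << LAMBDA) * ORDER_BOUND  # range the witness alpha was drawn from

h = secrets.randbelow(N - 2) + 2
alpha = secrets.randbelow(ALPHA_MAX)     # witness: the discrete log of g
g = pow(h, alpha, N)

# Prover, move 1: the mask k comes from a range 2^lambda times larger than
# the maximum value of c * alpha, so r = k + c * alpha statistically hides alpha.
k = secrets.randbelow((1 << LAMBDA) * C_MAX * ALPHA_MAX)
t = pow(h, k, N)

# Verifier: a single large challenge instead of many repeated {0,1} challenges.
c = secrets.randbelow(C_MAX)

# Prover, move 2: response computed over the integers, no modular reduction.
r = k + c * alpha

# Verifier's check, h^r = t * g^c (squared in the actual scheme).
assert pow(h, r, N) == (t * pow(g, c, N)) % N
```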
The underlying reasoning is that if the prover knows the discrete logarithm of each g_i to the base h, then she must be able to argue knowledge of a representation of the linear combination in the Z-module generated by the g_i elements. Now that integer commitments have been constructed, the next step is to build an efficient inner-product argument over the integers, or in other words, a protocol to argue that an opening (a, b) of a commitment is such that the inner product of a and b is equal to a public integer z. Actually, the argument later used for Diophantine satisfiability is one in which the inner product is also committed. It is here again important that all the bases g_i, h_i and e are in the subgroup generated by the base of the randomness, which is here f. The idea of the inner-product argument is to use the halve-and-recurse technique that appeared in Bulletproofs, itself reminiscent of techniques due to Bootle et al. at EUROCRYPT 2016. The main difficulty in the present case is that Z is not a field like Z_p, and therefore one cannot invert modulo the unknown order of f, whereas Bulletproofs heavily rely on the invertibility of Z_p elements, especially to prove extractability. As an example, consider the case in which the integer vectors are of size 2. The prover starts by committing to a first half of cross terms in u, then does the same for the other half in v, and sends u and v to the verifier, who sends back a challenge x. The prover continues by computing an integer linear combination of a1 and a2, and likewise of b1 and b2. It is important to notice that the resulting combinations a and b are of half the size of the original vectors. Now, the main observation is that by adjusting the randomness in the value t, one obtains an opening relation for a new commitment, with bases and a commitment that depend on the original bases, the original commitment c, and the challenge x.
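The algebra behind this halving step can be checked with plain integer arithmetic. The sketch below uses one possible choice of linear combinations (the exact combinations in the paper may differ): it shows that the folded inner product is entirely determined by the original inner product t and the two cross terms u and v that the prover commits to before seeing the challenge.

```python
def inner(u, v):
    """Inner product of two integer vectors."""
    return sum(a * b for a, b in zip(u, v))

a = [3, -1, 4, 7]
b = [2, 5, -6, 1]
x = 11  # the verifier's challenge

# Split each vector into halves.
aL, aR = a[:2], a[2:]
bL, bR = b[:2], b[2:]

u = inner(aL, bR)   # first half of cross terms, committed before the challenge
v = inner(aR, bL)   # second half of cross terms
t = inner(a, b)     # the claimed inner product

# Fold each vector into one of half the size, over the integers.
a_folded = [l + x * r for l, r in zip(aL, aR)]
b_folded = [r + x * l for l, r in zip(bL, bR)]

# The folded inner product is a known combination of t, u and v, so the
# verifier can recompute the new claimed value from x alone:
assert inner(a_folded, b_folded) == u + x * t + x * x * v
```

None of these computations require inverting anything modulo an unknown order, which is what makes the step usable over Z.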
Importantly, the new witness is half the size of the original one, and it is thus possible to recurse with a witness of half the size at each step. Note also that none of these computations requires inverting an integer modulo an unknown order. I also did not specify the randomness ranges for u and v, but the idea is the same as in the previous argument. The main technical issue is that the size of the randomness t grows at each step of the recursion. It means that at the last step of the protocol, the integers that the prover sends also grow. It is then important to adapt the ranges of u and v so that the argument remains statistically zero-knowledge but its size does not increase too much. Now, to understand how a witness for a higher recursion step can be extracted, consider three successful rounds with three pairwise distinct challenges and the following linear equation. The idea of this equation is simply to eliminate u and v from the equation above; it is the same idea as in Bulletproofs. The problem is that even if the challenges are pairwise distinct, the linear equation may not have a solution over the integers, whereas it would, for instance, over fields. Luckily, the following equality, involving the adjugate matrix of X, holds over the integers by Gaussian elimination. So one can express c^(2 det(X)) in the original bases, but it would be better to have an expression for c^2 without the integer determinant of X. We prove that, under the assumptions on the hidden-order group, 2 det(X) must divide all the exponents on the left-hand side of the equation. This is the first major difference with Bulletproofs in the proof of extractability. c^2 can then be expressed in the original bases, and similarly for u and v, by considering appropriate linear equations.
By plugging these representations into the equation above, one then obtains a discrete-logarithm relation in the subgroup generated by f. We again prove, under the assumptions on the group, which are vastly different from the standard discrete-logarithm assumption, that each exponent must be zero, and a witness can then be extracted for the higher step of the protocol. This is the second major difference from Bulletproofs in the proof of extractability: essentially, we prove that non-trivial discrete-logarithm relations cannot be found in the subgroup generated by a randomly sampled element f. Now, using the inner-product argument over the integers, we show how to argue knowledge of vectors a_L, a_R and a_O that satisfy a Hadamard product and linear constraints, which potentially involve integers in a vector v that are committed. This form is just a representation of an arithmetic circuit, with left and right inputs in a_L and a_R, and a vector of outputs a_O. The linear constraints, here represented by the W matrices and the constant vector c, simply ensure consistency between the depth levels of the circuit. Bootle et al. and then Bulletproofs gave logarithmic-size arguments for such relations in Z_p, but not in Z. Several technical issues then arise, again from the fact that Z is not a field, and these issues are detailed in the paper. The last step to argue Diophantine satisfiability is then to give an algorithm that turns any Diophantine equation, or rather any multivariate polynomial equation, into a Hadamard product and linear constraints of this form. We give an explicit algorithm to do so. It is inspired by Skolem's method, which dates back to the 1930s and consists in reducing the original polynomial to one of degree at most 4 by introducing new variables. Take this bivariate equation as an example.
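Since the equation shown on the slide is not recoverable from the transcript, here is a hypothetical bivariate equation, x^3 + x*y - 2 = 0, together with a Python sketch of the degree reduction: new variables u = x^2, v = x*y and w = u*x are introduced, and consistency between new and old variables is enforced by adding squared terms, yielding a single polynomial of total degree at most 4.

```python
# Hypothetical example equation: P(x, y) = x^3 + x*y - 2 = 0.
def P(x, y):
    return x**3 + x*y - 2

# Reduced polynomial: substitute u = x^2, v = x*y, w = u*x (= x^3) into P,
# then add squared consistency terms. Q has total degree at most 4, and
# over the integers Q = 0 if and only if every squared term is 0.
def Q(x, y, u, v, w):
    return ((w + v - 2)**2      # P with the new variables substituted in
            + (u - x**2)**2     # enforces u = x^2
            + (v - x*y)**2      # enforces v = x*y
            + (w - u*x)**2)     # enforces w = u*x

# A solution of P extends to a root of Q by setting the new variables honestly:
x, y = 1, 1                      # P(1, 1) = 0
assert Q(x, y, x**2, x*y, x**3) == 0

# Conversely, Q is strictly positive whenever P != 0 or the assignment is
# inconsistent:
assert Q(2, 1, 4, 2, 8) > 0      # honest aux variables, but P(2, 1) != 0
assert Q(1, 1, 5, 1, 1) > 0      # u != x^2
```

The products u = x^2, v = x*y and w = u*x are exactly the kind of constraints that the Hadamard-product form captures, while the remaining structure of Q is linear in the committed values.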
The first step is to introduce a new variable u, which represents x^2, a variable v, which represents x*y, and a variable w, which represents u*x, that is, x^3. The next step is to substitute these new variables into the original polynomial, and to enforce the relations between the new variables and the old ones by adding terms such as u minus x^2, squared. All these terms are squared, since a sum of squares is 0 if and only if each of them is 0. From this new polynomial, one can then directly infer the Hadamard product, namely u equals x^2 and v equals x*y, and the linear constraints, and one can then use the argument for the previous relation over the integers. Damgård and Fujisaki had already given a multiplication argument over committed integers. With it, if a polynomial of total degree delta requires m multiplications, then one would have to compute 2m plus 1 commitments and m consistency arguments to argue satisfiability of the related equation. The resulting communication complexity is then at least of this order, whereas with our commitments, our arguments and our polynomial-degree-reduction algorithm, the communication complexity of our argument is rather of this many bits; in many cases this amounts to an exponential decrease in the size of the argument. Here h represents the height of the polynomial. The paper applies these techniques to some of the problems mentioned in the introduction and gives estimates for the resulting argument sizes. As closing remarks, it might be worth investigating whether such arguments can be aggregated to, for instance, argue knowledge of several signatures at once, and also whether the verification time, which is for now linear in the bit length of the witness, can be significantly reduced, as this could have many other applications in different contexts. That's it for this presentation.
Thank you for your attention, and please feel free to send us an email should you have any questions.