Hello, I am Alexandros Zacharakis from Pompeu Fabra University, and we will be looking at updatable inner product arguments with a logarithmic verifier and their applications. This is joint work with Vanesa Daza and Carla Ràfols.

First, let me recall the setting of non-interactive zero-knowledge arguments. We have two parties, a prover and a verifier, and they share a common reference string (CRS), which for now we assume is generated by a trusted party. The prover claims that it knows a witness w for a statement x. The prover creates a proof π, a single static string, and sends it to the verifier, who verifies it. If this proof is valid, the verifier is convinced that the prover knows the witness w for the statement x. So what are the properties we want from this protocol? First, we need completeness, which means that the honest prover always succeeds in convincing the verifier. We want knowledge soundness, which means that if a prover creates a proof that verifies, then it should know the corresponding witness; for this property we only consider computationally bounded provers, that is, we require that it holds against all polynomial-time algorithms. And zero knowledge, which states that the verifier learns nothing more than the fact that the prover knows the witness w corresponding to x. Ideally we also want succinctness, which means that the size of the proof should be very small: it should be sublinear in the witness w.

These are all nice properties but, as we know, no trusted party exists in the real world, so the issue remains of how we create this common reference string in a way that makes these properties hold. In the literature there are two approaches. The first is to use a transparent CRS, meaning that the CRS is uniformly sampled from a set. The second is to use multiparty computation (MPC) techniques: the CRS is collectively computed by some committee, and we just need to assume that at least one member of the committee behaves honestly. In the former, transparent family, things are nice: the CRS can be assumed trusted, since we can easily sample a uniform string without anybody manipulating the randomness. But in general these schemes are less efficient, either as far as the prover is concerned or in the proof size and verification. There are many works in this family, such as STARKs, Ligero, Hyrax, etc. On the other hand, in the MPC-based approach the schemes are in general more efficient; in particular, the proof size is constant and independent of the NP witness. But we need to rely on non-standard assumptions, such as knowledge-type assumptions or idealized models for groups, and the MPC protocol itself is very expensive: it is really hard to make sure that it is run properly and that nothing malicious happens, which makes CRS generation difficult. Another downside is that each CRS created this way is language dependent, so every time we need to prove statements for a different language we need to rerun this expensive procedure. A lot of schemes exist in this family as well, such as GGPR13, Groth16 and many more.
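To keep the syntax in mind for the rest of the talk, here is the basic three-algorithm interface with the four properties attached as documentation. This is a purely illustrative sketch; all names are mine, not the paper's.

```python
# Illustrative shape of a NIZK argument system with a CRS.
from typing import Any

def setup(security_parameter: int) -> Any:
    """Output the common reference string (CRS).

    Who runs this step honestly is the central question of the talk:
    a trusted party, a transparent public-coin sampler, or an MPC
    ceremony (possibly universal and updatable)."""
    ...

def prove(crs: Any, x: Any, w: Any) -> Any:
    """Produce a proof pi that the prover knows a witness w for x.

    Completeness: an honest prover always convinces the verifier.
    Zero knowledge: pi reveals nothing about w beyond its existence.
    Succinctness (ideally): |pi| is sublinear in |w|."""
    ...

def verify(crs: Any, x: Any, pi: Any) -> bool:
    """Accept or reject pi.

    Knowledge soundness: any polynomial-time prover that makes this
    accept must actually "know" a corresponding witness w."""
    ...
```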
Now we can divide the MPC family into two sub-families: the traditional one, and the universal and updatable one, which was introduced by Groth et al. at CRYPTO 2018. In the traditional one there is a committee that executes a standard MPC protocol: the members exchange messages with each other, and after a number of rounds they collectively output the common reference string. In the universal and updatable model, on the other hand, we essentially have an interactive version of the MPC protocol for creating parameters. What happens is that initially one party creates parameters on its own and publishes them; then another party can take these parameters, update them, and publish new parameters. It must also give a proof that it acted correctly, that is, that it included the randomness introduced by the previous parties. At any point we can stop this procedure and use the CRS, and we just need to assume that one party updated honestly. This makes creating parameters much easier, since no very expensive setup ceremonies need to occur, and it is a much cleaner approach. Another good thing is that the parameters are universal, in the sense that we can instantiate any NP language given just one set of parameters, so we do not need to create new parameters every time we need a non-interactive zero-knowledge argument for a different language. Of course this has a limitation: the statement size, the language size, is bounded by the parameter size we choose initially.

Now a few words about commitments. Again we have two entities, which we again call prover and verifier, and they share a commitment key, which again we assume to be trusted. In general, in commitment schemes the commitment key is sampled by the verifier, but it will be convenient to assume it is given by a trusted third party. The prover has a value x it wants to commit to, so using the commitment key it computes a commitment c to this value x and gives it to the verifier. Then other things happen: the prover and the verifier execute some protocol or do other work, and after some time the prover decommits, that is, it reveals the value it committed to together with some opening information. The verifier can check this, and if the check succeeds, it can be sure that the prover was consistent between these rounds. The properties we want here are binding and hiding. Binding means that once a commitment c is given, no prover can compute two different openings for it, and hiding means that nothing about the committed value x is revealed to the verifier from seeing the commitment c.
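For concreteness, here is a toy Pedersen-style commitment implementing exactly this commit/decommit flow. The group is absurdly small just to keep the example self-contained, and the names are mine, not the paper's (which commits to whole vectors).

```python
# Toy Pedersen commitment: c = g^x * h^r in a prime-order group.
import secrets

p, q = 23, 11        # p = 2q + 1 is a (tiny) safe prime; q is the subgroup order
g = 4                # generator of the order-q subgroup of quadratic residues mod p
h = pow(g, 7, p)     # in practice h must be sampled so that log_g(h) is unknown

def commit(x: int):
    """Commit to x with fresh randomness r (r is what makes it hiding)."""
    r = secrets.randbelow(q)
    return (pow(g, x, p) * pow(h, r, p)) % p, r

def open_ok(c: int, x: int, r: int) -> bool:
    """The verifier's check when the prover decommits to (x, r)."""
    return c == (pow(g, x, p) * pow(h, r, p)) % p

c, r = commit(5)
assert open_ok(c, 5, r)
# Binding: two different openings (x, r) != (x', r') of the same c
# would reveal log_g(h) = (x - x') / (r' - r) mod q, i.e. break
# the discrete logarithm assumption.
```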
So let's review the literature on updatable SNARKs. All the schemes in the literature have more or less the same properties. In this table we omit the initial updatable non-interactive zero-knowledge argument of Groth et al., since it has a quadratic prover, which is very inefficient. As we can see, current work in the literature has a quasi-linear prover, all schemes feature a constant-size proof, and they rely on either the algebraic group model, which is an idealized model for groups, or some knowledge-type assumption such as knowledge of exponents. In our work we construct an updatable NIZK which needs only the standard discrete logarithm assumption over asymmetric groups in one instantiation, and a q-variant of this assumption in the other. We achieve this by sacrificing the optimal proof size: our proof is logarithmic in the size of the witness. Our prover is also linear, but it makes heavy use of public-key operations, which are in general concretely expensive.

Before continuing, I will introduce some notation for representing group elements. We fix a generator g and denote the element g raised to the power x as the element x inside brackets, [x]. We extend this notation to vectors of group elements in the natural way, and when we are in bilinear groups we use subscripts to denote which group an element belongs to.

The starting point for our construction is the BCC et al. 2016 construction, which was later concretely improved in Bulletproofs. This scheme is based on the discrete logarithm assumption; it features a transparent setup and a logarithmic proof size, but the bad thing is that verification is linear in the witness size. We manage to reduce this linear verification time to logarithmic, but we sacrifice the transparent setup for a universal one. Let me give a high-level overview of how the scheme works. Initially we have a language, which we encode as an instance of circuit satisfiability; this is the problem we build the NIZK for. We have an arithmetic circuit, which we can naturally translate into a set of quadratic and linear constraints, and we can combine this new problem, the constraint satisfaction problem, with Pedersen commitments. Given some random challenges issued by the verifier, we reduce this to an instance of the inner product language. In this language the statement is that, given commitments α, β and an element z, the prover claims it knows openings a and b with respect to the commitment keys r and s such that the inner product of a and b equals z. To prove circuit satisfiability we just need to prove that this inner product relation holds, and it is important to note that the inner product argument does not need to be zero knowledge: to get a non-interactive zero-knowledge argument for CSAT, we reduce it to an instance of the inner product language, which itself is not zero knowledge.

So how do BCC et al. solve this inner product problem? We have our initial statement, the verifier issues a random challenge to the prover, and they create a new statement for the same language. The good thing is that this new statement has half the size of the original one or, more generally, is a constant factor smaller. And if the new statement is true, then with overwhelming probability over the verifier's random challenge the old one is true as well; security essentially reduces to the binding property of the commitment, specifically to discrete logarithm when we instantiate with Pedersen commitments.
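To see where the halving comes from, here is the vector side of one folding round, assuming a Bulletproofs-style fold a' = a_L + x·a_R, b' = b_L + x⁻¹·b_R (variable names are mine). In the real argument a and b stay hidden inside the commitments and the keys are folded correspondingly; the sketch only checks the inner-product bookkeeping.

```python
# One folding round of the inner-product reduction, over a toy field.
q = 2**61 - 1  # toy prime modulus; a real system works mod the group order

def inner(a, b):
    return sum(u * v for u, v in zip(a, b)) % q

def fold(a, b, x):
    """Halve the statement <a, b> = z using the verifier's challenge x."""
    n = len(a) // 2
    aL, aR, bL, bR = a[:n], a[n:], b[:n], b[n:]
    L, R = inner(aL, bR), inner(aR, bL)   # cross terms, sent before seeing x
    x_inv = pow(x, -1, q)
    a2 = [(u + x * v) % q for u, v in zip(aL, aR)]
    b2 = [(u + x_inv * v) % q for u, v in zip(bL, bR)]
    return a2, b2, L, R

a, b = [3, 1, 4, 1], [2, 7, 1, 8]
z = inner(a, b)
x = 123456789                              # verifier's random challenge
a2, b2, L, R = fold(a, b, x)
# The new claimed value is z' = z + x^{-1}*L + x*R, for vectors of half length.
assert inner(a2, b2) == (z + pow(x, -1, q) * L + x * R) % q
```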
Now we have our initial statement and we have reduced it to one of half the size, but the new statement is in the same language. Why stop there? We can proceed recursively: the verifier keeps sending random challenges and reducing the problem to a new one, and after a logarithmic number of rounds the prover can just reveal the witness. By then the statement has constant size, and the verifier can trivially verify the witness in constant time. Again, note that this is not a problem because we do not need zero knowledge here, so the prover can indeed give the witness. The bad thing is that the verifier is linear in n. The intuition is that in Pedersen commitments the commitment key is uniformly sampled, so it has full entropy, and in every round the verifier needs to fold the commitment key and compute a new one.

Okay, so this is what we attack. We parameterize the Pedersen commitment by sampling keys under different distributions; we will introduce two distributions for this. We introduce a minimal structure in the commitment key that allows two things: first, we can efficiently update the commitment key, in a sense that will become clear later; and second, there exists a trapdoor such that holding it allows the verifier to do constant work in each round, so in total the verifier is logarithmic. How do we do this? Essentially, using the trapdoor the verifier knows a succinct representation of the statement that does not involve having the whole commitment key in the statement. Intuitively, you can see that security holds if these distributions are indistinguishable from the uniform one, since a prover in the new scheme has no advantage over the standard prover that was used with uniform commitment keys. So again security reduces to the binding property of the commitments. The bad thing is that having to know this trapdoor means we work in a designated-verifier setting, whereas ideally we would like publicly verifiable proofs.

So let's introduce the two distributions. First we have the multilinear (ML) distribution. We sample a logarithmic number of elements of Z_p, logarithmic in the dimension of the vector we want to commit to, and we construct all the multilinear monomials of this sample, which we denote x-bar; the commitment key is these elements encoded in the group. Now, if we think of a vector as the coefficients of a multilinear polynomial, then a commitment to a vector a is essentially this polynomial evaluated at the secret point x_1, x_2, ..., x_{log n}, and the binding property can be reduced to the standard discrete logarithm assumption with a logarithmic security loss. We also have the PW distribution, where we do the same thing with univariate polynomials: we pick a single element and compute its powers up to n-1. Here the binding property reduces to a q-type variant of the discrete logarithm assumption.
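Here is a sketch of sampling the two key distributions at the exponent level; the published commitment key would be these exponents encoded in the group, i.e. the bracket notation [·] applied componentwise. Function names are illustrative.

```python
# The ML and PW commitment-key distributions, exponents only.
import secrets
from itertools import product

q = 2**61 - 1  # toy group order

def ml_key(logn):
    """ML distribution: log n secrets x_1..x_logn; the key consists of
    all 2^logn multilinear monomials prod_i x_i^{b_i}, b in {0,1}^logn.
    Committing to a vector a under this key evaluates the multilinear
    polynomial with coefficients a at the secret point (x_1..x_logn)."""
    xs = [secrets.randbelow(q) for _ in range(logn)]   # the trapdoor
    key = []
    for bits in product([0, 1], repeat=logn):
        m = 1
        for xi, b in zip(xs, bits):
            if b:
                m = m * xi % q
        key.append(m)
    return xs, key

def pw_key(n):
    """PW distribution: one secret x; the key is 1, x, x^2, ..., x^{n-1},
    i.e. commitments are univariate polynomial evaluations at x."""
    x = secrets.randbelow(q)
    return x, [pow(x, i, q) for i in range(n)]

trapdoor, key = ml_key(3)    # a key for vectors of dimension 8
```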
Now, the inner product statement can be seen equivalently in two ways. On the left-hand side we see how the prover sees it: the prover sees the whole commitment key, essentially what it saw before. On the right-hand side we have the verifier's view. Knowing the trapdoor, the verifier can encode the statement very differently: it can say that the prover claims it knows two polynomials, represented by two vectors, such that their evaluations at the secret points x_1, ..., x_{log n} and y_1, ..., y_{log n} are α and β, and the inner product of their coefficients equals z. We can see that the latter statement is much smaller, it is logarithmic, while the former is linear. So we have these two different representations, and the prover remains essentially the same: it just sees commitment keys that are indistinguishable from uniform, so it cannot do much more, while the verifier does not read the whole statement as the prover does; it uses its trapdoor to encode it succinctly. So again, intuitively, security holds under the indistinguishability of the two distributions, but we can reduce it essentially to the binding property of the commitment.

Now, how do we achieve public verifiability? We compile the previous construction using pairings, that is, groups equipped with a bilinear map, and instead of having the verifier know the trapdoor, we encode the trapdoor in group 2. The new key is determined by the old key and the challenge; the prover has the whole key, so it can compute the new one, but it also gives some group elements that help the verifier check it succinctly. Essentially, the verifier asserts that the new statement is the correct one with respect to its challenge by using the pairing operation as a DDH oracle. So again security holds under the binding property of the commitment. This is now the new picture: we compute the same elements in a pairing group, we give the whole commitment key in group 1, and we give the trapdoor encoded in group 2. The important thing is that, given only the verification part of the key, the commitment key is fully determined. So we can again write a succinct statement of the form "I know an opening for a commitment that satisfies this property", because the "I know an opening for a commitment" part can be represented using just the logarithmically many elements encoded in group 2.

We also introduce updatable commitments. This is essentially the same as in the non-interactive zero-knowledge argument setting, but for commitment keys. How do we model the binding property? The adversary first samples a commitment key, which we denote ck1; the challenger verifies its structure, updates the key, and gives the result to the adversary. The adversary now has ck2 and is challenged to output a commitment key, a commitment, and two openings such that the new commitment key is an update of the previous one with respect to the verifier's update algorithm, the two openings for the commitment it gave are different, and both pass the verification procedure. Now, this is how we run these operations. These are the commitment keys, and in both cases, to verify the setup we just check a bunch of DDH relations that should hold in the commitment key. To update, we essentially pick a second commitment key for which we know the trapdoor, say t_i, and pairwise multiply it into the previous key; we can do that since we know the trapdoor. To show that we did not ignore the previous key when computing the new one, so that the randomness of the previous commitment key is still there, we use standard proofs of knowledge to show that the discrete log between the new commitment key and the old one is known to the updater. The same holds for the PW distribution, but there we have just one element representing the key in group 2, and we essentially do the same thing.
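My reading of the ML update step, again at the exponent level: the updater samples a fresh trapdoor (t_1, ..., t_{log n}) and multiplies it pairwise into the old key, so the old element g^{m_x} becomes g^{m_x · m_t} and the joint trapdoor becomes (x_i · t_i). The sketch below checks that this matches generating a key directly from the combined trapdoor; the names and details here are assumptions for illustration, not the paper's exact algorithm.

```python
# Updating an ML commitment key with a fresh trapdoor, exponents only.
import secrets
from itertools import product

q = 2**61 - 1

def monomials(xs):
    """All multilinear monomials prod_i xs[i]^{b_i} in a fixed order."""
    out = []
    for bits in product([0, 1], repeat=len(xs)):
        m = 1
        for xi, b in zip(xs, bits):
            if b:
                m = m * xi % q
        out.append(m)
    return out

def update(old_key, logn):
    ts = [secrets.randbelow(q) for _ in range(logn)]  # fresh trapdoor share
    fresh = monomials(ts)
    # In the group: raise each old key element to the matching fresh monomial.
    new_key = [a * t % q for a, t in zip(old_key, fresh)]
    return new_key, ts  # ts is what the proof of correct update is about

xs = [secrets.randbelow(q) for _ in range(3)]
key1 = monomials(xs)
key2, ts = update(key1, 3)
# key2 is exactly the key generated from the combined trapdoor (x_i * t_i),
# so the randomness of every previous updater survives in the final key.
assert key2 == monomials([x * t % q for x, t in zip(xs, ts)])
```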
Now, in the ML case, to prove a correct update we need logarithmically many discrete log proofs of knowledge. We can use standard sigma protocols for this, but if we want we can reduce the proof size even further by using transparent schemes such as Bulletproofs, so instead of O(log n) we can have O(log log n). In the PW case the update proof is constant-size, which is optimal.

Now, this is the new picture of how we handle the CSAT case. Again we start with an arithmetic circuit, translate it to a set of constraints, and use the new, parameterized Pedersen commitment key. Using this we can deterministically derive a CRS for the specific language we want to prove, and, given some challenges from the verifier, we can reduce this to an inner product statement as before. As we saw, the inner product argument can be verified in logarithmic time with logarithmic proof size, which is good, but the bad thing is that the reduction itself is linear in the circuit size. This is a problem that existed in previous works as well, and to handle it we proceed as in Sonic: we preprocess the circuit. What we need is a circuit with bounded fan-in and fan-out, and then we delegate the computation of the new statement to the prover by using a permutation argument. This is very similar to Sonic, essentially the same technique, but it is quite technical and we need to translate it to our setting, which is quite different from Sonic's. The good thing is that in total we can indeed achieve sublinear verification time, but now we have some overhead for the prover in this delegation part, which concretely can make things much slower.

But what happens for specific languages that have some structure? In general we start with a language, translate it to a CSAT instance, translate this CSAT instance to a set of constraints, and proceed as before. But it might be the case that some languages can be naturally represented as a set of constraints directly. If so, we can skip the step of reducing to CSAT and directly write the constraint system. If this system is simple, then no delegation is needed: the verifier can just compute the statement itself. This should actually be the case for many natural languages: they can be naturally represented as constraints, and usually these constraints are not very complex. We demonstrate this in our work by presenting a proof system for range proofs. We follow the blueprint of Bulletproofs, but we manage to avoid the delegation part and use our new inner product argument, which results in an exponentially faster verifier.

So that's it. Thank you for watching this video. I will be happy to answer any questions or receive any feedback, so if you can see my email address, please contact me. Thanks.