Thank you very much. Hi everybody. So this talk is about how to remove the strong RSA assumption from arguments over the integers.

You all know what a commitment scheme is: there is a sender who has some message, and he can lock it into a box. The box hides the message, so you cannot see what is inside, but at the same time, when the sender later reveals the message that is inside the box, he cannot change his mind about the message. What we will look at today is a commitment scheme introduced in 1997 by Fujisaki and Okamoto, which has a nice property: it allows committing to a message m over a group of unknown order. This has proven very useful, and this commitment scheme was used in a large number of applications; just to name a few, it was used to build MPC, anonymous credentials, e-cash, range proofs, auctions, password-protected secret sharing, and so on. The main reason why it enjoyed all those applications: well, there are two main reasons, the first of which is that the groups of unknown order
we are talking about naturally occur quite often in crypto. If you take an RSA group where the modulus is the product of two safe primes, then the subgroup of squares of the multiplicative group Z_n^* is of unknown order, because knowing its order is equivalent to knowing the factorization. So this kind of group of unknown order naturally occurs in crypto as part of larger systems, and in these cases this commitment scheme can prove useful.

But another reason, maybe the core reason, is that if you commit to an element over a group of unknown order, intuitively the committer should be bound to the value over the integers: if he later reveals the committed value, he cannot reduce this value modulo the order of the group, which is unknown, so he somehow has to reveal this value over the integers. This has many applications when you want to deal with statements over genuine integer values. For example, consider a statement of the kind: proving that some commitment commits to a value which is bigger than 10. This is a non-algebraic statement, so it is not easy or efficient to handle with standard commitments over, say, prime-order groups; but if you can directly commit to integers, it becomes way easier. In fact, the applications I listed here are mostly not directly based on the Fujisaki-Okamoto commitment scheme itself, but on a zero-knowledge argument of knowledge of an opening of this commitment, a proof that the prover knows a witness that allows him to open a Fujisaki-Okamoto commitment. So this argument is the basis for a large number of applications. But the issue is the following: the Fujisaki-Okamoto commitment scheme itself has nice properties. It is perfectly hiding,
and it is binding under the factorization assumption. On the other hand, the security of the zero-knowledge argument of knowledge of an opening of the Fujisaki-Okamoto commitment is a bit less understood: it was proven to hold under an assumption called the strong RSA assumption. Without giving many details right now, let's just say that it is a less standard assumption than, for example, the RSA assumption, and in some sense less desirable. This means that all the applications I mentioned that rely directly on this zero-knowledge argument of knowledge also have security that reduces to the strong RSA assumption.

Our contribution in this work is to revisit the security proof for this zero-knowledge argument of knowledge, more precisely the proof of soundness, the witness-extraction procedure. We show that in fact the same protocol can be proven secure under the standard RSA assumption instead of the strong RSA assumption, and I insist on the fact that we make no change to the protocol: it is not a new protocol whose security relies on the RSA assumption. So all the applications I mentioned directly benefit from our improved security analysis, and their security can now be proven directly under the standard RSA assumption.

How do we do that? Before I get into the details, let me give some preliminaries on RSA groups. We will consider Z_n, where n is the product of two safe primes, and we will look at the subgroup of quadratic residues mod n, the multiplicative subgroup of squares. As you can see, its order depends on p - 1 and q - 1, which means that it is unknown, because knowing this order is equivalent to knowing the factorization. So what kind of assumptions can we have in such groups?
Well, the most natural one is the factorization assumption: given the modulus, it should be infeasible to find its factorization in polynomial time. Another very standard, widely studied assumption is the RSA assumption: given some u and some exponent x, it should be computationally infeasible to find some v such that v^x = u mod n. Stated that way, it is not really a single assumption; it is more like a family of assumptions, because I have not explained yet how the exponent x is sampled, and several flavors are common in crypto. Maybe the most common one in the theoretical community is to assume that the exponent x is picked at random over the entire range, up to the constraint of being coprime with phi(n), while in practice the most common choice is to pick a fixed exponent and stick with it. What we will consider in this talk is neither of the two: a variant of the RSA assumption in which the exponent is sampled at random, but from a small, polynomial-size subset. The last assumption I want to describe is the one on which previous work was based, the strong RSA assumption. The strong RSA assumption really looks like the standard RSA assumption, in the sense that the adversary also has to find an x-th root of some challenge u mod n, but the core difference is that there is no issue anymore about how to sample this exponent, because the choice of the exponent is entirely left to the adversary. So the adversary must, intuitively, solve an RSA challenge for any exponent of its choice. And what makes this assumption maybe less desirable than the standard RSA assumption is that for a single challenge, a single u, there are exponentially many solutions that the
prover could come up with, while, whatever the way x is sampled in all the flavors of the standard RSA assumption, there is only a single valid answer. Although we do not know how to break the strong RSA assumption, this makes it seem a bit less desirable as a search assumption.

So much for the preliminaries. Now let's dig into the security argument for the knowledge extraction of the zero-knowledge argument of knowledge of an opening of a Fujisaki-Okamoto commitment. You can now see what the commitment looks like, and most of you might recognize something exactly like the Pedersen commitment scheme: a commitment is g^m h^r. The only difference is that now we are working over a group of unknown order, the group of quadratic residues mod n, instead of a prime-order group as in the Pedersen scheme. And the argument of knowledge is, again, exactly the one you might be used to, the Schnorr protocol. The prover first sends a random commitment and receives a challenge (this is a sigma protocol), then computes a linear answer using his witness, his random coins, and the challenge. The verifier performs some checks on top of that, and if the checks pass, he accepts the proof: he is convinced that the prover knows some pair (m, r) such that g^m h^r is the committed value.

So how do we usually prove the soundness, the knowledge extraction, of this protocol? The standard method is to say that if the verifier is playing with a malicious prover, then he will be able to extract a valid witness using rewinding: the two last flows are repeated twice, so as to cancel the randomness introduced in the first flow, and by doing so we can compute (m, r) as follows. But here there is an issue, which is that we cannot compute inversions.
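To make the objects concrete, here is a minimal sketch of the commitment and of one honest run of the sigma protocol just described. The tiny parameters and variable names are mine, for illustration only; the talk works with symbolic n, g, h of cryptographic size.

```python
import random

# Toy parameters (insecure sizes): 23 = 2*11+1 and 47 = 2*23+1 are safe primes
p, q = 23, 47
n = p * q
h = pow(random.randrange(2, n), 2, n)   # a random square mod n
alpha = random.randrange(1, n * n)
g = pow(h, alpha, n)                    # g = h^alpha, set up by the verifier

def commit(m, r):
    """Fujisaki-Okamoto-style commitment C = g^m * h^r mod n."""
    return (pow(g, m, n) * pow(h, r, n)) % n

m, r = 5, 12                            # prover's message and randomness
C = commit(m, r)

# Sigma protocol: first flow, challenge, linear answer over the integers
x, s = random.randrange(n * n), random.randrange(n * n)
D = commit(x, s)                        # prover's random first-flow commitment
e = random.randrange(1, 2**10)          # verifier's challenge
z, t = x + e * m, s + e * r             # answer: no reduction mod the order

# Verifier's check: g^z * h^t == D * C^e (mod n)
assert (pow(g, z, n) * pow(h, t, n)) % n == (D * pow(C, e, n)) % n
```

The check passes because g^z h^t = g^(x+em) h^(s+er) = D * C^e; note that neither party ever uses the group order.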
We cannot divide, because the z and t are exponents in a group of unknown order, so it is infeasible to extract this witness directly. Instead, we have to provide a refined argument. So let me put that aside: instead of trying to prove unconditionally that this protocol is sound, I will prove that the soundness holds computationally. We replace our verifier by a simulator whose goal is to either extract a valid witness in an interaction with the malicious prover, or to solve an RSA challenge sent by some RSA oracle. If we can do one of the two, then we are fine: we can prove this way that the soundness of the protocol relies on the RSA assumption.

So again we are going to use rewinding, so the two last flows are repeated twice, and to simplify the notation a bit, z0 - z1, t0 - t1 and e0 - e1 are just renamed z, t and e. We then have a relation in which we have canceled the randomness: com^e = g^z h^t. But as we know, we cannot divide by e, so we cannot just compute z/e and t/e to get a witness. We have to consider several cases. The first one is the easy case: e divides z and e divides t. In that case we can just divide over the integers and return a valid opening of the commitment. I am glossing over some technical details here.
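The rewinding step and the easy case can be sketched as follows, again with toy parameters of my own choosing. For an honest prover the divisions are always exact, which is what the sketch exercises; the hard part of the proof is of course the malicious case.

```python
import random

# Toy setup (insecure sizes), as before
p, q = 23, 47
n = p * q
h = pow(random.randrange(2, n), 2, n)
g = pow(h, random.randrange(1, n * n), n)

m, r = 5, 12                                    # prover's witness
C = (pow(g, m, n) * pow(h, r, n)) % n

# Rewinding: same first flow D, two different challenges e0, e1
x, s = random.randrange(n * n), random.randrange(n * n)
D = (pow(g, x, n) * pow(h, s, n)) % n
e0, e1 = 777, 123
z0, t0 = x + e0 * m, s + e0 * r                 # first accepting answer
z1, t1 = x + e1 * m, s + e1 * r                 # answer after the rewind

# Cancel the randomness: com^e = g^z * h^t
z, t, e = z0 - z1, t0 - t1, e0 - e1
assert pow(C, e, n) == (pow(g, z, n) * pow(h, t, n)) % n

# Easy case: e divides z and t, so divide over the integers
assert z % e == 0 and t % e == 0
assert (z // e, t // e) == (m, r)               # the extracted witness
```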
There is also the issue of the sign, which is just a technicality, but essentially we can use z/e and t/e as a witness and we are done. Then there is the non-trivial case, the second case, where either e does not divide z or it does not divide t, so I cannot compute my witness directly. For this case there is a nice argument. Before that, let me rewrite g^z h^t as h^(alpha z + t), where g = h^alpha. There is a very nice argument, used in 2002 by Damgård and Fujisaki, showing that in this case, with probability at least one half, e cannot divide alpha z + t. Why does it hold? Because this alpha was picked at random by the verifier, before the protocol, in a large enough set, so that h^alpha intuitively leaks only the first half of the bits of alpha, since alpha is way larger than the order of the group of the exponents. This probability is information-theoretic: whatever the adversary does, when e does not divide z or does not divide t, there is probability at least one half that e does not divide alpha z + t. And why do we want that? Because, again without showing all the details, in this case we can apply a nice trick, known as Shamir's GCD-in-the-exponent trick, and find some pi and some v such that v^pi = h, where h is essentially our challenge, up to some sign. I am showing the formula for pi because it will be important: notice that pi divides e; pi is some value that divides e, which is the difference of the two challenges sent by the simulator, again up to some sign. When this happens, we can solve a strong RSA challenge with exponent pi. But as you can see, this exponent pi depends on z and t; it depends on the answer of the prover. So intuitively it is not obvious that we could force the malicious prover to solve an RSA challenge for a fixed exponent, for a challenge of our choice.
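Leaving aside the sign and gcd subtleties of the actual proof, the coprime case of Shamir's GCD-in-the-exponent trick is easy to demonstrate: given y with y^a = x^b mod n and gcd(a, b) = 1, Bezout coefficients u, v with u*a + v*b = 1 give w = x^u * y^v satisfying w^a = x^(ua) * x^(vb) = x. A toy sketch (the instance y is built using the group order, which only this toy example knows; the adversary never does):

```python
# Shamir's GCD-in-the-exponent trick, coprime case, with toy parameters.

def egcd(a, b):
    """Extended Euclid: returns (g, u, v) with u*a + v*b == g."""
    if b == 0:
        return a, 1, 0
    g, u, v = egcd(b, a % b)
    return g, v, u - (a // b) * v

n = 23 * 47                      # toy RSA modulus (insecure size)
x = pow(5, 2, n)                 # a square mod n
a, b = 7, 9                      # coprime exponents: gcd(7, 9) == 1

# Build an instance y with y^a == x^b, cheating with the known toy
# group order 11 * 23 of the squares mod n
order = 11 * 23
y = pow(x, b * pow(a, -1, order) % order, n)
assert pow(y, a, n) == pow(x, b, n)

# The trick: with u*a + v*b == 1, w = x^u * y^v is an a-th root of x
g, u, v = egcd(a, b)
assert g == 1
w = (pow(x, u, n) * pow(y, v, n)) % n
assert pow(w, a, n) == x
```

Negative Bezout coefficients are handled by Python's three-argument pow, which computes modular inverses for negative exponents (Python 3.8+).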
Somehow he has some freedom in choosing the exponent with which he will solve the challenge, and in fact we do not prove that we can force him to use a fixed exponent. Our result starts from this point and relies on a crucial observation, which is that pi cannot be too large: the exponent with which the malicious prover solves the challenge cannot be bigger than 8/epsilon, where epsilon is the success probability of this malicious prover. Why can it not be that large? Recall, as I showed you before, that pi divides e. In this case, we rewind the protocol once more, a third time, hoping to get a third accepting transcript; this happens with some non-negligible probability. Why do we do so? Because there was this com^e where we could not divide by e, so I am just removing com entirely: now I have two equations, com^e = g^z h^t and the same one from the new rewind, and by taking a cross product I can remove the part that depends on the commitment and get a relation g^a = h^b for some pair (a, b). With this relation, there is a known theorem saying that if we can find some pair (a, b) such that g^a = h^b, then we can factor, unless the pair we find is trivial, unless a = b = 0. This is the bad case. So what I am going to show is that when pi is too big, this bad case cannot always happen, and if it cannot always happen, then we can factor with some non-negligible probability. Why can it not happen? Well, let's just rewrite what it means for a and b to both be 0.
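As an aside, the factoring step invoked by that theorem is classical: since the simulator knows alpha with g = h^alpha, a nontrivial g^a = h^b gives a nonzero multiple of the order of h. The cleanest variant of the final step, sketched below with toy parameters of mine, factors n from a multiple of lcm(p-1, q-1); it is the same standard trick used to recover p and q from an RSA key pair.

```python
import math
import random

def factor_with_order_multiple(n, M):
    """Toy sketch: factor n = p*q given a nonzero multiple M of lcm(p-1, q-1).
    Write M = 2^s * d with d odd, then hunt for a nontrivial square root
    of 1 among a^d, a^(2d), ... for random bases a."""
    d, s = M, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    while True:
        a = random.randrange(2, n - 1)
        g = math.gcd(a, n)
        if g > 1:                        # lucky: a already shares a factor
            return g
        x = pow(a, d, n)
        for _ in range(s):
            y = pow(x, 2, n)
            if y == 1 and x != 1 and x != n - 1:
                return math.gcd(x - 1, n)   # nontrivial root of 1 splits n
            x = y

n = 23 * 47
M = 3 * math.lcm(22, 46)   # any nonzero multiple of lcm(p-1, q-1) works
p = factor_with_order_multiple(n, M)
assert p in (23, 47) and n % p == 0
```

Each random base succeeds with probability at least one half, so the loop terminates quickly in expectation.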
Essentially, what this means is that the exponent pi will always be the same: the exponent with which the malicious prover solves the strong RSA challenge remains consistently the same across several rewinds. But it cannot easily remain consistently the same. Why? Because we know that pi divides e; similarly, pi' divides e' by definition, it is constructed that way. But e' is random: pi is some fixed value now, we are doing a new rewind, and this e' is a new difference between challenges picked uniformly at random by the simulator. So what is the probability that some fixed value divides a big random value? Roughly the inverse of that value. So essentially, the probability that pi divides a random e' is roughly epsilon/8. But out of all transcripts, a fraction epsilon are winning transcripts for the malicious prover. So, as only epsilon/8 of them are transcripts in which pi divides e', most of them must be transcripts in which pi does not divide e', and in this case pi cannot be equal to pi'. To sum up, it gets a bit messy, but essentially we can show that a large fraction of the winning transcripts of the malicious prover cannot possibly satisfy pi = pi', because e' is a bit too random for that, and pi is too big to consistently divide a big random value. So if we are in the situation where pi is too big, then we can factor the modulus with one-over-polynomial probability, and under the factorization assumption we know that this case cannot happen. If this case cannot happen, then we can assume that pi is small, smaller than 8/epsilon. But then we are done. Why are we done? Because remember that we are considering a variant of the RSA assumption
where the exponent is picked uniformly at random in a small set. What small set? You can guess it: between 2 and 8/epsilon. And what we ensure is that during the interaction between the simulator and the malicious prover, no information at all leaks on this exponent x, so this exponent x remains uniformly random. Our malicious prover will indeed solve a strong RSA challenge, but there is some non-negligible probability that the strong RSA challenge he solves is exactly the RSA challenge we received from the oracle, because the exponent is small enough. So with a probability proportional to epsilon, he will solve our RSA challenge, even without knowing that he did. Overall, we can either extract a pair (m, r) that is a witness for this commitment, or, with some probability related to epsilon cubed, solve an RSA challenge.

So that was it, essentially, for this result. I focused on the case of the zero-knowledge argument of knowledge of an opening, but it extends to essentially any zero-knowledge argument for integer relations, for relations between committed values over the integers, and in particular it extends to the very interesting case of range proofs. A range proof proves that some committed value belongs to some range, to some interval, and range proofs have many interesting applications.
In particular, they are used in many of the applications I mentioned in one of the first slides. So essentially, our results extend to a large number of systems that rely in one way or another on zero-knowledge arguments over the integers, and show that their security can be based on the RSA assumption instead of the strong RSA assumption.

The paper also contains a second part, another contribution, which is quite independent: rather than looking at the security of zero-knowledge arguments over the integers, we focus on their efficiency. I won't have time to cover that in detail, but I invite you to read the paper, because it is also an interesting part. Essentially, we show that we can convert a Fujisaki-Okamoto commitment, which is an integer commitment, into what we call a Gennaro commitment, because we found it for the first time in a paper of Gennaro, which is a commitment modulo a small prime. We use that inside a zero-knowledge argument of knowledge to reduce the size of the objects we are working on, but only after the prover has committed over the integers. So essentially, the prover is bound to values over the integers, but all the verification is performed by the verifier modulo a small prime value, because intermediately we convert those commitments into commitments modulo small values. By doing so, we can make the verification of those zero-knowledge proofs way more efficient: for example, for the case of range proofs, the computation of the verifier is around ten times more efficient.

Just to mention an interesting open problem: we know that there are many results on building short signatures whose security reduces to the RSA assumption, but those signatures are usually non-algebraic and computationally not that efficient.
It is not easy to build proofs on top of them, while we have very good strong-RSA-based signatures, which are algebraic and short. So, for example, could it be possible to adapt our techniques to build a signature in the style of the strong-RSA-based ones, but whose security could in fact be reduced to the standard RSA assumption? That concludes my talk. Thank you for your attention. Any questions?

Is your reduction uniform or non-uniform, first? And then, what is the concrete security loss of the reduction? Is it terrible, or is it fine?

So, uniform or non-uniform, I am not sure, because I am not sure exactly what it means precisely here; if you want, let's take it offline. Okay, thank you. But the concrete cost of the reduction is essentially epsilon cubed over 100, where epsilon is the winning probability of the adversary. So essentially we lose a factor epsilon compared to the previous reduction that reduces to strong RSA.