I'm Patrick Towa from IBM Research Zurich, and this presentation is on public-key generation with verifiable randomness. It is based on joint work with Olivier Blazy and Damien Vergnaud. Every cryptosystem starts by specifying its key-generation algorithm, and it is usually assumed that it has access to perfect randomness. But in practice, true randomness is rare, and one may only have access to very few truly random bits. This gap between theory and practice can have serious consequences for the concrete security of schemes. For example, Lenstra et al. did a sanity check in 2012 and found that a substantial number of RSA moduli on the Internet shared common primes. It means that if the key owners were to realize that, they could sign messages on behalf of others. Heninger et al. later identified the cause as low entropy at boot time, when the keys were generated. Other consequences of randomness failures in practice were demonstrated by Nemec et al., who used Coppersmith's method to efficiently recover private keys from RSA public keys, because flawed implementations only selected specific primes instead of uniformly random ones. Those were the so-called ROCA attacks, and several certified devices were shown to be vulnerable.

Juels and Guajardo already considered this problem of randomness failures in the generation of asymmetric keys at PKC 2002. They introduced an authority, here Bob on the right, that is in charge of providing the user, Alice on the left, with extra randomness, so that they can together generate keys that are close to uniform, and Bob should of course only learn the public key. The model even considered adversarial sources of randomness. Please note that the goal here is to guarantee to the end user, the human, that her keys were generated with high-entropy randomness, and that her potentially flawed implementation has not weakened the key-generation procedure and leaked information about her secret key, be it intentionally or not. Of course the computer could leak the secret key in other ways, but Bob would only certify keys that he knows to have been generated with high-entropy randomness.

The first requirement of such a protocol is that if Alice has access to high-entropy randomness, then even Bob should not have any information about her secret key. The second is that if Alice or Bob has access to high-entropy randomness, then no adversary other than Bob can infer more information about the resulting secret key than he would have if the key were generated according to the specified key-generation algorithm. The last one is that if Bob has access to high-entropy randomness, then Alice's computer cannot influence the generation of the keys and potentially leak information via the public keys, meaning that it cannot use the public keys as covert channels. In this sense, Bob assures Alice, the human, that she can securely use her keys. And the ROCA attacks precisely exploited the information leaked via public keys to efficiently recover the private keys. On the other hand, Juels and Guajardo's protocol for RSA keys did not exactly guarantee the third property, as covert channels with a capacity of log-lambda bits were still possible, which may actually pave the way for such attacks. In addition to that, the model did not consider multiple sessions that could share low-quality randomness, even though that is precisely what caused the vulnerabilities highlighted by Lenstra et al.
To understand the difficulty of the problem, consider the simple case of generating discrete-log keys in a group of prime order p. A natural idea would be to have Bob send his randomness x_B to Alice, and Alice would simply compute the secret key x as x_B plus her own randomness x_A modulo p. Alice would then compute and send back the public key y = g^x, and she would also prove to Bob that she knows her part x_A of the secret key, and Bob would simply verify the proof. The issue with this approach is that Alice could pick a specific value for the secret key after seeing Bob's value, and this would violate the third requirement; the sketch after this paragraph illustrates the attack. Another idea would then be to have Alice first commit to her randomness before Bob sends x_B, and then, in the proof, Alice would also show that her part of the secret key is what was committed in the first round. But now the problem is that the properties of commitment schemes and zero-knowledge proofs are only guaranteed with perfect randomness, and Alice does not have access to it in this more realistic model. This is, by the way, an aspect that was overlooked in Juels and Guajardo's protocols. It then turns out that the case of discrete-log keys is actually not as simple as one might expect.

Our first contribution is a new model for this problem, which considers multiple sessions and adversarial sources of randomness. The adversary is two-stage, and the role of the first one is to provide randomness to the parties. It can impersonate either of them during multiple concurrent sessions, eavesdrop on the communication, and even change the messages sent by the parties. Then, at the end of the game, the second adversary must distinguish the public key generated in one of these sessions from a key generated with the key-generation algorithm on uniform randomness. It should be stressed that the two adversaries can only agree on a common strategy at the beginning of the game and cannot communicate afterwards. The reason is that, in the protocol, Alice would have to prove in some way that she involved Bob's randomness in the generation of the key, and this proof could then be used to leak information about the secret key, which the first adversary could use to send information to the second. Alice could, for instance, restart the proof until its first three bits match those of the secret key, and nothing can be done about this. So this restriction is a minimal requirement if one wants to suppress subliminal or covert channels. It could correspond to the fact that, in practice, an engineer who implemented a faulty algorithm for Alice may not necessarily also be able to eavesdrop on her communication with Bob later on. It is also the reason why our model is not in the UC framework and does not guarantee composition, since one would then have to consider local adversaries for the same reason, and it would have changed the target of the paper. Another important observation is that Alice can, in any case, halt the protocol and restart it if the public key does not match a certain pattern, for instance if the key does not start with three zero bits; in other words, she can do a form of rejection sampling. Nothing can be done about this either, and it is the only narrow-band subliminal channel that is allowed by the model. In practice, this could be prevented by having Bob charge Alice for each key generation, or raise a complaint if there are too many requests in a short period of time.
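To make the flaw in the first, naive idea concrete, here is a minimal Python sketch with toy parameters (the function names are illustrative, not from the paper, and the proof of knowledge of x_A is omitted):

    import secrets

    # Toy group: q = 2p + 1 with p and q prime; g = 4 generates the
    # subgroup of quadratic residues of prime order p in Z_q*.
    p, q, g = 1019, 2039, 4

    def bob_round():
        return secrets.randbelow(p)       # Bob's randomness x_B, sent in the clear

    def honest_alice(x_B):
        x_A = secrets.randbelow(p)
        x = (x_A + x_B) % p               # secret key
        return x, pow(g, x, q)            # public key y = g^x

    def malicious_alice(x_B, target=42):
        # Alice chooses x_A only *after* seeing x_B, steering the secret key
        # to `target` while still being able to prove knowledge of x_A.
        x_A = (target - x_B) % p
        return (x_A + x_B) % p, pow(g, target, q)

    x, y = malicious_alice(bob_round())
    assert x == 42                        # the secret key is fully biased

Committing to x_A before x_B is revealed rules out this particular attack, but, as discussed next, the commitment and the proof then themselves rely on randomness that Alice may not have.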
Now that the model has been established, consider again the problem of discrete-log keys. The idea is now to have Alice extract two random strings from her original one: one that will be her partial secret key x_A', and another that will be used to commit to x_A'. The commitment scheme must here be extractable to be able to carry out the security proof. After Alice receives Bob's randomness, she extracts a secret key x from these sources of randomness and computes y = g^x. She then proves to Bob that she extracted the secret key from Bob's randomness and from the one she committed to in the first round, and Bob can verify the proof using the commitment from the first round. The caveat here is that deterministic extractors for all sources do not exist, so one must either use a random oracle, or universal computational extractors in the plain model. On the positive side, this idea for discrete-log keys can be generalized to all key-generation algorithms that can be modeled as probabilistic circuits, with no other restriction. But it does not apply to factoring-based keys, which are still what is mostly used in practice. So the goal is now to construct an efficient protocol for RSA keys.

The NIST standard for RSA key generation is as follows: first choose two distinct large random primes p and q, compute N = pq and phi(N), then choose e larger than 2^16 that is coprime with phi(N), and compute d as the inverse of e modulo phi(N). Then set the public key as (N, e) and the secret key as the factorization of N. But there is some ambiguity in this specification. The first point is what is meant by "large"; our interpretation is that there is a parameter b that fixes the length of p and q. The second is that there is an algorithm PrimeTest, which runs a potentially randomized primality-test algorithm on p and q, and also checks that e, which is usually fixed, for instance to 2^16 + 1, is coprime with phi(N). The third is that p and q may be required to satisfy additional properties, such as being safe primes or congruent to 3 modulo 4. The last one, and an important one, is that p and q are really the first two primes that satisfy the required conditions, so that specific ones cannot have been selected, as was the case with the ROCA attacks; the distribution of the public key can then not be biased.

The protocol is as follows. First, Alice extracts randomness r_A' as before, and randomness rho_A to commit to r_A'. She then sends the commitment to Bob, and Bob replies with his randomness r_B. Alice then extracts a seed s from her randomness and Bob's, and runs a PRF in counter mode on it until she finds the first two primes that satisfy the conditions. She then computes a proof that N is the product of two integers returned by the PRF on seed s that were accepted by the PrimeTest algorithm and are of the appropriate length. She also proves that the values that did not satisfy the conditions were returned by the PRF on seed s. She then sends the modulus N, the exponent e, the index i of p, the values other than p and q, and the proof. Bob first verifies that the proof is correct, including the part showing that the other values sent by Alice were indeed returned by the PRF on seed s, and of course that this seed was extracted using Bob's randomness and Alice's.
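Here is a minimal sketch of Alice's side of this derivation, with toy parameters (SHA-256 in counter mode stands in for the PRF that the paper instantiates with the Dodis-Yampolskiy PRF; the commitments, extraction, and zero-knowledge proofs are omitted, and the names are illustrative):

    import hashlib
    import math
    import secrets

    B = 16                 # toy bit-length for p and q (the parameter b)
    E = 2**16 + 1          # fixed public exponent e

    def prf(seed: bytes, counter: int) -> int:
        # Stand-in PRF in counter mode, for illustration only.
        h = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        cand = int.from_bytes(h, "big") % (1 << B)
        return cand | (1 << (B - 1)) | 1   # force bit-length B and oddness

    def prime_test(n: int) -> bool:
        # Toy PrimeTest: primality plus gcd(e, n - 1) = 1, so that e ends
        # up coprime with phi(N) = (p - 1)(q - 1).
        if n < 2 or any(n % d == 0 for d in range(2, int(n**0.5) + 1)):
            return False
        return math.gcd(E, n - 1) == 1

    def rsa_from_seed(seed: bytes):
        accepted, rejected, i = [], [], 0
        while len(accepted) < 2:
            cand = prf(seed, i)
            (accepted if prime_test(cand) else rejected).append(cand)
            i += 1
        p, q = accepted
        return p * q, rejected             # N, plus the rejected PRF outputs

    seed = secrets.token_bytes(32)         # in the protocol: extracted from r_A' and r_B
    N, rejected = rsa_from_seed(seed)
    # Bob re-runs prime_test on every rejected value: if none passes, p and q
    # really were the first two primes produced by the PRF on seed s.
    assert not any(prime_test(r) for r in rejected)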
Bob then also verifies that none of the other values passed the PrimeTest algorithm. This is crucial to ensure that p and q really are the first two primes that Alice found that passed the test, and Bob can then be convinced that Alice, or rather her computer, did not bias the key-generation process. As mentioned before, PrimeTest may be randomized, and the proof also requires some randomness, but the necessary randomness is extracted from the original randomness; these unimportant details were omitted just to ease the presentation.

The instantiation involves a group G of public prime order and Pedersen commitment schemes to commit to Alice's randomness. To extract a seed s, Alice hashes Bob's randomness and adds it to hers modulo a sufficiently large prime l. This prime l must divide the order of g minus one. The reason is that the PRF used to generate the primes is the Dodis-Yampolskiy PRF in the group of quadratic residues modulo 2l + 1, with base a. Since p and q are generated through it, the order l of a must divide the order of g minus one, as a generates a subgroup of Z*_ord(g). The order of g must also be larger than (2l + 1)^2, to be sure that N is really the product of p and q as integers and that there was no modular reduction.

In the proof of correct computation, Alice commits to p and q, again with a Pedersen scheme in G. Since she must prove that these committed values are outputs of the PRF on the seed s, she essentially has to prove that she knows a value x such that a public value y is equal to g^(a^x), in other words, a double discrete logarithm. This problem was introduced by Stadler in 1996 to build a publicly verifiable secret sharing scheme. It was later used to build group signatures, electronic cash, and credential systems, so it has a wide range of applications. The only method known so far to prove knowledge of a double discrete logarithm is due to Camenisch and Stadler, and it has a communication complexity of the order of the logarithm of the group order, because it uses zero or one as challenges. Using Bulletproofs for arithmetic circuits, we managed to get a communication complexity of the order of the double logarithm of the group order.

The difficulty is now to encode this problem as a circuit and make it suitable for Bulletproofs. The method I'm about to present is different from the one in the paper, as it is easier to explain; it is actually closely related to the one presented in the paper on Diophantine Satisfiability Arguments, also published at this Asiacrypt edition. The first step is to consider the binary decomposition of x. Then a^x can be written as the product of values a_i, with a_i equal either to 1 or to a^(2^i). This implies that y is equal to g to the product of the a_i's, so proving knowledge of a double discrete log of y amounts to proving knowledge of such a_i's. To enforce that a^x, the discrete log of y, must be the product of the a_i's, consider a polynomial over the integers: a^x minus the product of the a_i's; it must evaluate to zero. Now, to impose that each a_i must be either 1 or a^(2^i), add the terms (a_i - 1)(a_i - a^(2^i)), and raise every term in the sum to the power two, since a sum of squares is zero if and only if each of the squares is. Now introduce variables b_i, with b_0 equal to a_0, and then multiply in the a_i's one by one, storing the running product in b_i. To again impose these relations, introduce the corresponding additional terms in the polynomial.
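Before the last step, here is a quick Python sanity check of the relations established so far, with toy values (the talk's polynomial constraints live over the integers; here the arithmetic is done modulo a small prime that stands in for the relevant group order):

    # Binary decomposition of x; a_i is 1 or a^(2^i), and b_i is the
    # running product, so that b_{n-1} equals a^x.
    m, a, x, n = 2039, 7, 173, 8      # toy modulus, base a, exponent x, bit length

    bits = [(x >> i) & 1 for i in range(n)]
    a_vals = [pow(a, 1 << i, m) if b else 1 for i, b in enumerate(bits)]

    # Each constraint (a_i - 1)(a_i - a^(2^i)) vanishes...
    assert all((a_i - 1) * (a_i - pow(a, 1 << i, m)) % m == 0
               for i, a_i in enumerate(a_vals))

    # ...and so do the running-product constraints b_i - a_i * b_{i-1}.
    b_vals = [a_vals[0]]
    for a_i in a_vals[1:]:
        b_vals.append(b_vals[-1] * a_i % m)

    assert b_vals[-1] == pow(a, x, m)  # the product of the a_i's is indeed a^x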
The last step is now to introduce variables u_i and v_i to avoid the cross terms in the product of (a_i - 1) by (a_i - a^(2^i)), and to embed these relations in the polynomial again. The polynomial is now in an interesting form, as the value a^x is the one that is committed in y. There are also linear terms, which appeared because u_i and v_i were introduced. From the rest, one can infer multiplicative relations, with for instance b_i equal to a_i times b_{i-1}, and the variables in blue can be interpreted as left inputs, the ones in red as right inputs, and the green variables as the outputs of the multiplication gates of a circuit. These inputs additionally satisfy linear constraints, represented by matrices W, and these constraints guarantee consistency between any two depth levels of the circuit. Besides, a^x is committed in the public value y, as mentioned before. From this polynomial equation over Z, one can then infer, over Z modulo the order of g, a Hadamard-product relation and linear consistency constraints, and Bulletproofs can then be used to argue knowledge of such left inputs a_L, right inputs a_R, and outputs a_O. That is how we managed to exponentially reduce the communication complexity of proofs of double discrete logs by using Bulletproofs.

Some questions remain open. The first is whether Bob's randomness could be used to amplify Alice's, instead of requiring in the model that either of them must have high-entropy randomness. It is not clear whether it would be sufficient for both of them to have access to moderately-high-entropy randomness and then use these two sources to amplify the randomness used to generate the resulting key. The second problem is to give a model in which entropy is accumulated over time, as is actually done in practice, instead of assuming that it is provided in a single chunk. It would also be more realistic to have a model in which the randomness sources are not independent of the extractors: in practice, the randomness sources are, for example, timing interrupts, and the extractors are hash functions, and the two are then obviously correlated and not independent. These problems were already considered by Coretti et al. at CRYPTO 2019 in the context of PRGs, and it would be interesting to see if their approach applies in the context of practical key generation. That's it for this presentation. Thank you for your attention, and please send us an email if you have any questions.