Hi, my name is Wei Dai, and today we'll be talking about chain reductions for multi-signatures and the HBMS scheme. This is joint work with Mihir Bellare. So what are multi-signatures? Multi-signatures allow a collection of signers to endorse a common message with a short signature. We require each signer to be able to generate their keys independently by running a key generation algorithm. Signing is a multi-round protocol between any number of n parties. They take as input the message to be signed, as well as a vector of public keys denoting the participants. At the end, they should return a short signature sigma. By short, we mean that the length of the signature should be independent of the number of signing parties n. Verification takes a list of public keys, a message, and a signature, and returns a Boolean value. Key aggregation is an optional feature for multi-signature schemes. We say that a scheme supports key aggregation if there are two additional algorithms. First, key aggregation, which takes in a list of public keys and returns a single public key, which we call the aggregate public key. We require the aggregate public key to be about the same length as a regular public key. Aggregate verification then takes in an aggregate public key, a message, and a signature, and returns a Boolean value. To recover the same syntax as standard verification, we define verification to be simply the composition of key aggregation and aggregate verification. For security, we ask for MS-UF-CMA security, which is captured by the following game. Here, the adversary obtains from the initialization of the game a random public key, which we call the target public key. To win the game, it must supply a forgery: a signature that is valid for a list of public keys containing the target public key. To help it succeed, the game exposes an interface through which the adversary can interact with an honest signer arbitrarily.
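As a rough sketch of this syntax (the method names, types, and the toy stand-ins below are mine, not from the talk or the paper), the interface and the composition defining standard verification might look like:

```python
from typing import Callable, Protocol, Sequence

class MultiSig(Protocol):
    """Syntax of a key-aggregating multi-signature scheme (sketch only)."""
    def keygen(self) -> tuple[object, object]: ...          # (pk, sk), per signer
    def sign(self, sk: object, pks: Sequence[object], msg: bytes) -> bytes: ...
    def verify(self, pks: Sequence[object], msg: bytes, sig: bytes) -> bool: ...
    def agg(self, pks: Sequence[object]) -> object: ...     # aggregate public key
    def agg_verify(self, apk: object, msg: bytes, sig: bytes) -> bool: ...

def verify_from_agg(agg: Callable, agg_verify: Callable) -> Callable:
    """Standard verification recovered as the composition of key aggregation
    and aggregate verification, as defined in the talk."""
    return lambda pks, msg, sig: agg_verify(agg(pks), msg, sig)
```

Note that `sign` is really a multi-round protocol among the signers; the single-function signature above is only a placeholder for the syntax.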
Importantly, the adversary is able to control all keys besides the target public key. We define the advantage of the adversary against the multi-signature scheme to be simply the probability that it wins this game. Here's a brief timeline of the development of multi-signatures. They were first suggested almost 40 years ago. Early constructions were susceptible to rogue-key attacks; in other words, they are not secure under the notion we just described. There are two classical approaches to preventing rogue-key attacks. The first requires interactive key generation, which does not satisfy the notion we are considering. The second is to make the knowledge-of-secret-key assumption, which requires more complicated key generation algorithms. One canonical scheme that achieves the security notion we discussed on the previous slide is a discrete-log-based construction by Bellare and Neven, which we refer to as the BN construction. It serves as the basis for the more recent multi-signature schemes that people have looked at, due to applications in blockchain settings: in particular, multi-signature wallets and, in the consensus setting, short certificates for finalized states. Following these developments, Bitcoin has recently adopted Schnorr signatures. There are a couple of features of interest for multi-signature schemes in blockchain applications. We would like to obtain efficient schemes over commonly deployed curve groups without pairings, such as the secp256k1 curve or Curve25519. We ask for MS-UF-CMA-secure schemes in the plain public-key model. And we would like two additional features. We would like our schemes to support key aggregation; an example of this is MuSig. And we would like our schemes to support two-round signing. Standard schemes such as BN and MuSig have three rounds of signing.
More recent developments push schemes toward more efficient signing protocols that involve only two rounds of interaction. However, the problem with trying to achieve all four of the above features, especially the third and fourth, is that they are in contention with concrete, provable security. Here's what we mean. What happens when we look at concrete security? On one hand, in practice, practitioners expect schemes with provable security to be instantiable in practical settings, meaning they will implement them on top of a curve of bit length 256 and expect discrete-log-like 128-bit security. That is the expectation in practice. However, the picture is not as nice in theory. On one hand, we have standard-model proofs using forking lemmas, which incur a large reduction loss. On the other hand, we have idealized assumptions, which do not model all possible attacks. So, for example, for three-round schemes, we may expect 128-bit security when we instantiate BN and MuSig, but the reductions given in the works proposing these schemes do not actually provide any security guarantee in groups of bit length 256. Moreover, when we move to two-round multi-signature schemes with key aggregation, the picture is even less optimistic. Most of these works aim to provide standard-model reductions when possible, and as a result the reductions do not give any practical guarantee when we instantiate the schemes with 256-bit groups. The last two rows here are results proved in the AGM, because no standard-model reductions are known for the schemes in question, so these works resort to the AGM to prove security. However, even in idealized models, they were not able to prove tight security. So here's our contribution, in analogy: here is a DL tree, on top of which are some fruits.
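To make the loss concrete, here is a back-of-the-envelope calculation of my own, using a generic forking-lemma-style bound of the shape eps_DL ≈ eps_forge²/q (the exact bounds in the papers differ, but the square-root behavior is the point):

```python
import math

def guaranteed_forgery_bits(dl_bits: float, q_hash: float) -> float:
    """If a reduction only guarantees eps_DL >= eps_forge^2 / q, then from
    eps_DL <= 2^-dl_bits we can conclude no more than
    eps_forge <= sqrt(q * 2^-dl_bits).  Return -log2 of that bound."""
    return (dl_bits - math.log2(q_hash)) / 2

# A 256-bit curve gives ~128-bit DL hardness (Pollard rho).  Against an
# adversary making 2^64 hash queries, the guaranteed level collapses:
print(guaranteed_forgery_bits(128, 2**64))  # 32.0, far below the expected 128
```

This is why, once the numbers are plugged in, such reductions give essentially no guarantee on 256-bit groups.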
The previous results claimed things of the following form. They said: look, we have some nice fruits somehow hanging on the DL tree, where "somehow" refers to the fact that we have reductions from DL to the security of these schemes. However, these reductions are not tight, so the fruits hang somewhere high on the tree. Our results say the following: these fruits are tightly attached to provably solid branches. There are two points here. One is that the schemes are tightly attached to the branches, meaning the reductions between each scheme and its branch are tight in both directions. The other is that the branches are provably solid, meaning we give both types of reductions for them: standard-model reductions, which are not tight, as well as tight AGM proofs for the hardness of these intermediate problems. Finally, we are able to add another scheme to the picture, HBMS, which is two-round, supports key aggregation, and has the same efficiency as the previously proposed two-round multi-signature schemes. So let's go into more detail on our contributions. First, for BN and MuSig, we provide standard-model guarantees as well as AGM guarantees for instantiating both schemes in practice on 256-bit curves. Our results rely on assuming the hardness of IDL and XIDL in the standard model. Our new scheme, HBMS, is the first two-round scheme that has tight security in the AGM, while also admitting security reductions in the standard model from DL. It is two-round, supports key aggregation, and is as efficient as the previously proposed MuSig2 and DWMS schemes. And there is more we can say from our results. Using them, we can infer security between schemes, meaning we can say that if BN is secure, then so is MuSig; or if MuSig is secure, then so is HBMS. We can make these claims because we give chain reductions, which are reductions of the following form.
We start from the hardness of the most commonly accepted hard problem, the discrete logarithm problem, and go all the way to the security of the scheme in question. In the middle is an intermediate assumption, which we denote here as X. We require the first part of the reduction chain to either be non-tight but in the standard model, or be in the AGM but tight. So we sacrifice either working in the standard model or obtaining a tight reduction. For the second part of the reduction, from the intermediate hard problem to the scheme, we make no sacrifices, meaning we provide tight, standard-model reductions from the intermediate hard problem to the security of the scheme. By standard model, a bit of a disclaimer: we are referring to the programmable random oracle model; that is, by standard model we mean that we do not make any idealized group assumptions. Moreover, we would like the intermediate hard problem X to be reusable across different schemes. So here's the picture of our results. We start by considering the identification discrete logarithm (IDL) assumption, which was proposed by Kiltz, Masny, and Pan, and is as hard as DL in the algebraic group model; it can also be proved hard from DL using one application of the forking lemma. We show that IDL is equivalent to BN, meaning IDL is exactly as hard as breaking the unforgeability of BN multi-signatures. To obtain a similar result for MuSig, we have to consider a new intermediate hard problem, which we call random-target identification discrete logarithm (XIDL). From there, we are able to show equivalent hardness between it and MuSig. Similarly to IDL, we show that XIDL is hard assuming DL in the AGM, or hard assuming IDL in the standard model via one application of the forking lemma. Finally, we have a new scheme, HBMS, which we prove secure from XIDL in the standard model, with a reduction loss that is the number of signing queries.
In practice, the number of signing queries is generally a lot smaller than the running time or the number of random oracle queries. Additionally, our scheme can be proved tightly secure assuming the hardness of DL in the AGM. For the rest of the talk, we'll be looking at components of the DL tree. First up, BN multi-signatures can be seen as an extension of Schnorr signatures to the multi-signature setting. Key generation simply samples a discrete log pair. The verification equation is as follows: on the right-hand side, we raise each public key to an exponent derived using the hash function. To generate signatures collectively, the signers run the following protocol. First, each signer samples a random group element, which is committed to, and only the commitment is revealed across the wire to the other signers. After receiving the commitments, each signer opens their commitment to the random group element R. The R values are then aggregated, but before that, each signer must check that the openings are correct, and abort if not. After the aggregation of the R values, each signer derives their challenge and response as shown. If everyone runs this correctly, then at the end each signer obtains the correct multi-signature. The first rounds can be seen as a collective protocol to generate the random group element R, and attacks are possible if the first round is omitted. For security, BN showed the following reduction from DL. We want to prove tighter results by observing that the security of BN is closely related to the security of Schnorr. To do that, we will use the identification discrete logarithm (IDL) problem, first coined by Kiltz, Masny, and Pan. The context is the following. Schnorr signatures have been around for over 30 years without attacks, at least over widely used curve groups, yet without tight reductions from DL. Moreover, tight reductions from DL are impossible according to known results.
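Here is a toy sketch of the BN algebra (the group parameters, names, and exact hash-input ordering are mine; the commit-then-open first round is collapsed into a single honest run, so this shows only why the verification equation holds):

```python
import hashlib, random

# Tiny Schnorr group for illustration only: G generates the order-Q subgroup
# of Z_P^*.  (Real instantiations use 256-bit curves such as secp256k1.)
P, Q, G = 2039, 1019, 4

def H(*parts) -> int:
    """Random-oracle stand-in mapping its inputs into Z_Q."""
    data = b"|".join(repr(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen(rng):
    x = rng.randrange(1, Q)              # secret key
    return pow(G, x, P), x               # public key X = g^x

def bn_sign(sks, pks, msg, rng):
    """One-shot honest run of the collective protocol (commit round omitted)."""
    rs = [rng.randrange(Q) for _ in sks]
    R = 1
    for r in rs:
        R = R * pow(G, r, P) % P         # aggregated nonce
    z = 0
    for x, X, r in zip(sks, pks, rs):
        c = H(tuple(pks), X, R, msg)     # per-signer challenge
        z = (z + r + c * x) % Q
    return R, z

def bn_verify(pks, msg, sig):
    R, z = sig
    rhs = R
    for X in pks:                        # check g^z == R * prod_i X_i^{c_i}
        rhs = rhs * pow(X, H(tuple(pks), X, R, msg), P) % P
    return pow(G, z, P) == rhs

rng = random.Random(1)
pks, sks = zip(*[keygen(rng) for _ in range(3)])
sig = bn_sign(sks, list(pks), b"transfer 1 coin", rng)
assert bn_verify(list(pks), b"transfer 1 coin", sig)
```

The signature is one group element and one scalar regardless of the number of signers, which is the "short" property from the beginning of the talk.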
Therefore, perhaps we should encapsulate the underlying hard problem of Schnorr signatures in a problem of its own. This is the proposal of Kiltz, Masny, and Pan. In particular, we capture the exact hardness of breaking the Schnorr identification scheme. It is encapsulated in the following game. The adversary receives a random group element X, and gets to query a challenge oracle up to Q times, each time supplying a group element R of its choosing; the game in response chooses a random element c from Z_p. Together with R, this defines a group element R times X to the c, against which the adversary needs to compute a discrete log. It succeeds if it is able to compute this discrete log z for any one of the Q sessions. Given such a formulation, we know reductions of the following form. What Kiltz, Masny, and Pan showed is that if we parameterize the IDL problem appropriately, then we are able to prove the security of Schnorr tightly from it. Moreover, the link between DL and IDL can be proved both in the standard model and tightly in the AGM, and this replicates for BN signatures. In particular, we show that BN signatures are tightly unforgeable assuming the hardness of IDL, by concluding theorems of the following form. So now let's move on to the harder part, which is how we argue security for MuSig, which is BN plus key aggregation. How does MuSig work? Key generation is exactly the same as in BN. Key aggregation aggregates a list of public keys into a single group element by raising each public key to an exponent derived using the hash function. Verification is now exactly the verification of a Schnorr signature, meaning on the right-hand side we have R times the aggregate public key raised to an exponent derived using the hash function. To generate a signature, the protocol is almost identical to BN's, except in two places. First, the challenge is derived using the aggregate public key; it is derived in the same way for every signer.
And second, we need to add the key aggregation exponent into the derivation of z. For the proof, however, there is an additional complication: we would need an additional application of the forking lemma if we were to prove security assuming the hardness of only IDL. And indeed, if we plug in numbers, this decreases the guaranteed security by a significant amount. Our solution is a new problem, which we name random-target IDL (XIDL). It extends IDL in that the adversary now has access to two oracles, new-target and challenge. Each time the adversary queries the new-target oracle with some group element S, a new target is generated by the game, which is exactly S times X raised to some random exponent e. The challenge oracle now additionally takes in an index indicating against which target, or in which IDL session, the adversary is trying to break IDL. At the end, the adversary wins if it wins IDL in any one of the previous Q1 sessions. The claim is that if we encapsulate this in an assumption, namely XIDL, then we can show the security of MuSig from it, tightly. Here's how the reduction works. Signing can be simulated exactly as in BN, so we will not show it here. The crux of the reduction is the programming of the random oracles. For every query to the random oracle that derives the key aggregation exponent, we do the following. We first compute the aggregate public key, but without the target public key inside the expression. We forward this value S to the new-target oracle. The response is then forwarded back to the adversary. The reason for this is that we want the aggregate public key, from the point of view of the adversary, to be equal to the target that is kept track of in the XIDL game; that is, T is equal to the aggregate public key APK. And for each query to H1, we simply use the aggregate public key to find the corresponding session that we are in and forward the R to the challenge oracle. The response c is forwarded back to the adversary.
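To fix ideas, here is a toy sketch of both games, IDL and its extension XIDL, as described in the talk (the group parameters and class names are mine, and the paper's formal definitions may differ in details; the final sanity check "cheats" by knowing x, purely to exercise the oracles):

```python
import random

P, Q, G = 2039, 1019, 4   # tiny Schnorr group: G has order Q in Z_P^*

class IDLGame:
    """IDL: the adversary gets X = g^x, queries challenges on group elements
    R of its choosing, and wins by finding the discrete log of R * X^c
    for any one session."""
    def __init__(self, rng):
        self.rng = rng
        self.x = rng.randrange(1, Q)
        self.X = pow(G, self.x, P)      # the adversary's input
        self.sessions = []              # one (R, c) pair per challenge query

    def challenge(self, R):
        c = self.rng.randrange(Q)
        self.sessions.append((R, c))
        return c

    def wins(self, i, z):
        R, c = self.sessions[i]
        return pow(G, z, P) == R * pow(self.X, c, P) % P

class XIDLGame(IDLGame):
    """XIDL: IDL plus a new-target oracle.  Each target is T = S * X^e for a
    fresh random e; challenges now name the target they run against."""
    def __init__(self, rng):
        super().__init__(rng)
        self.targets = []

    def new_target(self, S):
        e = self.rng.randrange(1, Q)
        self.targets.append(S * pow(self.X, e, P) % P)
        return e

    def challenge(self, k, R):          # index k selects the target
        c = self.rng.randrange(Q)
        self.sessions.append((k, R, c))
        return c

    def wins(self, j, z):
        k, R, c = self.sessions[j]
        return pow(G, z, P) == R * pow(self.targets[k], c, P) % P

# Sanity check, playing XIDL with (unrealistic) knowledge of x:
rng = random.Random(0)
game = XIDLGame(rng)
s, r = rng.randrange(Q), rng.randrange(Q)
e = game.new_target(pow(G, s, P))       # target T = g^(s + e*x)
c = game.challenge(0, pow(G, r, P))
assert game.wins(0, (r + c * (s + e * game.x)) % Q)
```

In the MuSig reduction just described, S is the aggregate key computed without the target key's factor, so the game's target T coincides with the adversary's view of APK.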
So if everything is done correctly, then a forgery allows us to pin down exactly one session in which we can respond with the second component of the signature, which is z. This reduction works because we have programmed the values S so that the aggregate public keys, from the point of view of the adversary, are exactly equal to the targets from the point of view of the reduction. Therefore, when we forward the R values, a forgery exactly corresponds to the z that we need to produce. We are therefore able to conclude, in the standard model, that if XIDL is hard, then MuSig is hard to forge. Furthermore, we show the converse: given an XIDL adversary, we can forge signatures for MuSig. This is very straightforward: we simply construct a forger that simulates the other signers using the responses provided by the XIDL adversary, while also programming the random oracle points. Therefore, MuSig is exactly as hard to forge as XIDL is to break. But why do we believe XIDL is hard? Well, we apply the exact same principles by which IDL is argued to be hard: namely, we argue in the standard model using the forking lemma, or tightly in the AGM. Recall that the AGM says that, in the interaction between an adversary and a game, the adversary is algebraic if each of its output group elements can be represented in terms of its input group elements. More particularly, for each output group element Y_j, the adversary also provides scalars k_1 to k_n such that if you raise each input X_i to k_i and multiply them together, you get Y_j. Assuming we have an algebraic XIDL adversary, it is easy to show a reduction from DL, because an XIDL adversary only receives two group elements from the game: the generator g and the random group element X. It provides the S group elements and the R group elements, so these must be represented in terms of g and X if the adversary is algebraic.
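This extraction step can be made concrete with a toy example of my own: if an algebraic adversary wins an IDL-style session with response z, and its R came with the representation R = g^a * X^b, then the winning equation g^z = R * X^c becomes z = a + (b + c)x mod Q, which we can solve for x:

```python
P, Q, G = 2039, 1019, 4   # tiny Schnorr group, illustration only

def extract_dlog(a, b, c, z):
    """An algebraic adversary supplies R together with a representation
    R = g^a * X^b.  A winning z satisfies g^z = R * X^c, i.e.
    z = a + (b + c) * x (mod Q), so we can solve for x unless
    b + c = 0 mod Q, the low-probability bad event mentioned in the talk."""
    if (b + c) % Q == 0:
        return None                          # degenerate equation
    return (z - a) * pow(b + c, -1, Q) % Q   # modular inverse, Python 3.8+

# Sanity check against a known discrete log x:
x = 777
a, b, c = 12, 34, 56
z = (a + (b + c) * x) % Q
assert extract_dlog(a, b, c, z) == x
```

The actual XIDL-to-DL argument in the AGM has more bookkeeping (the S and R representations interact), but this is the shape of the final solve.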
Therefore, the final equation that the adversary must satisfy is an equation only in terms of g and X, so we are able to solve for the discrete log of X, except with some small probability that the equation is degenerate, which we bound. In the standard model, we rely on the forking lemma, which relates the success probability of an algorithm taking Q random points from some set C to that of a forked version of the algorithm, which is run twice on correlated inputs. The crux of using the forking lemma is to specify what the algorithm A is. For us, it takes the following form. We want to construct a reduction from XIDL to IDL, but there is a mismatch of oracles: XIDL expects two oracles, new-target and challenge, while our IDL reduction is only given a challenge oracle. Therefore, we hook the new-target oracle up to the challenge oracle given to our reduction, and simulate the challenge oracle for the XIDL adversary. In this way, we are able to define the algorithm A to simply take as inputs the simulated responses of the challenge oracle that it expects. If we run the forking lemma on this algorithm A, then a successful fork means that we are able to compute the response for the corresponding IDL session. This requires the reduction to carefully keep track of the oracle values and use the matching responses in the second run of the XIDL adversary. We are therefore able to conclude that XIDL is hard in the standard model assuming the hardness of IDL, with a reduction loss on the order of Q2. And so finally, let us look at the HBMS scheme, which is two-round, supports key aggregation, and can be shown secure in the standard model from the hardness of XIDL. HBMS works as follows. Key generation is exactly the same as in MuSig, and key aggregation as well.
The verification equation changes on the left-hand side, where we have added a power of a group element obtained by hashing the list of signers and the message; it is raised to an exponent that is encoded in the signature. The claim is that if we change the verification equation to this form, then we are able to derive a two-round signing protocol. In particular, in the first round, we sample two random scalars instead. The intuition here is that we form a Pedersen commitment to the nonce, and we send this Pedersen commitment to all the other signers. In the second round, we can simply compute the response z exactly as before in MuSig, and additionally send the opening of the Pedersen commitment, namely its randomness. After we get back all the responses, we simply aggregate the s and z values, and these become the components of the signature. So the only things that have changed here are that we hash the list of public keys and the message to a group element, turn the first round into a Pedersen commitment, and condense the first two rounds of commit-and-open into one. As a result, the signatures are slightly longer, but the verification cost is essentially unchanged. We are able to prove the security of HBMS from XIDL, and the crux of this is how we program the random oracle points. The verification equation now has this additional term that comes from the output of the random oracle, and it is crucially important for us to be able both to simulate signatures and to construct a reduction. It turns out that this crucially depends on how we program the random oracle at this point. There are two options. On one hand, if we program it to be a power of g, then we can, similarly to before, turn a forgery into a break of XIDL; however, we can no longer simulate the honest signer. If we program it to be a power of X, then we can do exactly the opposite.
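For background, here is a minimal sketch of the Pedersen commitment mechanic that the first round relies on (the toy group and names are mine; in HBMS the second base would be the hash-derived group element and the committed value a signing nonce):

```python
import random

P, Q, G = 2039, 1019, 4   # tiny Schnorr group, illustration only

def pedersen_commit(T, v, r):
    """Commitment g^v * T^r to value v with randomness r: hiding because r
    masks v, and binding unless one can compute log_g(T)."""
    return pow(G, v, P) * pow(T, r, P) % P

def pedersen_open(T, com, v, r):
    return com == pedersen_commit(T, v, r)

rng = random.Random(7)
# In HBMS the base T is derived by hashing the signer list and the message
# to a group element; here it is just a random group element.
T = pow(G, rng.randrange(1, Q), P)
v, r = rng.randrange(Q), rng.randrange(Q)
com = pedersen_commit(T, v, r)
assert pedersen_open(T, com, v, r)
assert not pedersen_open(T, com, (v + 1) % Q, r)
```

Because the commitment is a single group element and needs no separate open round, the commit-and-open exchange of BN/MuSig collapses into one round.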
But the reduction needs to decide between these two ways of programming before the signing interaction starts. We therefore employ Coron's trick, which guesses which option to program, with a parameter that we later optimize, resulting in a reduction loss of 1 over the number of signing queries. Moreover, we can show that this reduction loss is eliminated if we work directly in the AGM.
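The 1-over-q_sign loss from this guessing can be seen numerically (my own illustration): if the forgery-relevant point is programmed the extractable way with probability delta, while each signing-related point must be programmed the simulatable way, the reduction succeeds with probability about delta * (1 - delta)^q, which is maximized at delta = 1/(q + 1) and gives roughly 1/(e*q):

```python
def coron_success(delta, q_sign):
    """Probability that every one of the q_sign signing-related choices is
    programmed the simulatable way (prob 1 - delta each) while the forgery
    point is programmed the extractable way (prob delta)."""
    return delta * (1 - delta) ** q_sign

q = 1000
# Searching over delta = 1/k recovers the optimum delta = 1/(q + 1):
best_k = max(range(2, 5000), key=lambda k: coron_success(1 / k, q))
print(best_k)                                            # 1001
print(round(1 / (q * coron_success(1 / best_k, q)), 2))  # 2.72, about e
```

So the optimized guess costs a factor of roughly e * q_sign, which is the 1/q_sign-type loss mentioned above; working directly in the AGM removes it.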