Hi, my name is Mary Maller and today I am going to be giving a presentation on aggregatable distributed key generation. This is joint work with Kobi Gurkan, Philipp Jovanovic, Sarah Meiklejohn, Gilad Stern and Alin Tomescu. The idea is that we want to be able to generate keys non-interactively, while communicating as little information as possible over peer-to-peer channels. Our main contributions are fourfold. First, we have a new construction of a DKG. Second, we have a new construction of a fully structure-preserving VUF, otherwise known as a unique signature scheme, and this signature scheme has been designed to work well with our DKG. We also present some new DKG definitions. I don't know how many of you are familiar with the literature on DKG definitions, but generally speaking they are a hard thing to define, because you have to talk about how key generation might work despite the fact that you don't necessarily know what scheme the key is going to be used in. Finally, we look into the proving techniques used for DKGs, and we find a way to show that our DKG is secure despite the fact that we only have one round of communication and that the output of our DKG is not completely random. Being able to use our new proving techniques to get security even in this setting means that we can come up with a more efficient construction. For those of you who don't know, a distributed key generation algorithm is a means of generating a public key such that nobody knows the secret key, so in a sense it is a trusted setup. But it is still the case that if you want to perform some operation, such as encryption or signing, then as long as enough people are participating, there is enough secret shared information between them to generate something which will verify with respect to the shared public key.
So classic places where DKGs are used all the time are threshold encryption and threshold signatures. With threshold encryption, you want to decrypt something in a database, but maybe you don't trust just one person to decrypt, because that one person could be corrupted and the information is very important. So here you would say that we need at least two out of three people in order to be able to decrypt the information. This also comes up in verifiable elections all the time. Another situation where distributed key generation algorithms are used is in threshold signatures. Here we want to authorize an action. The action that you might want to authorize might, for example, be spending from a key which is storing lots of crypto funds. In this situation, we maybe don't want to trust our secret key to just one party, because that one party might then go and steal all the money, but maybe we're okay to trust a group of people. Equally, we still want it to be the case that if we're not able to communicate with everybody whom we've entrusted with shares of the secret key, we can still participate; this is why t might be less than n. There are alternative ways in which you can generate threshold signatures, largely just by saying you need, say, seven out of ten signatures in order to sign. A classic situation where this really doesn't work is randomness beacons. The reason is that if we want a good source of randomness, then we really need there to be a fixed output which is not known to any party in advance, but which enough parties can compute if they collaborate. If you don't know the public key in advance (as in, you don't know who's going to participate), then this would cause you a problem, because the output would be different depending on the public key.
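As a concrete illustration of the t-out-of-n idea just described, here is a toy Shamir secret sharing sketch over a prime field. This is not part of the talk or the paper's actual scheme; the prime and parameters are illustrative only.

```python
# Toy Shamir t-of-n secret sharing over a prime field (illustrative only).
import random

P = 2**61 - 1  # a Mersenne prime, fine for a toy field

def share(secret, t, n):
    # A random degree-(t-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any t of the n shares reconstruct the secret, and the reconstruction works for any subset, which is exactly the property a t-of-n threshold scheme needs.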
So, situations where we might want randomness beacons include consensus protocols. Certainly this has come up in DFINITY, this has come up in Ethereum 2.0, and it has been part of the League of Entropy, who produce the drand randomness beacon that has been studied for a long time. Randomness beacons can solve problems in consensus protocols where you might have a rushing adversary that plays last. By playing last, although they're not fully able to influence what the output of the hash is, maybe they can hash something 10 or 20 times until they get the outcome which is most favourable to them, and then submit their output. And this is something which can have really bad effects on a consensus protocol where you might, for example, be electing the leader for the next round. If you have a rushing adversary that can affect the randomness, maybe they always make sure that one of their corrupted nodes is the leader in the next round, so they can prevent progress from happening. And that's just not a good situation. One way (I should say there are other ways, but one very effective way) of building a randomness beacon is to have a unique threshold signature. This is a signature where t out of n parties must cooperate to compute the signature, but also such that you always have the same signature output on any given message. The classic algorithm that you would use for a unique threshold signature is BLS. In this work, we're not going to be able to use BLS signatures, because we need the secret output of our DKG to be a group element rather than a field element, so BLS will just not work. What we do provide is an alternative signature which will work with our DKG and also satisfies the uniqueness property, and it has many of the same properties as BLS. We've got short signatures, and the signatures can be aggregated together fairly easily. In essence, it looks very similar to BLS; however, our secret key is a group element and therefore will work with our DKG.
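To make the rushing-adversary problem concrete, here is a toy sketch (not from the talk) of a naive beacon where the output is just a hash of everyone's contributions. An adversary who contributes last can grind its own contribution until the beacon lands on an outcome it likes.

```python
# Toy grinding attack on a naive hash-based beacon (illustrative only).
import hashlib
import os

def beacon(contributions):
    # Naive beacon: hash all contributions together.
    h = hashlib.sha256()
    for c in contributions:
        h.update(c)
    return h.digest()

honest = os.urandom(32)

# A rushing adversary sees the honest contribution first, then tries
# nonces until the beacon's low bit is 0 (a toy "favourable" outcome).
adv = None
for nonce in range(1000):
    candidate = nonce.to_bytes(32, "big")
    if beacon([honest, candidate])[-1] & 1 == 0:
        adv = candidate
        break
```

Each try flips a coin for the adversary, so a favourable outcome is found almost immediately. A unique threshold signature closes this off because the output is fixed in advance and no participant can regenerate it to taste.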
I've been talking about DKGs a lot as a means to have a shared secret which can then be used as part of a public key. Many of you might be thinking this sounds very similar to secret sharing; it is. In fact, every DKG that I know of is essentially a secret sharing scheme, just with many participants, such that you don't have a single trusted dealer. You have many dealers, and when you put them all together, you get a DKG such that no single person needs to be trusted. A classic secret sharing scheme that you would base your DKG on would be the Feldman VSS. We're not using the Feldman VSS; we are using the SCRAPE VSS, and this was really the scheme that inspired our final result. The advantage of our scheme over many of the other schemes in the literature is that we have no complaints round, and DFINITY would refer to this property as being a non-interactive DKG. We're able to just have every participant generate the material which they want to use in the protocol, encrypt all of the information which should be secret under the relevant encryption keys, send that out, and then they are done. Nothing else happens. We are also able to do this very efficiently. The reason is that we can aggregate our transcripts, which I will talk more about later. Because we can aggregate, rather than having to do lots of very heavy broadcast rounds, we're instead able to do lots of very lightweight gossip rounds and only broadcast the final result at the end. This essentially means we're able to have a non-interactive DKG where the information sent over your peer-to-peer channels, rather than being n squared, is instead n log squared n. We believe this will make a huge difference practically if it gets implemented, and certainly theoretically you can already see that there is a breakthrough here.
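Since the Feldman VSS was just mentioned as the classic building block, here is a toy sketch of it (not from the talk, and with insecure illustrative parameters): the dealer publishes commitments to the polynomial coefficients in the exponent, and anyone can check a share against them.

```python
# Toy Feldman VSS sketch (insecure toy parameters, for illustration only).
import random

P, q, g = 2039, 1019, 4  # P = 2q + 1 a safe prime; g generates the order-q subgroup

def deal(secret, t, n):
    coeffs = [secret] + [random.randrange(q) for _ in range(t - 1)]
    commitments = [pow(g, c, P) for c in coeffs]          # broadcast publicly
    shares = {x: sum(c * x**i for i, c in enumerate(coeffs)) % q
              for x in range(1, n + 1)}                   # sent privately
    return commitments, shares

def verify(x, share, commitments):
    # g^share must equal the committed polynomial evaluated at x in the exponent.
    lhs = pow(g, share, P)
    rhs = 1
    for i, C in enumerate(commitments):
        rhs = rhs * pow(C, x**i, P) % P
    return lhs == rhs
```

The public commitments are what let any party detect a malformed share, which is the "verifiable" in VSS; a DKG then runs many such dealings in parallel.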
We have implemented the actual algorithms of our DKG (not the gossip protocol, but the dealing and the verification), and we have that the transcript size is linear in the number of parties. The cost of verification is linear, the cost of dealing is linear; everything is linear. We think we could improve these numbers even further if we were to use Schnorr signatures rather than BLS signatures, but the BLS signatures made our security proof a little bit easier. So I was a bit cheeky there and decided to go with the easy proof rather than the fastest scheme, which possibly implementers won't thank me for. The downside of our scheme, and I've mentioned this already, is that the secrets we're generating are group elements. The reason we've done this is because we are using something that looks a lot like ElGamal encryption, and ElGamal encryption cannot encrypt field elements efficiently. This does not affect our ability to apply our results to randomness beacons; it's still just as applicable, because we do have that unique threshold signature that will work with our scheme. We call our unique threshold signature a VUF, because it's a verifiable unique function, and in order to get randomness at the end you're going to have to hash its output. Our proof size is two group elements, whereas BLS is one group element, so the price is a slightly larger proof. Verifier computation is three pairings. So this is only a little bit more expensive than BLS, but I should also add the caveat that when we want to derive our unique value at the end, we're also going to have to compute three more pairings. The starting point for our VUF, I would say, was trying to have a BLS signature where the output is in the target group rather than in a source group.
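The point about ElGamal only handling group elements can be seen in a toy sketch (not from the talk, insecure illustrative parameters): the message slot in an ElGamal ciphertext is multiplied by a group element, so the plaintext must itself live in the group.

```python
# Toy ElGamal over a subgroup of Z_P^*, encrypting a *group element*.
# Illustrative insecure parameters; the real scheme uses pairing groups.
import random

P, q, g = 2039, 1019, 4  # P = 2q + 1 a safe prime; g generates the order-q subgroup

sk = random.randrange(1, q)
pk = pow(g, sk, P)

def encrypt(m, pk):
    # m must itself be a group element; it is blinded multiplicatively.
    r = random.randrange(1, q)
    return pow(g, r, P), m * pow(pk, r, P) % P

def decrypt(c1, c2, sk):
    # Strip the blinding factor pk^r = c1^sk by multiplying with its inverse.
    return c2 * pow(pow(c1, sk, P), P - 2, P) % P
```

Encrypting a field element s would mean encrypting g^s and later solving a discrete logarithm to recover s, which is exactly the inefficiency the talk alludes to; keeping the secrets as group elements avoids it.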
Our signature doesn't include any target group elements, I want to say that very clearly now, but our unique element is going to be a target group element. What we do is prove that this element will be well formed when you put the proofs together in the way that we have prescribed in the paper. I mentioned earlier that our DKG has no complaints round and that this is a really, really big benefit; I want to elaborate on that. In a DKG with a complaints round, which is how most classic DKGs in the literature work, you're going to have at least two rounds. In the first round, every party is going to generate a contribution to the public and secret keys. Then every party is going to broadcast their contribution to the public key, such that all parties agree on what these contributions are going to be. And then here's where the fun starts, because there wasn't necessarily an efficient way to encrypt the secret key contributions over public channels. Instead, what they do is send them over private channels. But the problem with this is that if you send something over a private channel, then there's no way to prove whether or not you sent it. If you did send it but the thing that you sent was wrong, well, that you could prove. But if you didn't send anything at all, say party one did not send anything to party two, there's no way that party two can convince party three of this beyond reasonable doubt. So we have to engage in a complaints round: party two is going to broadcast a complaint, but party three isn't going to initially believe them. Instead, party one has a fixed amount of time in which to publish the secret share that was supposed to be sent to party two, and if they don't send it within that time, then party one gets eliminated.
And there's all sorts of ways in which this becomes difficult to manage. In order to make sure that it works, you're going to need a very long delay between your rounds. There's also a larger attack surface, as quite a lot of things can go wrong with this sort of communication structure. Another possibly less vital flaw, but one that still causes a problem, is that if you have a party that momentarily goes offline and doesn't receive their share from party one, even though it was sent, then that party can no longer participate in the protocol. They're essentially a dead party who will not be useful. The way we avoid this is that we just encrypt everything over a public channel, such that we can prove that the encryptions are correct. If a party sends a bad encryption of the secret shares, everybody will know, because it won't pass verification. And when I say that everybody will know, I mean that everybody will know, including parties who don't participate in the protocol: it's externally verifiable. Now, we're encrypting every single message sent from every party to every party. The consequence of this is that we would get something quadratic in size, so we'd end up needing to send n squared messages. In order to avoid having to send n squared messages, which is a lot, we use the fact that our DKG is aggregatable. Say we have an encryption of all of the secrets which are sent from party one to all of the other parties, and likewise for parties two, three, four and five. We can combine all of these transcripts into a single transcript that is the same size as any of the individual five transcripts before; we are essentially compressing down the information encrypting every secret. I said previously that the messages are large, but aggregated messages are the same size as a single message, so we're able to make sure that things stay small.
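A toy model of this compression (not from the talk) treats a transcript as a fixed-length list of group elements and aggregates by componentwise multiplication, so the aggregate of any number of transcripts is the same size as a single one.

```python
# Toy transcript aggregation: componentwise multiplication in a toy group
# Z_P^* (the real scheme works over pairing-friendly elliptic curves).
P = 2039

def aggregate(t1, t2):
    # Two transcripts of the same shape combine into one of the same shape.
    assert len(t1) == len(t2)
    return [a * b % P for a, b in zip(t1, t2)]

t1 = [4, 16, 64]      # stand-ins for party 1's commitments and ciphertexts
t2 = [9, 81, 729]     # stand-ins for party 2's
agg = aggregate(t1, t2)
```

Because the group operation is associative and commutative, it also does not matter in which order transcripts are combined, which is the property the gossip protocol relies on.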
As we're spreading the bigger transcripts over the network, simply broadcasting all of the transcripts wouldn't really help us. Instead, what we're going to do is gradually send the transcripts and aggregate, then send the transcripts again and aggregate, and aggregation supports this. It is okay to aggregate two, three, four, five, six times, as many times as you need, and it doesn't matter in which order you aggregate the transcripts. So for example, if you end up with a transcript in which the third and fourth parties' contributions have been pushed together, and you want to aggregate that with a transcript containing the first and third parties' contributions, then even though the third party has already contributed, you can still compress those together and it won't cause any trouble at all. So you're never in a situation where you can't use our aggregatable structure to gossip the communication and make sure that everything stays really small. Our techniques for proving our DKG secure are also applicable to the Pedersen DKG for certain applications, not every application, but it will work for BLS and it will work for ElGamal, and we proved this in the paper. This proof for BLS was a gap in the literature before, despite the fact that people are actually using this scheme in practice, so we thought it was fairly important to provide. You will still need to do the complaints round, but at least if the complaints round goes well and you've figured out a good way to do it, then it will be the case that the scheme itself is secure. Okay, a best paper award went to an attack on this, by the way: with Schnorr threshold signatures, you need to be fairly careful. There are some fairly old schemes in the literature which have been broken, to the point where you can break them in seconds if you have a concurrent adversary, probably less than that if your implementation isn't in Python.
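The gossip idea above can be sketched in a toy simulation (not from the paper): each node repeatedly picks a random peer and the two merge their transcripts. A set of contributor ids stands in for an aggregated transcript, since an aggregate stays the size of a single transcript, and merging is order-insensitive.

```python
# Toy gossip simulation: aggregated transcripts spread through the network
# by repeated pairwise merging, converging in roughly log(n) rounds.
import random

def gossip(n, max_rounds=100):
    views = [{i} for i in range(n)]          # node i starts with its own transcript
    rounds = 0
    while any(len(v) < n for v in views) and rounds < max_rounds:
        rounds += 1
        for i in range(n):
            j = random.randrange(n)
            merged = views[i] | views[j]     # merging is commutative and idempotent
            views[i] = merged
            views[j] = merged
    return views, rounds
```

Each node only ever holds and forwards one transcript's worth of data, which is the intuition behind replacing n squared broadcast traffic with lightweight gossip rounds.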
So the last work, just at the end of my talk: we have shown that some newer schemes in the literature, namely FROST, are concurrently secure under the one-more discrete logarithm assumption. That paper uses proving techniques which are very similar to the techniques in this paper. In that paper there are also random oracles in play, which are programmable, so it's not exactly the same, but I would say that it is inspired by it. To conclude: our DKG is publicly verifiable, it is aggregatable, and it does not have a complaints round. We can use gossip to reduce communication, and we can build randomness beacons using our DKG and VUF. These techniques are highly applicable to other schemes. Thank you very much.