Okay, so this is a paper with Ali Bagherzandi, who couldn't make it here. Maybe I should say that, being Iranian, he cannot get a visa either to France or back to the United States without waiting two or three months either way. Just so everybody knows what people have to deal with.

Okay, so what are aggregate signatures? This is a way to compact signatures in a scenario where many people have possibly many messages to sign. They follow some protocol, unfortunately an interactive protocol, to get a short string that is the functional equivalent of n separate signatures: the first signer on the first message, the second signer on the second message, and so on. And there is a verification procedure that, instead of taking n separate signatures, takes the public keys, the messages, and this single string, and verifies it. What's the benefit? The signature size is compacted.

There is a very close variant of this called multi-signatures, where all the messages are the same. And with multi-signatures, we know not only how to compact the signature but also how to reduce the verification time to constant. In principle, maybe you could do this when the messages are different, but there's no such scheme so far.

Some applications of these are basically hypothetical for now; it's a nice protocol problem that is looking for applications rather than the other way around. But potentially, if you have a large number of signers and some bandwidth or storage constraints, you want to compact the signatures. Where could that arise? For example, acknowledgments to a massive broadcast: it's a single message and everybody signs it. But on the other hand, maybe it's not so easy to see why we would have bandwidth constraints if we can broadcast things.
Well, there are also exotic applications like steganography: if you compact your signature, maybe it's easier to hide it among some random image. And the less exotic, perhaps most attractive, setting is sensor networks, where you have a lot of signers, possibly cheap ones, and they try not to use too much bandwidth and storage.

Okay, now why an identity-based version of the same thing? Well, it's a further reduction of bandwidth. Before, I needed the public keys to verify, and public keys can take, say, 160 bits, with certificates adding an additional factor on top. If I replace them with identities, maybe this becomes much smaller, depending on the application. So that's the whole point of looking at an identity-based version of this. As we know, the notion of an identity-based signature is not very well defined; every signature scheme is identity-based, because I can just put identities into certificates. The point is that the whole public-key-plus-certificate overhead now becomes very tiny. It's just an identity.

Okay, so what was known about these things, and where do we fit in? There were two schemes, one by Gentry and Ramzan using bilinear maps on elliptic curves, and one by Bellare and Neven under RSA. I'm listing the multi-signature case separately from the general aggregate case; aggregate is in square brackets. The bilinear-map scheme is the shortest. They're all comparable in size to the underlying identity-based non-aggregate signature: they only have about 80 more bits, while ours has one more factor of 80 bits plus a little extra. And basically, our benefit is that we reduce the round complexity of the previous RSA solution. Clearly the elliptic-curve scheme is way better in terms of round complexity, but for the aggregate version, they still need something funny called a synchronized unique token.
So every time they sign, they need to assume that everybody shares this token, and in at least some applications, creating such a token essentially means one more round of interaction. So for those applications it's really two rounds in both cases. Okay, and a different assumption, and possibly faster verification time.

Okay, so the remainder of the talk will be more technical. Basically, the multi-signatures and aggregate signatures come from aggregatable versions of the same zero-knowledge proofs that underlie the signatures themselves. So we'll look at the kind of Fiat-Shamir-like signatures that already appeared in the previous talk. And to make these aggregatable zero-knowledge proofs, which form the multi-signatures, achieve good exact security, we need the aggregatable zero-knowledge proofs to have straight-line simulation. I will explain that, and I will explain the two tools that we use to achieve this type of aggregatable, straight-line-simulatable zero-knowledge proof. Basically, we don't get full zero-knowledge; we get something we call structured-instance zero-knowledge, which means that you can simulate, but only on certain kinds of instances. And in order to get that, we use equivocable commitments. We actually constructed an equivocable commitment under some restrictions that are nevertheless good enough to simulate these zero-knowledge proofs. Okay, so that's the plan of the technical part.

Okay, so let's see where aggregatable zero-knowledge proofs come from. Let's look at the proof of possession of an e-th root. This is the source of the RSA-based signature by Guillou and Quisquater. The prover has an e-th root of the public key, which is the hash of his identity; this is in the random oracle model.
What he does: he picks a random group element and applies the RSA function to it. Then, on getting a challenge, he replies with a combination of the preimage of this temporary value created in the proof and the preimage of the permanent public value. And now, if you compute the function on this response, you get the product of the temporary value and the permanent value raised to this random challenge. This is a proof of knowledge: if the prover can create these kinds of responses on random challenges, you can extract the underlying e-th root of the public key.

And how does simulation work? You pick a random z in the group and a random challenge, and you can create the first message just by moving things to the other side of the verification equation; it is computable in the forward direction given the response and the challenge. And in the non-interactive version of this proof, which forms the basis of the Guillou-Quisquater signature, all you need to do is embed this challenge as the response of the hash function on the pair made of the public key and this first message. So the full signature is this first message together with the response to the challenge computed by the hash function.

And now, why can we aggregate it? Well, because if many provers perform this kind of proof in parallel, and somehow the challenge is the same in all these proofs, then by the homomorphic properties of this arithmetic you can multiply all the first messages, multiply all the keys, create the challenge by hashing them together, and multiply the responses; and if you multiply all the verification equations, you see that the combined verification equation is met. The soundness argument is the same as before; I'm skipping it, but it's not very complicated. So how would we simulate this?
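To make the arithmetic concrete, here is a minimal Python sketch of the Guillou-Quisquater proof of an e-th root, its simulator, and the multiplicative aggregation just described. The toy modulus, the parameter names, and the use of locally generated keys (rather than keys derived from hashed identities by a key-generation center) are illustrative assumptions, not the paper's exact scheme.

```python
import math
import random

# Toy parameters; a real deployment would use a ~2048-bit RSA modulus.
p, q = 1009, 1013
N, phi = p * q, (p - 1) * (q - 1)
e = 65537  # public prime exponent, coprime to phi

def keygen():
    """Return (y, x) with x^e = y mod N; in the ID-based setting y = H(identity)."""
    while True:
        x = random.randrange(2, N)
        if math.gcd(x, N) == 1:
            return pow(x, e, N), x

def commit():
    """Prover's first message: A = r^e for random r (state r kept for the response)."""
    r = random.randrange(2, N)
    return pow(r, e, N), r

def respond(r, x, c):
    """Response to challenge c: z = r * x^c mod N."""
    return (r * pow(x, c, N)) % N

def verify(y, A, c, z):
    """Check z^e == A * y^c mod N."""
    return pow(z, e, N) == (A * pow(y, c, N)) % N

# Single proof.
y, x = keygen()
A, r = commit()
c = random.randrange(e)
assert verify(y, A, c, respond(r, x, c))

# Simulation: pick the response z and challenge c first, solve for A = z^e * y^(-c).
z_s, c_s = random.randrange(2, N), random.randrange(e)
A_s = (pow(z_s, e, N) * pow(y, -c_s, N)) % N
assert verify(y, A_s, c_s, z_s)

# Aggregation: n proofs sharing one challenge multiply together componentwise.
n = 5
keys = [keygen() for _ in range(n)]
firsts = [commit() for _ in range(n)]
c = random.randrange(e)  # in the scheme, c is a hash of all keys and first messages
A_agg = math.prod(A_i for A_i, _ in firsts) % N
y_agg = math.prod(y_i for y_i, _ in keys) % N
z_agg = math.prod(respond(r_i, x_i, c)
                  for (_, x_i), (_, r_i) in zip(keys, firsts)) % N
assert verify(y_agg, A_agg, c, z_agg)
```

The aggregate check works because z_agg^e equals the product of the individual z_i^e, each of which equals A_i * y_i^c.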
Okay, so let's say there is one honest party and everybody else is the adversary, so the simulator needs to perform the protocol on behalf of this one honest party, P1. Let's try to follow the same procedure: the simulator picks the response of this party, z1, and a challenge, and computes this party's first message just like in the single-proof case. But now, embedding this challenge into the proper hash-function query is not clear, because both the aggregated first message in these proofs and the statement on which all of these proofs are happening, namely the product of the public keys, are not up to the simulator to decide. Once the first player announces his A1 and his y1, all the others can be created by malicious players in any fashion they want. So it seems like the malicious players control the resulting aggregated values, and the question is: how can the simulator know which of these hash queries to embed the challenge into?

Well, one approach is to guess. In the random oracle model, there is a limited number of hash queries that an efficient adversary can make. So you guess, and you rewind every time you guess wrong, and this gives you at least that many rewindings per signature-protocol instance. This idea works, but there is heavy security degradation: at least that many rewindings per instance, and maybe actually more. And the scheme wouldn't be concurrently secure, because you cannot rewind many concurrently executing instances of such a scheme. And since it's a distributed scheme, it doesn't seem nice if the players have to agree on which instance they are executing; you couldn't pipeline this thing. So it doesn't seem satisfactory. Okay.
So what Bellare and Neven did, in order to get this with good exact security and with concurrent security, was to say: okay, why don't we add one more round in which every player commits to these values using equivocable commitments, implemented in the random oracle model, since we are in ROM already anyway? This is an equivocable commitment in ROM because the commitments are random values, so a simulator can publish just random strings, then extract the Ai's from the adversary, compute the resulting product the way he likes, compute the challenge and the response, and compute his own contribution on the basis of that. So the simulation works fine, but we have three rounds. Okay.

So what we propose is to remove the equivocable commitment that requires one more round, by modifying the proof system so that the equivocable commitments are combined into the first round of the proof. The result is that, if it were a free-standing proof system, it would be a three-round proof. It's actually the same technique that Damgård used to compile honest-verifier zero-knowledge into general zero-knowledge using equivocable commitments. What's special here is that we need these commitments to be not only equivocable but also homomorphic: if everybody submits his first message under an equivocable commitment, we still want to be able to multiply all these commitments to get a commitment to the product of the Ai's, and similarly with the other commitments; fast-forwarding, it will turn out to be a summation in the case of our construction. There are some operations we need to implement to form a decommitment to the combined commitment. Okay. And just like in Damgård's compilation, what you get using equivocable commitments in this fashion is not only zero-knowledge, it's concurrent zero-knowledge, in particular because there is a straight-line simulator for it. Okay.
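The equivocable-commitment-in-ROM idea above can be sketched in a few lines: an honest commitment is just a hash of the message and fresh randomness, while the simulator publishes a random string first and programs the oracle later. This is a generic illustration of the technique (class and function names are mine, and details such as fixed-length randomness are simplifying assumptions), not the exact scheme from the paper.

```python
import hashlib
import os

class ProgrammableRO:
    """A dict-backed 'random oracle' that a simulator can program at fresh points."""
    def __init__(self):
        self.table = {}

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            # Model a fresh random answer with a hash of the query point.
            self.table[x] = hashlib.sha256(x).digest()
        return self.table[x]

    def program(self, x: bytes, value: bytes):
        # Programming only succeeds if the point was never queried before,
        # which models why equivocation works except with small probability.
        assert x not in self.table
        self.table[x] = value

H = ProgrammableRO()

def commit(m: bytes):
    """Honest commitment: hash of message plus 16 bytes of fresh randomness."""
    r = os.urandom(16)
    return H.query(m + r), (m, r)

def open_ok(com: bytes, m: bytes, r: bytes) -> bool:
    return H.query(m + r) == com

# Honest use: commit, then open.
com, (m, r) = commit(b"first message A")
assert open_ok(com, m, r)

# Simulator: publish a random string now, equivocate to a chosen message later.
fake_com = os.urandom(32)
m2, r2 = b"simulated A", os.urandom(16)
H.program(m2 + r2, fake_com)
assert open_ok(fake_com, m2, r2)
```

The limitation the talk is about shows up exactly here: this generic ROM commitment is equivocable but not homomorphic, so it costs an extra round, which is what the paper's homomorphic construction avoids.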
And the resulting protocol has good exact security, and it's concurrently secure. There is actually an alternative construction that just uses witness-indistinguishable proofs, which are also straight-line simulatable and would give you the same exact security. The only disadvantage of that approach is that, as far as we could tell, with a witness-indistinguishable version of the e-th root problem, the resulting construction has longer signatures and verification time than the one we have here. Okay.

So now, what's there to do? Well, this is "equivocable" only in quotation marks: we don't exactly get equivocable commitments, and that's what I'm going to explain next. How much time do I have? Okay. So, the reason we only get restricted equivocability is that we unfortunately don't know of any fully equivocable commitment scheme that would be multiplicatively homomorphic. If we knew one, we would just use it, and that would be the end of the story right here. But we don't know of such a construction, and we didn't construct it ourselves. Instead, we get a homomorphic commitment with restricted equivocability.

Here is the nature of that restriction. If you look at this proof system and how it is simulated, from the simulator's point of view the first message has this form: the simulator picks a random element in the group as the final response, say a random challenge, and forms the first message from them. The first message that he will eventually have to reveal will be of this form. So if you transmit this message under a commitment, this is the type of message that you will eventually have to equivocate to. If you are a simulator, you will create this commitment using some trapdoor procedure, but this is the form of the message that you will be equivocating to. Okay.
So in other words, we only have to open messages that are of this form. And how do we do it, in the absence of generally equivocable commitments? Well, we are able to open such messages because we embed the public key of the challenge into the trapdoor key of the commitment scheme. But this creates a slight problem, because it would mean that simulation works only for a single statement y. The simulator, knowing that he has to simulate for, say, the first player, sees that player's public key, embeds it into the trapdoor key of the commitment scheme, and then he can simulate proofs on behalf of this first player. But this is not good enough for identity-based signatures. And why? Because in identity-based signatures, the public keys of all honest players are related. The simulator in the identity-based signature scheme has to play on behalf of all honest players, and he cannot pick their keys in an independent fashion and just run the simulation procedure independently for every player. He has to have one simulation that succeeds on all uncorrupted players in parallel. Okay.

So we don't know how to simulate this for every choice of public keys, but we can nevertheless simulate it on a special type of public keys we call a structured instance. Here is how it's formed: the simulator, given some challenge to the RSA problem, the e-th root problem, forms the public keys of all players by shifting the challenge by random vectors, where f is the RSA function; so each shift is a delta to the e. Now, if these are the public keys of all honest players, a forgery under any of these keys really implies a forgery under the challenge. Okay, so that's why the forgery argument goes through. Now, what about simulation? This is what we mean by a structured instance: the simulator manages to simulate the proofs under all public keys formed in this fashion. Okay. So, for the equivocation.
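The reduction step just described, where a forgery under any shifted key yields an e-th root of the original challenge, can be checked numerically. This is a toy sketch with illustrative parameters; the "forgery" is faked using the factorization, which a real adversary of course does not have.

```python
import math
import random

# Toy RSA parameters; the reduction is given y and wants an e-th root of y.
p, q = 1009, 1013
N, phi = p * q, (p - 1) * (q - 1)
e = 65537

def rand_unit():
    """Random element of Z_N^*."""
    while True:
        u = random.randrange(2, N)
        if math.gcd(u, N) == 1:
            return u

y = rand_unit()  # the e-th-root challenge

# Structured instance: honest players' keys are shifts y_i = y * delta_i^e mod N.
deltas = [rand_unit() for _ in range(4)]
pubkeys = [(y * pow(d_i, e, N)) % N for d_i in deltas]

# Stand-in for an adversary's forgery: an e-th root of some y_i.
# (Here we cheat and compute it with the secret phi.)
i = 2
x_i = pow(pubkeys[i], pow(e, -1, phi), N)
assert pow(x_i, e, N) == pubkeys[i]

# The reduction strips off delta_i and obtains an e-th root of the challenge y.
root = (x_i * pow(deltas[i], -1, N)) % N
assert pow(root, e, N) == y
```

Dividing out delta_i works for whichever key the forgery lands under, which is why one embedding of the challenge covers all honest players at once.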
Now, this is the message that you need to equivocate to, for y's that are of this structured type; so technically, this is the form of the message that you will need to equivocate on. And here is a commitment scheme that allows such equivocation. K is another group element, the committed message is exponentiated to another prime, and the randomness in the commitment, the decommitment, is rather curiously chosen in this group. And technically, binding holds as long as the decommitment is shorter than this prime; that's not easy to see.

And here is the equivocation procedure. We embed the challenge into the commitment key, and for every instance of the signature protocol, we form the commitment to the first message using this trapdoor commitment procedure, by taking the challenge and picking yet another long-enough integer here. When you plug in the numbers, this is a message of the right type; what's important is that e-prime on both sides cancels. And this is satisfied as long as you find some decommitment and some response which satisfy this equation, and you can do that if you make this power a multiple of e; in other words, this integer is a multiple of e. So you compute d in this fashion to ensure the equation, and once this equation is met, the response can be computed in this way. And if you plug this d in here, what you'll see is that this is now a power of e, and because of this equation, the inequality here holds. Okay.

So here's the non-intuitive thing about the construction: why is the decommitment chosen in this subgroup? Well, because in this equivocation procedure you are computing the decommitment modulo e, for long-enough random strings s and r. So in the equivocation, d will be random mod e, and therefore in the commitment, d must also be random mod e. Okay.
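The fact that d is random mod e feeds directly into the sizing constraint discussed next: these decommitments are later summed over the integers across up to L cosigners, so binding, which needs the combined decommitment to stay below e-prime, survives as long as e-prime exceeds L times e. A toy arithmetic check, where L and the e-prime value are hypothetical illustrative choices (and primality of e-prime is not checked here, although the scheme requires it):

```python
import random

e = 65537                # the e of the underlying signature scheme
L = 100                  # assumed maximum number of cosigners
e_prime = 2 * L * e + 1  # stand-in for a prime chosen larger than L * e

# Each decommitment is random mod e; aggregation adds them over the integers.
ds = [random.randrange(e) for _ in range(L)]
agg = sum(ds)

# The aggregated decommitment stays below L * e, hence below e_prime,
# so the binding bound (decommitment shorter than e_prime) still holds.
assert agg < L * e < e_prime
```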
And because we want to aggregate these things and we don't know the order of the group (remember, we need to aggregate both the commitments and the decommitments, and the decommitments are just short integers), the aggregation is a summation over the integers. And because binding works only if the decommitment is shorter than e-prime, and these d's are random numbers mod e, we need their sum to still be smaller than e-prime, where L is the maximum number of players in the multi-signature scheme, that is, the maximum number of decommitments that you're going to add over the integers. One reviewer asked why our e-prime is not the same as the e of the signature scheme, and this is the answer. Okay.

So I conclude. The contribution is two-round aggregate signatures under RSA, and the structured-instance aggregatable proof construction as an alternative to witness indistinguishability. Maybe there are some other massive multi-party protocols where it would be a good idea to aggregate proof systems in this fashion. And some open questions relating to aggregate signatures. Well, can we have aggregate signatures with message recovery? Neven proposed such schemes; so can we reduce the bandwidth further by putting message bits into the aggregated signature? That would be relevant if the messages are very, very small and you really need to reduce bandwidth. Another thing is that we still don't have one-round aggregate signatures under any assumption, even in the random oracle model. And we don't have any aggregate signatures without random oracles, I mean, any efficient ones. And a maybe interesting application question: combining aggregation with forward security appears non-trivial. And forward security...
after talking to some security person, I realized that this might be an interesting problem for sensor networks. You have sensors that keep signing some messages, some measurements that they take every 10 minutes, and they update their keys so that if somebody corrupts a sensor, they cannot backdate these signatures, right? The key has been evolved forward. But in this application, rather than accumulating a massive number of signatures, one every 10 minutes, why doesn't the sensor aggregate them at the same time as it creates them? We don't seem to have a construction like this, so this appears to be a kind of maybe cool open question. Thanks.