From Identification to Signatures, Tightly: A Framework and Generic Transformations. The speaker is Bertram Poettering. Okay, I'm talking today about identification schemes, about signature schemes, and how to build the one from the other. What I present today is joint work with Mihir Bellare from UCSD and with Douglas Stebila from McMaster University in Hamilton, Canada. I start with a brief introduction of what a signature scheme is. I guess all of you know what this is, and we have seen it a couple of times at this conference already, so this is just to fix the syntax. We have a signing algorithm that outputs a signature, which I denote with sigma; it gets a secret key, a signing key, and a message. The verification algorithm is on this side; it gets a verification key, a message, and a signature, and it indicates with 0 or 1 whether the signature on M is considered valid or not. Everybody depends on signature schemes nowadays; there are uncountable examples, for instance TLS, e-commerce, and so on. Signature schemes are also widely standardized. These two are, for example, RSA-based signatures, and these are discrete-log-based signatures that appear in one standard or another. If we focus on the latter four here, they have something in common: at least two of them come from the Fiat-Shamir transform of some identification scheme into a signature scheme. So this is a general principle for signatures: a conversion from here to this side. The Fiat-Shamir transform was proposed in 1986, and it is quite versatile because it takes a big class of identification schemes and makes signature schemes out of them. We have signatures based on factoring, on RSA, on the discrete logarithm problem; it's all there. This also holds for the very efficient ones that were standardized: for example, these two are direct instantiations of this, just in different discrete logarithm groups. 
This one is grayed out because DSA and ECDSA are not literally obtained via the Fiat-Shamir transform, but the same way of thinking is behind them. What I put down here on the slide is a brief history of the security arguments for the Fiat-Shamir transform. When it was proposed in 1986, there was just a hope for security; the security was not understood. This changed with Pointcheval and Stern, around 1996. And then in the early 2000s the reductions were made to the security properties of ID schemes. In principle, we try to extend this here to a fourth bullet point. So what is this paper about in this setting? The Fiat-Shamir transform is there, but it has a big problem: the security reduction is inherently untight. We always lose a huge factor, of 2^60 or so; this is the number of random oracle queries that we allow. There are some exceptions, if you have lossy identification schemes, but in general we have untightness, and this is due to the forking lemma or the reset lemma, whichever the paper you consulted used; they are very similar. And untight reductions have, in principle, the problem that they blow up your key sizes and your signature sizes, unless you disregard tightness completely, but we don't do that, for this talk at least. There is an exception though: there is a scheme, or rather a conversion, called swap. It appears in a journal paper of Micali and Reyzin, also from the early 2000s, and it is factoring based. It has a tight reduction; however, it does not follow from the Fiat-Shamir transform. So this is the setting that we started with, and we have the following three contributions. First, we extend the class of identification schemes that can be considered for these transformations by adding another functionality: these are schemes with trapdoors, and there is a couple of instantiations of this. 
The second part is that we take this class of identification schemes and propose new transforms that turn them into signature schemes. This is four new transforms that we propose; importantly, the reductions in all four cases are tight. And the third point is that we go back to the swap scheme and try to re-understand it in that framework, because so far swap was, I would say, ad hoc. The first step is to look more into the security of signature schemes. So far I gave you the syntax. The security definitions, first of all, you know them, and second, we just saw them in the last talk. The most common security notion is existential unforgeability under chosen-message attack, which we denote with just UF. There you have an adversary that needs to forge on any message, and it has a signing oracle that signs any message it provides. Then there is a second, more technical notion that we need in this paper, which we call unique unforgeability, UUF. It is very similar to standard unforgeability, with one exception: the signing oracle can be queried at most once on each message. So the adversary can say, sign the string Alice, but it cannot say a second time, sign the string Alice for me. That's the only difference. Of course, the second notion is more restrictive than the first, so there is an implication from here to this one, and one can also show that this implication is strict. The question is then: do we have transforms that bring the weak notion to the strong one? I'm showing two transformations for this on the next slide. Importantly, these transforms have tight reductions, and because our overall goal is to go from identification schemes to signature schemes with UF security, from now on the goal will just be to go from identification to UUF. This is sufficient because we can then just run one of these transforms afterwards. 
So how do these two transforms look? In principle, both of them can be considered folklore, because they appeared somewhere else, but not with a security analysis, and more importantly, not with a check whether the reductions are tight, so I just repeat them here. The two transforms take two very different paths to achieve the goal. The first one, which we call DR, works by removing randomness from the scheme. This is the standard trick of derandomizing the signing algorithm: the randomness it will use is derived from the message using a PRF. Because our paper is full of random oracles anyway, we just use another random oracle, independent of the other one, plug in the signing key of the scheme and the message, and that is our PRF construction. An advantage of this transform is that the signatures stay the same in format, and also the verification, because we only derandomize the signing algorithm; the disadvantage is that we need one more random oracle. The second technique is adding randomness, and here we just add a salt to the message. Instead of signing a message, we pick a salt of 160 bits or so, concatenate it to the message, and sign the whole thing. This does not prevent, in an absolute sense, that I ever sign a message twice, but effectively it does, because if the salt does not repeat, then salt concatenated with M will always be different. This also works. The disadvantage is that the signature will have the salt appended, so I need to spend another 160 bits; on the other hand, this one is standard-model secure, so pick your choice. I think the DR one is more interesting because the signatures are shorter. Importantly, in both cases we have tight reductions. So from now on, we use UUF as the goal to achieve in our transforms from identification schemes. Identification schemes, what's that? 
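As a toy illustration of the two UUF-to-UF transforms just described, here is a minimal Python sketch. The underlying randomized scheme `toy_sign` is a hypothetical stand-in of my own, not from the paper; DR derives the signing coins from the key and the message via a second, domain-separated hash modeling the independent random oracle, and AR prepends a fresh 160-bit salt that travels with the signature.

```python
import hashlib
import secrets

def toy_sign(sk: bytes, msg: bytes, coins: bytes) -> bytes:
    # hypothetical randomized signing algorithm (placeholder for a real scheme);
    # its output depends on the random coins it is given
    return hashlib.sha256(sk + msg + coins).digest()

def dr_sign(sk: bytes, msg: bytes) -> bytes:
    # DR transform: derive the coins deterministically from (sk, msg) using a
    # second, independent random oracle (modeled by a domain-separated hash)
    coins = hashlib.sha256(b"RO2|" + sk + b"|" + msg).digest()
    return toy_sign(sk, msg, coins)

def ar_sign(sk: bytes, msg: bytes):
    # AR transform: salt the message with 160 fresh bits; the salt is appended
    # to the signature so the verifier can reconstruct salt || msg
    salt = secrets.token_bytes(20)
    return salt, toy_sign(sk, salt + msg, secrets.token_bytes(32))

sk = b"secret-signing-key"
# DR: signing the same message twice yields the same signature
same = dr_sign(sk, b"Alice") == dr_sign(sk, b"Alice")
# AR: the effectively signed strings differ because the salts differ
s1, _ = ar_sign(sk, b"Alice")
s2, _ = ar_sign(sk, b"Alice")
```

So under DR the signing oracle effectively answers each message only once, while under AR no message string is ever signed twice (up to salt collisions), which is what lifts UUF to UF.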
This slide, or a similar one, I've seen at least twice yesterday in two talks. This is the standard syntax of identification schemes, as it has been for decades. We look at a three-move identification scheme: there is a prover here on the left and a verifier on the right. The three messages are called commitment, challenge, and response, and we denote the commitment with uppercase Y, the challenge with C, and the response with Z. The commitment is sampled using a commitment generation algorithm that we denote CMT; it outputs the commitment itself, which is then made public, and also some local state of the prover, lowercase y. This y goes into the response algorithm, which outputs the response, given also the challenge contributed by the verifier; and this is how the protocol runs. The concatenation of Y, C, and Z we call the transcript, and that is what the verifier verifies in the end. Okay, that's standard. What we also want to look at in this talk are identification schemes with trapdoors. So what's that? Look here: currently the commitment and the local state, the secret state, are jointly sampled. In a trapdoor identification scheme, this one algorithm is replaced by two. The first one just samples the commitment from the commitment space, so this is not an algorithm, this is just a set, the commitment space; and then, given some trapdoor, a special algorithm can compute the local state from the commitment. Only the prover can do this, because only the prover has TK. So imagine SK and TK being the same thing, the signing key, the identification secret key. 
The remaining parts stay the same, and what I want from the trapdoor property is, well, correctness as before, but also that the pairs (Y, y) generated in this new way have the same distribution as the old ones. So that's an identification scheme with trapdoor. Now we need security notions for identification schemes. The ones that I propose here are independent of whether there is a trapdoor or not. The standard one, known for a long time, is impersonation resistance. Basically, the adversary tries to impersonate the prover to an honest verifier, so the verifier remains honest. In such an attack, the adversary gets the public information, that is, the public key, so all the information that the verifier has; it gets a transcript oracle that simulates an honest execution of the protocol, so it generates Y, C, and Z and gives this transcript to the adversary; and it gets a challenge oracle. The challenge oracle is the impersonation step: in the challenge phase, the adversary comes up with any commitment it likes and sends it to the honest verifier, which samples the challenge uniformly at random, sends it back, and expects a good answer, one that verifies correctly. The transcript oracle models passive attacks: the adversary first sits there and observes how people are communicating, and then in the end tries to impersonate. This has been formalized in the paper by Abdalla et al.; the notion is called impersonation against passive adversaries, IMP-PA. Importantly, in that paper only one challenge query may be made, so the adversary can only try once to convince the verifier. How can we build signatures from this notion? Well, via the Fiat-Shamir transform. This is also well known. The reduction is not tight; it loses a big factor. If you lean back a bit and ask yourself why it is not tight, then there are a couple of reasons. 
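To make the three-move syntax concrete, here is a sketch of the classic Schnorr identification scheme in Python, with toy parameters (an order-11 subgroup of Z_23^*; real deployments use large groups). This is my own illustration of the plain syntax, not a slide from the talk, and note that Schnorr is not known to have a trapdoor in the sense just defined.

```python
import secrets

# toy Schnorr group: g = 2 has prime order q = 11 in Z_23^*
p, q, g = 23, 11, 2

sk = secrets.randbelow(q)           # prover's secret key x
pk = pow(g, sk, p)                  # public key X = g^x mod p

def cmt():
    # commitment generation CMT: outputs the public commitment Y
    # and the prover's local state y
    y = secrets.randbelow(q)
    return pow(g, y, p), y

def rsp(y, c):
    # response algorithm: uses the local state y and the challenge c
    return (y + c * sk) % q

def verify(Y, c, z):
    # the verifier checks the transcript (Y, c, z): g^z == Y * X^c
    return pow(g, z, p) == (Y * pow(pk, c, p)) % p

# honest protocol run: commit, challenge, respond
Y, y = cmt()
c = secrets.randbelow(q)            # verifier's uniform challenge
z = rsp(y, c)
ok = verify(Y, c, z)
```

The triple `(Y, c, z)` is exactly the transcript the talk refers to, and `verify` is the verifier's final check.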
The first one is a technical one: the IMP-PA notion allows only a single challenge query, which is not helpful; with multiple challenge queries the situation would be better. The second is that the adversary has the free choice of the commitment. If you study the forking lemma, for example, then you have this structure where the adversary forges, you rewind it and run it a second time, but possibly it will forge using a different commitment, and then you are in trouble; this is also a root of the untightness. The question we want to ask in this paper is: are there alternative security notions for identification schemes that would allow for tight reductions to signature schemes? We propose some. Actually, we propose four, and this is why we call it a framework of notions. We call this constrained impersonation, because we constrain the adversary in some more or less artificial way. The four notions are called CIMP-CC, CIMP-CU, CIMP-UC, and CIMP-UU; these are the identifiers for the four notions. In our setting, the adversary again has access to the public key; it has access to a transcript oracle, which it can query multiple times to generate fresh transcripts; and it has access to a challenge oracle as before, but now what the challenge oracle does depends on the exact notion. So there are four different challenge oracles, and I will tell you how they work in a second. The goal of the adversary is again forging a transcript, and, importantly, in this framework multiple queries are allowed to both oracles. In IMP-PA before, there was only one query to the challenge oracle; now this is unlimited. Now I'll tell you what the combinations CC, CU, UC, and UU stand for. They stand for the restrictions that we pose on the commitment and on the challenge in the attack phase, the impersonation phase. 
So the first letter tells us about the commitment: a C stands for chosen, chosen by the adversary, and a U stands for unchosen by the adversary. For example, if you look at the case CU, this means the commitment is chosen at will by the adversary, but the challenge, the little c, is picked honestly at random. And this notion, CIMP-CU, is actually exactly like IMP-PA from before, except that we allow an unlimited number of challenge queries. The other notions are all new, basically. Now, this was likely a bit too abstract. What does it mean to be chosen by the adversary, or reused from an honest transcript? So I give you the game definitions on this slide. This is the main body, the CIMP experiment. It is as you expect: key generation of the identification scheme; the adversary is invoked, it gets the public key, and it gets two oracles. The transcript oracle is here; that is common to all four notions. The transcript oracle generates a transcript and gives it to the adversary. That's fine. And then there is the challenge oracle. The CU notion, the one from before that is close to IMP-PA, has this challenge oracle here, and you see that the adversary calls it and specifies a commitment. Then a uniform challenge is picked, and the oracle returns this partial transcript: the Y value, the C value, and a place to be filled by the response that the adversary has to output. The adversary can query this many times, so there will be a lot of partial transcripts, and the adversary wins if it can fill this gap for any of these partial transcripts. The second one I want to walk you through is the UC notion. In the UC notion, the adversary first has to invoke the transcript oracle a couple of times, and this establishes some Y values that are honestly sampled. 
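As an illustration of the game structure for CIMP-CU, here is a sketch of my own, using Schnorr with toy parameters as the underlying scheme (the paper's formalization is more general): the transcript oracle hands out honest transcripts, the challenge oracle takes an adversarial commitment and returns a partial transcript with a fresh uniform challenge, and the adversary wins by completing any partial transcript. The final lines only check completeness, namely that someone who knows the secret key can always win.

```python
import secrets

p, q, g = 23, 11, 2                    # toy Schnorr group of order 11
x = secrets.randbelow(q)               # secret key
X = pow(g, x, p)                       # public key, given to the adversary

partials = []                          # partial transcripts (Y, c) awaiting z

def tr_oracle():
    # honest transcript oracle: models passive eavesdropping; may be
    # queried many times for fresh transcripts
    y = secrets.randbelow(q)
    c = secrets.randbelow(q)
    return pow(g, y, p), c, (y + c * x) % q

def ch_oracle(Y):
    # CIMP-CU challenge oracle: the commitment Y is chosen by the adversary,
    # the challenge is sampled uniformly; multiple queries are allowed
    c = secrets.randbelow(q)
    partials.append((Y, c))
    return len(partials) - 1, c

def wins(i, z):
    # the adversary wins by filling the response gap of any partial transcript
    Y, c = partials[i]
    return pow(g, z, p) == (Y * pow(X, c, p)) % p

# completeness: a party knowing x can always answer the challenge
y = secrets.randbelow(q)
i, c = ch_oracle(pow(g, y, p))
won = wins(i, (y + c * x) % q)
```

For the UC variant one would instead have `ch_oracle` take an index `i` pointing into transcripts produced by `tr_oracle`, plus an adversarially chosen challenge.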
Then, when calling the challenge oracle, the adversary just specifies an identifier i, a counter. This i points to one of the Y values that were established here, and that one is used in the transcript that has to be filled. So this is what the U, unchosen, stands for in the commitment position; the challenge, however, is freely picked by the adversary. Well, and then the UU and the CC cases work as you would expect: here both of them are sampled honestly, and here both of them are provided by the adversary. Now we have four notions, and of course there are relations between them, and they are as you might expect. This one is the strongest, where the adversary picks everything by itself; it implies this one and this one, and the two in the middle, CU and UC, independently of each other imply UU, while being incomparable to each other. All the green arrows here are strict implications, and the separations can be shown. This structure of how the notions relate to each other will be important for the upcoming slides, so I add this little diamond, which is just this figure rotated by 45 degrees. It will stay in the top right corner of all slides from now on, so that we can refer to it. Now that we have these notions, we can start building signature schemes from them, and for each of the notions I will give you one construction. The first one builds on CU. CU is the standard notion that you know, with the impersonation where the prover is impersonated by the adversary and the adversary picks any commitment of its choice. The transform to build a signature scheme from this is the well-known Fiat-Shamir transform. Here you see it; this is standard, there is nothing new, this is basically the standard version from 1986. 
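The Fiat-Shamir transform just mentioned can be sketched as follows, again over toy Schnorr parameters. This is my own minimal sketch, not the paper's exact formalization: the signer generates a commitment, derives the challenge as a random-oracle hash of commitment and message, and computes the response; the signature here is the pair (Y, z).

```python
import hashlib
import secrets

p, q, g = 23, 11, 2                     # toy Schnorr group of order 11

def H(Y: int, msg: bytes) -> int:
    # random oracle producing the challenge from commitment and message
    data = Y.to_bytes(4, "big") + msg
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

sk = secrets.randbelow(q)
pk = pow(g, sk, p)

def fs_sign(msg: bytes):
    y = secrets.randbelow(q)
    Y = pow(g, y, p)                    # commitment: freely chosen by the signer
    c = H(Y, msg)                       # challenge: fixed by the random oracle
    z = (y + c * sk) % q                # response
    return Y, z

def fs_verify(msg: bytes, sig) -> bool:
    Y, z = sig
    c = H(Y, msg)                       # recompute the challenge
    return pow(g, z, p) == (Y * pow(pk, c, p)) % p

valid = fs_verify(b"Alice pays Bob", fs_sign(b"Alice pays Bob"))
```

Note how a forger may put any Y it likes into the signature, but the challenge is pinned to H(Y, m), which is exactly the CU flavor discussed next.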
What we see, however, is that in the forgery, these lines of code already show that the adversary has a free choice of the commitment. With the challenge, however, it is very limited, because the challenge has to be the output of the random oracle. This naturally relates to the CU notion, where the commitment is chosen by the adversary and the challenge is not. For this scheme, because we now have multiple challenge queries, we get this security result: unforgeability tightly reduces to CIMP-CU. This improves on the prior result by identifying the right notion for the identification scheme to get tight security; however, it is the same transform as before, so this is not too interesting yet. Now comes something new. We switch to the UC setting. In the UC setting, the commitment was not freely chosen by the adversary, but the challenge was. Here is the signature scheme, and you see this is reflected: now the commitment comes from a random oracle, so the adversary in a forgery cannot freely pick it; it has to coincide with the output of the random oracle. But the challenge it can pick as it likes. And for this scheme, against this new notion here on the right, we again have tight security. Here we see an example where only the weaker notion, UUF, is implied; and if you do the exercise, you will see that UF is not implied, there is a trivial attack against that. So that would be the second scheme. The third one is for the UU notion, the weakest one. The scheme now has one random oracle for the commitment and a second, independent random oracle for the challenge, so the adversary, when forging, can neither control the one nor the other. What you also see is that we have this extra one bit, the "magic bit" as I just learned it is called; this gives very short signatures. 
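One plausible reading of the UC-based transform is sketched below; this is my own illustration using Guillou-Quisquater as the trapdoor identification scheme with toy RSA numbers, and the paper's construction may differ in details such as domain separation and how repeated messages are handled. The commitment is the random-oracle hash of the message, the signer recovers the matching local state with the trapdoor exponent d, picks the challenge itself, and outputs (c, z).

```python
import hashlib
import math
import secrets

# toy RSA parameters (real schemes use large moduli): N = 11 * 13
N, e, d = 143, 7, 103                    # e * d == 1 (mod phi(N) = 120)
x = 2                                    # GQ secret key, an element of Z_N^*
X = pow(x, e, N)                         # GQ public key

def H1(msg: bytes) -> int:
    # hash onto Z_N^*: the commitment is fixed by the random oracle
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
    while math.gcd(h, N) != 1:           # toy-sized fix-up; negligible for real N
        h = (h + 1) % N
    return h

def sign(msg: bytes):
    Y = H1(msg)                          # commitment: not chosen by the signer
    y = pow(Y, d, N)                     # trapdoor recovers the local state
    c = secrets.randbelow(e)             # challenge: chosen freely
    z = (y * pow(x, c, N)) % N           # GQ response
    return c, z

def verify(msg: bytes, sig) -> bool:
    c, z = sig
    # GQ check: z^e == Y * X^c with Y recomputed from the message
    return pow(z, e, N) == (H1(msg) * pow(X, c, N)) % N

ok = verify(b"Alice", sign(b"Alice"))
```

A forger must use Y = H1(m) but may pick c freely, mirroring the UC notion; and signing the same message twice with different challenges is exactly why only UUF, not UF, is obtained directly.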
Again, we get UUF with tight reductions from this notion here. And now we go for the upper notion, CC. You might wonder what the point is of having something from CC at all, since it is the strongest notion and we could just use one of the other transforms. Well, the advantage is that some of the others use the trapdoor property, which is a restriction on the identification scheme, while this one does not. So the transform that comes from CC does not use the trapdoor property; we get longer signatures, though. Again, we have a tight reduction to the CC property. And that's cool: we now have tight signature schemes from identification schemes. However, if you just believe it as I said it, then I have successfully cheated you, because I didn't tell you whether these notions for ID schemes are actually naturally achievable. So I have this slide about how we can achieve these notions for ID schemes. We have this theorem first: if we look at ID schemes that are honest-verifier zero-knowledge and extractable, which would be the standard case also for the Fiat-Shamir transform, and we look at a security property of the ID scheme that we call key recovery resistance, meaning the adversary gets just the public key, no oracles or anything else, and has to output the secret key, then we already get UC for free. So this one we get for free, and so we get this one. On the CU side, however, we lose a factor of Q_Ch, so overall the Fiat-Shamir transform, if you start with, for example, a DLP-based zero-knowledge proof, loses exactly the same factor as in the prior papers. Where we do better is with this notion and this notion, where we don't lose tightness to the number-theoretic assumptions. 
The fourth notion, CC, the one here on the top, recall, was the one where the adversary freely chooses both the commitment and the challenge. This cannot be achieved in this world of honest-verifier zero-knowledge schemes. Why? Because honest-verifier zero knowledge means that everybody can simulate transcripts, and this would make it easy to forge one. This does not mean that CC cannot be reached at all, that there is no identification scheme that reaches CC; I give you one here. We just know that such an identification scheme cannot be honest-verifier zero-knowledge. So effectively I give you a scheme here that reaches CC trivially. However, I also have to say there is a limitation, in the sense that this identification scheme is one we constructed from a signature scheme. So the overall construction would be: take a signature scheme, build an ID scheme, and get a signature scheme back. This is just one construction; it does not mean that every CC identification scheme needs a signature scheme inside, so this might be replaceable by something weaker. In practice, the standard example of a trapdoor ID scheme, an ID scheme that also has a trapdoor and gives you these notions here, is the old one by Guillou and Quisquater. Recall how the commitment is usually computed in Guillou-Quisquater: you sample a lowercase y value from Z_N^* and raise it to the e, where e is an RSA exponent. And this has a trapdoor, because if you know d, you can also go the other way around: you sample the commitment Y first and compute the state y by raising it to the d. How much time do I have? One minute, okay. The last thing I wanted to talk about, and will only do briefly, is how to recognize swap in our constructions. 
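The Guillou-Quisquater trapdoor just described can be checked in a few lines; this is my sketch with a toy RSA modulus. The forward direction samples the state y and sets Y = y^e, while the trapdoor direction samples Y uniformly from the commitment space and recovers y = Y^d; both yield valid pairs with the same distribution, since raising to the e-th power is a permutation of Z_N^*.

```python
import math
import secrets

# toy RSA parameters: N = 11 * 13, with e * d == 1 (mod phi(N) = 120)
N, e, d = 143, 7, 103

def sample_unit():
    # uniform element of Z_N^*, which serves as the commitment space here
    while True:
        a = secrets.randbelow(N)
        if math.gcd(a, N) == 1:
            return a

# forward direction: sample the state y, derive the commitment Y = y^e
y_fwd = sample_unit()
Y_fwd = pow(y_fwd, e, N)

# trapdoor direction: sample the commitment Y first, recover y = Y^d using d
Y_td = sample_unit()
y_td = pow(Y_td, d, N)

# correctness of the trapdoor: raising the recovered state to e gives Y back
round_trip = pow(y_td, e, N) == Y_td
```

Only the holder of d (the prover's trapdoor key TK) can run the second direction, which is exactly the asymmetry the trapdoor definition asks for.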
So effectively, if we take the one transform that I proposed to turn an identification scheme into a signature scheme, and then use the salting transform to turn the signature scheme, which was only UUF secure, into a UF secure one, this is just a composition of two conversions that I showed you. Swap is this, and the only difference you see is that the salt value used for the second transform and the C value, the challenge used for the first transform, are the same; it is one and the same sampled value. Beyond that, it is the same thing, so basically swap is an optimization of this composition, and I think this gives a very clear understanding of the principles behind swap. If you combine other transforms of ours, you get the same result but better: you need weaker security notions and you get smaller signatures, just one bit beyond that. So actually, by combining the right things, we do better than swap. And with this I conclude, thank you. Do we have a short comment or a question? Thanks for your talk. My question is: if you have a trapdoor commitment scheme like this, can't you get a tight signature from a hash-and-sign construction? Sorry? If you have a trapdoor permutation, you can get tight signatures from a hash-and-sign construction. But that wouldn't go through identification schemes. This contribution is more about understanding the connections between ID schemes and signatures; it is not primarily about building a new signature scheme in this setting, the factoring-based setting of swap. We could do that, but there would be no point, because other constructions have been known for twenty years. Okay, so let's thank the speaker again. The last paper in this session is How to Obtain Fully Structure-Preserving Signatures from Structure-Preserving Ones, and the speaker is Yuyu Wang.