Today, I'll be talking about tight security for signature schemes, focusing mostly on signature schemes built from identification schemes. This is joint work with Pierre-Alain Fouque, Vadim Lyubashevsky, and Mehdi Tibouchi. You've probably already heard this so many times today, but one more time, just to recap: this talk is about signature schemes, in which we imagine there is a key generation algorithm that generates a key pair — the secret key used for signing and the public key that can be used for verification. For this type of scheme, what is the standard notion of security? Here we'll be talking about strong existential unforgeability under chosen-message attacks. This is the standard notion in which, given a public key for the scheme, the adversary can query an oracle to obtain valid signatures for messages of its choice, and eventually it has to come up with a new message-signature pair which it did not obtain through the oracle. We consider the scheme secure if the probability of such an event happening is negligible. Since this talk will be focusing mostly on random-oracle-based constructions: in the random oracle model, the two most common methods for building signature schemes are, first, full-domain hash, in which we imagine that we have a trapdoor one-way permutation and a random oracle, and the signature is simply the inverse of the permutation computed on the hash of the message. The other type is building through identification schemes, in which we start with a secure identification scheme and make it non-interactive with the help of a random oracle.
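To make the full-domain hash idea concrete, here is a toy sketch (not from the talk) using textbook RSA as the trapdoor permutation. The parameters are deliberately tiny and completely insecure; this is purely to show the structure "signature = trapdoor inverse of the hashed message":

```python
import hashlib

# Toy RSA parameters (far too small for real use; illustration only).
p, q = 61, 53
n = p * q        # modulus n = 3233
e = 17           # public exponent
d = 2753         # private exponent: e*d = 1 mod phi(n)

def H(message: bytes) -> int:
    # "Full-domain" hash: in a real scheme the output should cover
    # essentially all of Z_n; here we just reduce SHA-256 mod n.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # Signature = trapdoor inverse (the RSA private operation) of H(m).
    return pow(H(message), d, n)

def verify(message: bytes, sig: int) -> bool:
    # Apply the forward permutation and compare with the hash.
    return pow(sig, e, n) == H(message)

sig = sign(b"hello")
assert verify(b"hello", sig)
assert not verify(b"hello", (sig + 1) % n)   # tampered signature rejected
```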
So, in this talk, we'll mostly be considering a typical three-move identification scheme, to which we refer as canonical, in which the prover first sends a commitment, obtains a challenge from the verifier, and then has to come up with a response. The decision of the verifier is a deterministic function of the transcript of the conversation. The typical way of converting such identification schemes into signature schemes is to use the Fiat-Shamir transform, in which we simply compute the challenge as the hash of the message and the commitment. So, it's a pretty standard way of building signatures, but what about the tightness of such constructions? Here we'll only be talking about standard reductions — no non-uniformity — so hopefully there is no controversy. For these schemes, what do we mean by tightness? As Alfred mentioned earlier today, we usually say a scheme is tightly secure if the probability of breaking the scheme is close to that of breaking the underlying assumption, and the time complexity is about the same. And why is this important? First of all, it should help in setting the parameters of the scheme. As Alfred was saying, sometimes this is not really taken into account, but at least with tight security we don't have to worry about this. So, how can we get tight security for signature schemes? Consider, for instance, the alternatives we saw today for getting tight security for full-domain hash. The first one was PSS, in which you add a random salt to the input of the hash function before computing the inverse, and the length of that random salt is usually something like linear in the security parameter. Later, Katz and Wang actually showed that this random salt just needs to be one bit.
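As an aside, the Fiat-Shamir transform just described can be sketched very compactly. Below is a minimal Schnorr-style example (my own toy group parameters, not from the talk): the interactive challenge is replaced by a hash of the commitment and the message.

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime; g = 4 is a square mod p,
# hence generates the order-q subgroup. Far too small for real use.
q = 1019
p = 2 * q + 1      # 2039, prime
g = 4

def H(u: int, m: bytes) -> int:
    # Fiat-Shamir: challenge = hash(commitment, message), reduced mod q.
    data = u.to_bytes(2, "big") + m
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q)          # secret key
    return x, pow(g, x, p)            # (sk, pk = g^x)

def sign(x: int, m: bytes):
    y = secrets.randbelow(q)          # prover's randomness
    u = pow(g, y, p)                  # commitment
    c = H(u, m)                       # hashed challenge (non-interactive)
    z = (y + c * x) % q               # response
    return c, z

def verify(X: int, m: bytes, sig) -> bool:
    c, z = sig
    u = (pow(g, z, p) * pow(X, -c, p)) % p   # recompute the commitment
    return c == H(u, m)

x, X = keygen()
assert verify(X, b"msg", sign(x, b"msg"))
```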
But verification is actually a bit different, because in the Katz-Wang scheme you don't include the random salt in the signature; the verifier just has to try both values, which is okay in this case because there are only two options. Then you have other alternatives, such as the scheme by Goh and Jarecki, which is based on CDH. And we just saw today, in the first talk of this session, the work by Kakvi and Kiltz: they have a tight security proof of RSA-FDH, but the reduction is to the phi-hiding assumption rather than to RSA. So, what about the exact security of signatures from identification schemes? Actually, about ten years ago, together with An, Bellare, and Namprempre, we showed a very simple proof for the Fiat-Shamir transform in the random oracle model, which just assumes that the underlying identification scheme is secure against passive attacks. In that result, you have a loss of a factor Q_H in the security reduction, plus some negligible terms that come up, for instance, from collisions with the programming of the random oracle when trying to answer signature queries. But this says nothing about how one proves the passive security of identification schemes. Usually, to prove that, you use some type of rewinding, and the tendency is that you end up with an epsilon-squared loss in the security reduction to the underlying computational problem. A more direct proof can be given using the forking lemma, but that proof also loses a factor Q_H. So, can we do better than that? Actually, before Katz-Wang — I forgot to include this in the slide — there was a proposal by Micali and Reyzin, the swap method, in which instead of computing the challenge as the hash of the message and the commitment, you set the commitment as the hash of the message and the challenge. And they were actually able to provide a tight security proof.
But it doesn't always work, because you need to be able to compute the response for a given value of the commitment. So, for instance, for discrete-log-based signature schemes, their method does not apply. Katz and Wang actually used a different idea: instead of relying on a proof of knowledge, the Fiat-Shamir heuristic is applied to a proof of membership — for instance, for the DDH problem — and they showed a very tight reduction to the decisional Diffie-Hellman problem. In this work, we extend their results to other settings. In particular, we show new schemes based on the decisional short discrete log problem, on ring-LWE, and on subset sum. All of these schemes are actually quite simple, but to prove them secure, we give a generic proof which formalizes the intuition behind the Katz-Wang signature. To give this generic proof, we propose the notion of a lossy identification scheme. And our result is generic: we're talking about the Fiat-Shamir transform here, but there is no Q_H factor loss in the reduction. For the rest of the talk, I'll first very briefly recall passive security for identification schemes, which will help us understand how it differs from the notion of lossy identification schemes, and then I will talk about instantiations. As I was saying before, the identification schemes that we're considering here are of a very particular type: they have three moves — commitment, challenge, response — in which the verifier's decision is a deterministic function of the transcript, and the challenge is a random string of a given length. In the earlier work I mentioned on the passive-security reduction, the notion is actually quite simple.
We just imagine that we have a function called the transcript generation oracle which, given the public key and the secret key, can generate transcripts for the identification scheme. This is quite simple. And now we imagine the following experiment: an adversary obtains the public key of the scheme and has access to this transcript generation oracle. It can query this oracle as many times as it wants, and then it comes up with a commitment and some state information. After it commits, we generate a random challenge and ask it to come up with a response. The adversary wins the game if this is a valid response for the commitment and challenge that were fixed. If the ID scheme is secure, this experiment should output one with negligible probability. And what we showed in that paper was that if the ID scheme is secure, then there is a very simple reduction, and the security of the signature scheme is related to that of the identification scheme — but with a Q_H factor loss, as I mentioned before. So how can we improve that? Here we end up introducing this notion of a lossy identification scheme, in which we assume that there is an alternative key generation algorithm which, instead of returning normal keys, returns lossy keys. Since for all of the schemes in our paper, lossy key generation has no secret key associated with it, we actually use a simplified notion of a lossy identification scheme, in which the lossy key generation does not output any secret key, just a public key. So, what are the properties of such schemes?
The first one is the standard completeness property, meaning a proof generated honestly should get accepted; but here we allow that the honest prover may abort every now and then and not return a valid response. So the scheme is not necessarily 1-complete — 1-complete would be the case where it never aborts — but there might be cases in which it does. Then there is the simulatability property, which says that transcripts can be generated without knowledge of the secret key. This is a simplification, because all of the schemes in our paper do not need the secret key to generate transcripts; a more general notion can be given in which you should just be able to simulate with the lossy secret key and the public key. Then we have the standard notion of key indistinguishability, meaning you should not be able to distinguish between a lossy key and a normal key. And the final one is lossiness, which means that the adversary — and here we're talking about an unbounded adversary — should not be able to break the identification scheme when the public key is lossy. What we show in the paper is that if a scheme meets this notion, then we have a reduction which is tight with respect to the key indistinguishability property. And the point here is that this is the only part where we use a computational assumption; the rest are statistical terms. So, although the reduction is not tight with respect to those terms, it is tight with respect to the only term which is computational. And comparing, we see that we no longer have the Q_H loss in the reduction. What is the idea? It's actually extremely simple: in the proof, we just use the transcripts, just like in the original proof, to simulate the signing oracle.
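The simulatability property mentioned above is essentially the standard honest-verifier zero-knowledge trick: sample the challenge and response first, then solve for a matching commitment. A sketch for a Schnorr-like scheme (toy group parameters of my own choosing, not from the talk):

```python
import secrets

# Toy group: p = 2q + 1 with q prime; g = 4 generates the order-q subgroup.
q = 1019
p = 2039
g = 4

x = secrets.randbelow(q)      # secret key (the simulator never touches it)
X = pow(g, x, p)              # public key

def simulate_transcript(X: int):
    # Pick challenge and response first, then derive the commitment
    # u = g^z * X^(-c), so that the verification g^z == u * X^c holds.
    c = secrets.randbelow(q)
    z = secrets.randbelow(q)
    u = (pow(g, z, p) * pow(X, -c, p)) % p
    return u, c, z

u, c, z = simulate_transcript(X)
# The simulated transcript passes verification, with no secret key used.
assert pow(g, z, p) == (u * pow(X, c, p)) % p
```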
But as in most proofs based on lossy primitives, at one point you just replace the public key with a lossy key. We can show very easily that the success probability changes by at most a term coming from key indistinguishability, plus a term Q_S · epsilon_S, which accounts for the simulation not being perfect, whether we are using a normal public key or a lossy public key. Then, once we are dealing with a lossy public key, we can very easily argue that the success probability of the adversary in breaking the identification scheme is at most Q_H · epsilon_L, the term that comes from the lossiness of the identification scheme. This Q_H factor is similar to the one that shows up in the earlier paper: at one point we have to guess the hash query that was used in the forgery in order to break the underlying identification scheme. But here the Q_H factor is lost only with respect to a statistical term. So now let's look at the instantiations. We don't have this first one in the paper, but, as we said, our work is a generalization of the Katz-Wang idea, and here you have a very simple protocol based on DDH. The public key is a DDH tuple, and for the commitment you simply pick another DDH tuple for which you know the randomness r. Then we get a challenge c and compute z = c·x + r, where x is the secret key, and you accept if both verification equations hold. The proof is actually quite simple. First, completeness is perfect — the protocol never aborts in this case. Simulatability follows from the zero-knowledge property of the protocol. Key indistinguishability follows directly from the DDH assumption, because you cannot tell apart a lossy key from a non-lossy key.
And lossiness is a very simple argument: if the public key is not a DDH tuple, then, given the commitment (A, B), you can show that there exists at most one challenge for which you can come up with a valid response. Because of that, lossiness follows, and epsilon_L in this case is 1/q, the inverse of the size of the group. So, what about the schemes that we actually have in the paper? The first one is based on the Girault-Poupard-Stern identification scheme and uses the short discrete log problem. Here we imagine that x is short, but not that short — the secret key has c bits — and the public key is g^x. We pick y from a larger range and simply send u = g^y to the other side, we get a random challenge e of k bits, and then we compute z = y + e·x; but unlike the original protocol, we abort if z is not in a good range. This technique was already used by Lyubashevsky in his Asiacrypt 2009 paper, if I'm not wrong, and this abort actually helps us obtain better parameters for the scheme. Then we accept if z is in the right range and the last verification equation holds. What is the idea for the proof? It's similar to what we did in the DDH case: you should not be able to distinguish between a non-lossy and a lossy key, based here on the decisional short discrete log problem. And lossiness we prove via a statistical argument: we show that once x is chosen from the bigger range, the probability that there exists a challenge for which there exists a valid response is negligible. And then we also show something similar for the other schemes — as you can see, even the pictures look the same.
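The DDH-based protocol and its lossy keys can be sketched as follows (toy parameters of my own choosing; the lossy key generator here also returns its exponents, purely so the demonstration can show that the honest response fails under a lossy key — a real lossy key has no associated secret):

```python
import secrets

# Toy group: p = 2q + 1 with q prime; g = 4 and h = 9 are squares mod p,
# hence both have prime order q.
q = 1019
p = 2039
g = 4
h = 9

def keygen():
    x = secrets.randbelow(q)
    return x, (pow(g, x, p), pow(h, x, p))       # normal pk: a DDH tuple

def lossy_keygen():
    # Lossy pk: independent exponents, so (g, h, X, Y) is NOT a DDH tuple.
    x1 = secrets.randbelow(q)
    x2 = (x1 + 1 + secrets.randbelow(q - 1)) % q   # guaranteed x2 != x1
    return (x1, x2), (pow(g, x1, p), pow(h, x2, p))

def commit():
    r = secrets.randbelow(q)
    return r, (pow(g, r, p), pow(h, r, p))       # commitment: fresh DDH tuple

def respond(x, r, c):
    return (r + c * x) % q                       # z = c*x + r

def check(pk, com, c, z):
    X, Y = pk
    A, B = com
    # Accept iff both verification equations hold.
    return (pow(g, z, p) == (A * pow(X, c, p)) % p and
            pow(h, z, p) == (B * pow(Y, c, p)) % p)

# Completeness with a normal key:
x, pk = keygen()
r, com = commit()
c = secrets.randbelow(q)
assert check(pk, com, c, respond(x, r, c))

# Lossiness in action: under a lossy key, for any nonzero challenge the
# "natural" response satisfies the g-equation but breaks the h-equation.
(x1, x2), lossy_pk = lossy_keygen()
r, com = commit()
c = 1 + secrets.randbelow(q - 1)                 # nonzero challenge
assert not check(lossy_pk, com, c, respond(x1, r, c))
```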
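The abort step in the short-discrete-log scheme is plain rejection sampling: z = y + c·x would leak information about the short secret x near the edges of its range, so z is only released when it falls in an interval attainable for every possible secret. A toy sketch over the integers (the parameter sizes are mine, just to show the mechanism, not the scheme's actual parameters):

```python
import secrets

# Toy ranges (illustrative only): the secret x is "short", y is much larger.
X_MAX = 2**10            # secret key range:  x in [0, X_MAX)
C_MAX = 2**7             # challenge range:   c in [0, C_MAX)
Y_MAX = 2**25            # masking range:     y in [0, Y_MAX)
SAFE_LO = C_MAX * X_MAX  # any z in [SAFE_LO, Y_MAX) is reachable for every x

def response(x: int, y: int, c: int):
    z = y + c * x
    # Abort (return None) unless z lies in a range attainable for *every*
    # possible secret; accepted z's are then distributed independently of x.
    if SAFE_LO <= z < Y_MAX:
        return z
    return None

x = secrets.randbelow(X_MAX)
accepted = []
for _ in range(1000):
    y = secrets.randbelow(Y_MAX)
    c = secrets.randbelow(C_MAX)
    z = response(x, y, c)
    if z is not None:
        accepted.append(z)

# With these parameters the abort probability is at most about
# 2 * C_MAX * X_MAX / Y_MAX, i.e. under 1%, so most runs are accepted.
assert len(accepted) > 900
assert all(SAFE_LO <= z < Y_MAX for z in accepted)
```

Choosing Y_MAX much larger than C_MAX · X_MAX keeps the abort rate low; the talk's point is that allowing some aborts lets you shrink Y_MAX and thus get better parameters.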
There is another one based on the subset sum problem, which has the same characteristics as the one based on the short discrete log, and in the proceedings we also talk about the ones based on lattices and ring-LWE. So, just to summarize: in this paper we extended the results of Katz and Wang to other settings. In particular, we gave protocols based on the decisional short discrete log, ring-LWE, and subset sum problems, with tight generic proofs via what we called lossy identification schemes. And actually, it seems that our security proof also holds in the quantum-accessible random oracle model, because our reductions are history-free. I discussed that with Mark, and he said, yeah, it should work — but we didn't check all the details, so blame him if it's not a valid statement. And that concludes my talk.