So the next talk is on the impossibility of tight cryptographic reductions, by Christoph Bader, Tibor Jager, Yong Li and Sven Schäge, and Tibor is going to give the talk. Thanks for staying so long for the last talk. So yeah, I'd like to start this talk by explaining, or maybe recapitulating, what we mean by tight reductions and cryptographic security proofs. What we usually do when we prove security of a cryptosystem, like for example a digital signature scheme, is that we first describe a security model. For this example here, we have an adversary which receives as input a public key, is then allowed to output messages, receives back signatures for these messages, and it wins this experiment, or breaks the security of the scheme, if it is able to output a valid message-signature pair for a message it did not query. And what we then do in order to prove that a given signature scheme is secure in this sense is that we describe a reduction. This reduction is an algorithm which takes the adversary and uses it as a subroutine to solve some computational problem which is assumed to be hard. Now we say that the reduction is tight if, essentially, the running time and the success probability of the reduction are about equal to the running time and the success probability of the adversary. And "about equal" means here that they should be identical up to a small constant factor. Why is it interesting to consider tight security? First of all, it's an interesting theoretical question. We want to develop techniques which allow us to get tight security proofs for given constructions, and there are many examples of constructions for which we do not know any tight security proof, and then it's interesting to understand the reason. So maybe there are inherent reasons that we cannot overcome; maybe it's sometimes impossible to get tight security. And the second motivation is more practical. 
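The single-user experiment just described can be sketched in code. This is only a minimal illustration, not the paper's formalism: the "scheme" below is a toy stand-in (an HMAC with pk = sk, so it is emphatically not a real public-key signature scheme), and all function names are made up for this sketch.

```python
import hmac
import hashlib
import os

# Toy "signature" scheme used only to make the experiment executable.
# pk = sk here, so this is NOT a real public-key scheme -- illustrative only.
def keygen():
    sk = os.urandom(16)
    return sk, sk  # (pk, sk)

def sign(sk, msg):
    return hmac.new(sk, msg, hashlib.sha256).digest()

def verify(pk, msg, sig):
    return hmac.compare_digest(sign(pk, msg), sig)

def euf_cma_experiment(adversary):
    """Single-user unforgeability game from the talk: the adversary sees pk,
    may query signatures adaptively, and wins by outputting a valid
    message-signature pair on a message it never queried."""
    pk, sk = keygen()
    queried = set()

    def sign_oracle(msg):
        queried.add(msg)
        return sign(sk, msg)

    msg, sig = adversary(pk, sign_oracle)
    return msg not in queried and verify(pk, msg, sig)

# An adversary that merely replays an oracle answer -- it always loses.
def replay_adversary(pk, sign_oracle):
    m = b"hello"
    return m, sign_oracle(m)
```

Running `euf_cma_experiment(replay_adversary)` returns `False`, since the forgery message was queried; an adversary exploiting the toy pk = sk property would win.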
So if we want to instantiate a cryptosystem in practice, then at some point we have to decide on cryptographic parameters, like the size of moduli, the size of algebraic groups, and so on. And if we want to do this in a theoretically sound way, so in a way which respects the security guarantees that we get from our security reductions, then we have to take the tightness of the reduction into account. And this means basically that if we have a non-tight reduction, then we have to choose relatively large parameters, whereas if we have a tight security reduction, then we can choose small, or essentially optimal, parameters. There are many examples of tightly secure cryptosystems from all different domains of cryptography: we have digital signatures, IBE, pseudorandom functions, key exchange, and so on. And in particular, the best paper award of this conference goes to a paper which constructs public-key encryption which is tightly secure. So I guess one can say that tight security is an important research topic in cryptography. And there are also many examples of cryptosystems for which we are not able to prove tight security. And then it's of course interesting to understand the reasons why, or what is the difference between tightly secure constructions and non-tightly secure constructions, and which properties the cryptosystem must have, or not have, in order to allow for a tight security proof. A seminal result in the domain of tightly secure crypto is due to Coron. Coron considered digital signatures in a single-user setting. Single-user setting means that the attacker receives as input a single public key. And he considered so-called unique signatures, where unique signatures means essentially that for each message there is only one unique signature which is accepted by the verification algorithm. 
And in the context of Coron's paper, one should always mention the follow-up paper by Kakvi and Kiltz from Eurocrypt 2012, which corrects a subtle issue in this work. So what Coron does is he considers the security experiment for signatures and he essentially shows that if a signature scheme satisfies this property that it has unique signatures, then it is impossible to get a tight security proof for the scheme. More precisely, any security reduction which turns an adversary in this sense into an algorithm which solves a computationally hard problem has to lose a factor of at least one over Q, where Q is the number of signatures requested by the adversary. This result is proven as follows. Coron assumes the existence of a tight reduction, and he shows that the existence of such a reduction implies an algorithm called the meta-reduction. So what we are doing here is we kind of apply a reduction against a reduction, and this meta-reduction is able to solve the computational problem on its own, without resorting to an adversary. The difficult task for the meta-reduction is that it has to simulate all oracles expected by the reduction. So in particular, it has to simulate the adversary. The key idea behind Coron's result is to develop a technique which allows an efficient simulation of the adversary without breaking any cryptographic hardness assumption. More precisely, what Coron shows is that if a signature scheme has unique signatures, then any reduction, regardless of whether it's tight or not, implies an algorithm M which solves a computational problem P in about the same running time as the reduction, and with a success probability epsilon_M which is at least the success probability of the reduction minus one over Q. This basically implies that the success probability of the reduction, epsilon_R, cannot be significantly larger than one over Q. So the reduction loses a factor of Q. And this is only part of the story. 
So what Coron actually shows is that this inequality here holds. We have an additional term here, which I'm going to call the annoying term in the rest of this talk. Okay, for Coron's result this term is actually not very annoying, because if you consider signatures in a single-user setting, then Q, the number of signature queries, is polynomially bounded, so it's very small, and the message space is exponentially large. So everything inside this red box here is very close to one. So basically we get more or less this result here, up to a small difference which is incurred by this last annoying term. There are some limitations of Coron's technique. The first one is that it's not able to consider arbitrary reductions, but has to consider a restricted class of reductions. Basically, he looks at reductions which treat the adversary as a black box by just executing it, and there are only a few advanced capabilities for the reduction. So the reduction is allowed to run the adversary several times sequentially, but, for instance, it is not allowed to run two copies of the adversary in parallel. This is a restriction, but it's not a very strong limitation, because most reductions that we know in cryptography satisfy these conditions. The second, minor, limitation of Coron's technique is that the analysis is relatively complex. The most important limitation of Coron's technique is this annoying term here, because as I have explained on the previous slide, this term makes the techniques of Coron only applicable in settings where Q, or whatever replaces Q in a different setting, is much smaller than the size of the message space, or an equivalent of it. This is acceptable if you consider digital signatures in a single-user setting, as done by Coron, Kakvi-Kiltz, and other papers, but it makes it very difficult to apply these techniques to different settings. 
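The bound being discussed can be written schematically as follows. The exact shape of the boxed "annoying term" is in the paper; here it is abbreviated as a placeholder B, which is my notation for this sketch, not the paper's:

```latex
% Schematic form of the meta-reduction bound (B stands for the boxed
% "annoying term" on the slide; its exact shape is in the paper):
\varepsilon_M \;\ge\; \varepsilon_R \;-\; \frac{1}{Q}\cdot B .
% In Coron's single-user setting, Q is polynomially bounded and the message
% space is exponentially large, so B \approx 1, and the bound yields
% \varepsilon_R \lesssim \varepsilon_M + 1/Q,
% i.e. a security loss of a factor Q is inherent.
```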
One particular setting that we would like to consider is the more realistic notion of multi-user security for signatures, which is closer to the real world. In this case, we have an adversary which receives as input not a single public key, but a list of public keys, PK_1 to PK_N. And still, the goal of the attacker is to output a message-signature pair. As in the standard single-user experiment, the attacker is allowed to issue signature queries. So in this case, it would output a message along with an index J, which points to one of the public keys, and it receives back a signature which is computed using the secret key that belongs to public key PK_J. And second, we want to allow the attacker to adaptively corrupt users, which means that the attacker may output an index J and receives back the secret key SK_J which belongs to this public key. And if we want to have meaningful tightness results for practice, for real-world applications, then we actually want tightness in a security notion of this type. So we want tightness in both the number of signatures Q and also in the number of public keys N that the adversary sees. Now, usually it is considered sufficient to work in the single-user setting, which is much simpler, and it's much easier to write proofs in this setting, because single-user security is known to imply multi-user security. But the reduction is not tight; it loses a factor of 1 over N, essentially because the reduction has to guess the public key for which the adversary outputs a valid signature at the end. So what happens if we try to prove that this 1 over N security loss is inherent and cannot be avoided? A natural approach would be to apply Coron's technique, and what we want to show using this technique is that we have this inequality here: epsilon_M is lower bounded by epsilon_R minus 1 over N. That's what we would like to show. What we get using Coron's result is this here. 
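The multi-user experiment with adaptive corruptions can be sketched as follows. Again the scheme is a toy stand-in (pk = sk, MAC-based "signatures"), and all names are illustrative, not the paper's notation.

```python
import hmac
import hashlib
import os

# Toy scheme, for illustration only (pk = sk, so not a real signature scheme).
def keygen():
    sk = os.urandom(16)
    return sk, sk  # (pk, sk)

def sign(sk, msg):
    return hmac.new(sk, msg, hashlib.sha256).digest()

def verify(pk, msg, sig):
    return hmac.compare_digest(sign(pk, msg), sig)

def multi_user_experiment(adversary, n):
    """Multi-user game with adaptive corruptions: the adversary sees n public
    keys, may request signatures under any key and corrupt any user, and wins
    with a fresh forgery under an uncorrupted key."""
    keys = [keygen() for _ in range(n)]
    pks = [pk for pk, _ in keys]
    signed = set()      # (j, msg) pairs answered by the signing oracle
    corrupted = set()   # indices whose secret key was revealed

    def sign_oracle(j, msg):
        signed.add((j, msg))
        return sign(keys[j][1], msg)

    def corrupt_oracle(j):
        corrupted.add(j)
        return keys[j][1]

    j, msg, sig = adversary(pks, sign_oracle, corrupt_oracle)
    return (j not in corrupted and (j, msg) not in signed
            and verify(pks[j], msg, sig))
```

An adversary that corrupts user J and then "forges" under that same key does not win, which is exactly the freshness condition of the model.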
Now we have this additional annoying term, and this time it is particularly annoying, because if you look at it for a second, then you can see that everything inside this red box here is equal to N. So we get 1 over N times N is equal to 1, so the lower bound that we get is epsilon_M is larger than epsilon_R minus 1, which is trivial. It's completely pointless to have this result. And we want to overcome this limitation. So our goal is to prove that the security loss of 1 over N is impossible to avoid, and to achieve this, we proceed as follows. First of all, we define a weaker security notion than the multi-user security setting that I've just shown you. This is somewhat counterintuitive, because we want to prove an impossibility result, and the weaker the security notion is, the harder it should become to prove that this security notion is impossible to achieve with a tight security reduction. But it turns out that this weaker security notion is extremely helpful to develop a new meta-reduction technique which gets rid of this annoying term. So in a sense we've made our life harder, but still we're able to get a useful result. And the reason is that this weakness of the security definition provides exactly the leverage that we need to prove our results. And finally, given that we have this new meta-reduction technique, which is much simpler than previous techniques, we can generalize it to consider many other settings. So first of all, the weaker security definition looks like this. As before, we have an adversary which receives as input N public keys, and the adversary is still allowed to corrupt users, but not adaptively. So after seeing the list of public keys, the adversary has to pick one index J, outputs this index J, and receives back all secret keys for the public keys PK_I where I is not equal to J. 
And second, the adversary does not have to compute a signature, or forge a signature, but it has to compute the secret key which belongs to public key PK_J. This is an extremely weak security experiment for signatures, which should be very easy to achieve in practice. But as I've already explained very briefly, if we can show that there is no tight security proof for this very weak security model, then it immediately implies that we cannot achieve tight security in any stronger notion, in particular in the strong multi-user security sense that I showed you a couple of slides before. A second thing that you may have noticed here is that the security experiment is not specific to digital signatures anymore, but it makes sense for any public-key scheme, like public-key encryption, identity-based encryption, and so on. So in a sense, we have already generalized our results a bit. Second, we developed this new meta-reduction technique. First of all, the result of this technique: we get rid of this annoying term; we consider exactly the same class of reductions as Coron, so reductions that treat the adversary as a black box and have the same few capabilities as in Coron's technique; but a big advantage of this technique is that it's much simpler to analyze. The intuition behind this bound is as follows. Intuitively, we want to prove that in the moment when the reduction outputs this list of public keys, it has committed to a single index J, such that it is able to output all secret keys that belong to the public keys except for public key PK_J, and at the same time it is able to take the secret key SK_J from the adversary and leverage this value to somehow compute a solution to the computational problem. 
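The weakened experiment just described can be sketched as follows. The key relation pk = sk² over the integers is a toy stand-in for illustration (trivially invertible, not a hard problem), and the adversary interface is invented for this sketch.

```python
import math
import os

# Toy key generation: pk = sk**2 over the integers. Illustrative only --
# this relation is trivially invertible and not cryptographically hard.
def keygen():
    sk = int.from_bytes(os.urandom(4), "big")
    return sk * sk, sk  # (pk, sk)

def weak_multi_user_experiment(adversary, n):
    """Weak game from the talk: the adversary sees n public keys, commits
    to ONE index j (non-adaptive corruption), receives every secret key
    except sk_j, and wins by outputting sk_j itself."""
    keys = [keygen() for _ in range(n)]
    pks = [pk for pk, _ in keys]
    j = adversary.choose(pks)                            # phase 1: commit to j
    others = {i: sk for i, (pk, sk) in enumerate(keys) if i != j}
    guess = adversary.recover(j, others)                 # phase 2: all sk_i, i != j
    return guess == keys[j][1]                           # win: output sk_j

class InvertingAdversary:
    """Breaks the toy scheme directly by inverting pk = sk**2."""
    def choose(self, pks):
        self.pks = pks
        return 0
    def recover(self, j, others):
        return math.isqrt(self.pks[j])
```

Note that winning requires recovering the one secret key that was withheld; receiving all the other secret keys is what makes the notion so weak, and so useful for the impossibility proof.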
And now if we look at this moment here, right after the reduction has output the public keys, then we can have two different cases. The first case is that at this point the reduction has committed to a single index J such that it is able to proceed with the experiment. In this case, the reduction cannot be tight, because if we have an adversary which simply picks J at random, then the reduction can just hope that the adversary will pick the right value to which it has committed, and otherwise it will not be able to continue. And the second case is: if the reduction is tight, we must have that there is more than one value of J for which it can proceed, and in this case the problem P cannot be hard. And what we have to prove is this second claim here. So in the rest of the talk I want to focus on the second claim. What we do is, following Coron's approach, we construct a meta-reduction. This meta-reduction works as follows. It receives as input an instance of the computational problem P, forwards this instance to the reduction, starts the reduction, and lets it run until it outputs the list of public keys. And when the reduction has output all these public keys, the meta-reduction stores the internal state of the reduction, so it takes a snapshot of the current state of the reduction. That's the first step. Then the meta-reduction iterates through all possible values of J. So it executes the reduction R starting with J equal to one, and sees if it outputs the list of all secret keys other than SK_1. Then it repeats this process with J equal to two, and so on, for all possible values. And this enables the meta-reduction to learn all secret keys, because there are at least two indices for which the reduction outputs all the other secret keys. So what we get from this computation is a list of all secret keys. And now we can use this list of secret keys in a very simple way to simulate an efficient adversary. 
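The snapshot-and-rewind step above can be sketched generically. Here the reduction is modeled as a Python object with copyable internal state; the three method names are made up for this sketch and are not the paper's interface.

```python
import copy
import random

# Generic sketch of the meta-reduction's rewinding step. `reduction` is
# modeled as an object with copyable state and three hypothetical methods
# (names invented for this sketch):
#   start(instance)   -> list of n public keys
#   continue_with(j)  -> dict {i: sk_i for i != j}, or None if the reduction
#                        cannot answer for this choice of j
#   finish(sk_j)      -> candidate solution to the problem instance
def meta_reduction(reduction, instance, n):
    pks = reduction.start(instance)
    snapshot = copy.deepcopy(reduction)   # freeze state right after the pks

    # Step 1: rewind to the snapshot for every j and collect all secret keys.
    secret_keys = {}
    for j in range(n):
        branch = copy.deepcopy(snapshot)
        answer = branch.continue_with(j)
        if answer is not None:
            secret_keys.update(answer)
    if len(secret_keys) < n:
        return None  # reduction was committed to a single j ("case 1" above)

    # Step 2: run once more from the snapshot, simulating an adversary that
    # picks j uniformly at random, answered with the keys learned in step 1.
    final = copy.deepcopy(snapshot)
    j = random.randrange(n)
    final.continue_with(j)
    return final.finish(secret_keys[j])
```

The point of the snapshot is that every rewound execution sees exactly the same public keys, so keys learned in different branches can be combined.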
So we execute the reduction once more, starting from the snapshot state that we have stored, and we simulate an adversary which simply picks an index J uniformly at random, and we receive back from the reduction the list of secret keys. And finally we are able to output SK_J, because we have received it from the reduction. And because we assume that the reduction works, it will output a solution to the problem P, and we can output it as well. So this provides a perfect simulation of a successful adversary, and therefore the reduction will produce the solution to problem P for us. Of course this doesn't work as generically as I have just explained, so we have to put some restrictions or requirements on the public-key scheme that we are considering. What we essentially need is, first, that for each public key there is one unique secret key. This holds for many examples of cryptosystems, like ElGamal encryption, Schnorr signatures, DSA signatures, and so on. And second, we need that one can efficiently verify that a given secret key belongs to a given public key, which also holds for many examples of constructions. One can generalize this to a notion that we call re-randomizable secret keys. I do not want to explain this in too much detail, but it basically means that a public key can have many corresponding secret keys, and if we have this public key and one matching secret key, we can sample uniformly at random from the set of all secret keys that match this public key. And the result that we get from this technique is that we can show that a public-key scheme, so any scheme, not only signatures but also public-key encryption and so on, that satisfies both conditions cannot have a tight security proof in the multi-user setting. So it's impossible to avoid this security loss. Okay, finally we generalize this approach, and this is, from my point of view, the most interesting part of this result. 
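The "efficiently verifiable, unique secret keys" condition is easy to picture for discrete-log style keys of the form pk = g^sk mod p, as in Schnorr or DSA. The tiny parameters below are illustrative only; a real scheme uses a large group.

```python
# Discrete-log style key relation pk = G**sk mod P, as used by Schnorr/DSA
# keys. Tiny toy parameters, illustrative only -- a real scheme uses a
# large, cryptographically chosen group.
P = 1019   # small prime
G = 2

def check_key(pk, sk):
    """One modular exponentiation decides whether sk matches pk. For such
    schemes the matching sk is also unique modulo the group order, which is
    exactly the key property the meta-reduction needs."""
    return pow(G, sk, P) == pk
```

This is the check the meta-reduction uses on each rewound run: it can tell whether the reduction answered a corruption query with the genuine secret key.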
So our goal is to make this impossibility result easily applicable to different scenarios beyond public-key crypto and the multi-user setting. What we do is we first describe a generalized experiment which condenses the required properties that we need, and then we can derive many impossibility results, for instance for multi-user security with corruptions, but also for other applications, for example digital signatures in a single-user setting as considered by Coron, Kakvi-Kiltz, and so on, and also for more applications. So in the paper we also have non-interactive key exchange, but everything also generalizes to, say, authenticated key exchange. And this generalization essentially works as follows. We consider an abstract security experiment which looks pretty much like the experiment that I just showed you. We have a relation S. S contains tuples (X, W), where X is a statement and W is a witness that this statement is in this relation. And we have a kind of security experiment for this relation. The adversary receives as input N statements, outputs an index J, and receives back witnesses for all statements X_I with I not equal to J. And finally the attacker has to output a witness W for X_J. And if we make similar requirements on this relation, so we need that it is efficient to verify that a given tuple (X, W) lies in this relation, and that witnesses are either unique or re-randomizable, then we can easily use this technique, for example, to get the result that I just showed you. So if we want to consider public-key crypto in a multi-user setting, we just define the relation such that it contains tuples consisting of public keys and corresponding secret keys. But we can also consider signatures in a single-user setting. 
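The abstract experiment can be sketched by parameterizing the game over the relation, so different instantiations (public keys/secret keys, messages/signatures) plug in through the sampler and verifier. The square relation below is a toy stand-in, and the adversary interface is invented for this sketch.

```python
import math
import os

def relation_experiment(sample_pair, verify, adversary, n):
    """Abstract game over a relation S of (statement, witness) pairs:
    the adversary sees n statements, commits to index j, receives witnesses
    for all other statements, and wins with a witness for x_j."""
    pairs = [sample_pair() for _ in range(n)]
    xs = [x for x, _ in pairs]
    j = adversary.choose(xs)                              # commit to one statement
    others = {i: w for i, (x, w) in enumerate(pairs) if i != j}
    w = adversary.recover(j, others)                      # witnesses for all i != j
    return verify(xs[j], w)                               # win: a witness for x_j

# Toy instantiation of the relation: statements are squares, witnesses
# their integer roots. Trivially invertible -- illustrative only.
def sample_square():
    w = int.from_bytes(os.urandom(2), "big")
    return w * w, w

def verify_square(x, w):
    return w is not None and x == w * w
```

Instantiating `sample_pair`/`verify` with key generation and secret-key verification recovers the multi-user experiment from before; instantiating them with fixed messages and signature verification gives the single-user signature setting.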
In this case we would define the relation such that it contains N different messages, and we can take arbitrary N messages from the message space, like the first N messages according to some order, and the witnesses would be signatures over these messages. Yeah, to summarize: we have developed new techniques to formally prove that in certain cases tight reductions are impossible to achieve. The results are stronger, because we consider this weaker security model than previous works, but the proof is also extremely simple. We have additional applications which work in cases where we do not have something like an exponential-size message space, because we do not need this annoying term, and we can also use this technique to derive criteria which allow us to check very easily whether a given scheme can have a tight security proof or not. So for example, if a scheme has unique secret keys, or unique signatures, then one cannot hope to get tight security in a multi-user setting with corruptions. And finally, using this master theorem it's very easy to adapt this technique to other settings, by just describing a suitable relation and showing that this relation satisfies the properties that we need. And that's the end of my talk. Thank you very much for your attention. Yes, of course. So, can you use the master theorem to get an impossibility result for a tight reduction in the multi-user setting, but a weaker multi-user setting where you don't get the secret key, you just attack one out of N public keys, like for a signature scheme? So first of all, this would probably be a weaker impossibility result, because the security notion that you're describing gets stronger. It would be possible if we are able to define a suitable relation, and then I guess we would have to look more closely at the particular security experiment that you have in mind. But if you can describe a suitable relation which satisfies the properties, then it's fine. Any other questions? 
So, just to follow up on that: that is why your result is not in conflict with the result tomorrow, because that result is in the static corruption model, as far as I understand. Which result are you referring to? The best paper award tomorrow. Ah, okay. I think this construction doesn't satisfy our conditions, because there is no such thing as a unique secret key. So you have this randomized element F embedded in the secret keys, and then this is... But also it's in the static model for corruption. Static model for corruption. For static corruptions. Ah, okay, wait. We have two dimensions. One dimension is the number of ciphertexts that the adversary sees, and the second dimension is the number of public keys that the adversary sees. So with respect to the number of public keys, I think, correct me if I'm wrong, that this construction does not have unique secret keys, right? Because of the re-randomization there is this element F which is somehow embedded in the secret key. We have one more question. Yeah. Since the meta-reduction runs the reduction N times, isn't then the running time of the meta-reduction N times that of the reduction? It is. Aren't they supposed to be close? So, exactly. The running time is N times the running time of the reduction. The only thing to show is that if there exists a tight reduction, then we get a polynomial-time algorithm for solving a problem for which no polynomial-time algorithm should exist. And therefore it's okay to run the reduction N times. Any other questions? Okay, let's thank Tibor again.