Hi, my name is Denis Diemert and I'm going to talk about our work "More Efficient Digital Signatures with Tight Multi-User Security". This is joint work with Kai Gellert, Tibor Jager and Lin Lyu. I assume that most of you are familiar with the concept of digital signatures, so I would like to start this talk by giving you an overview of our work.

In our work, we construct tightly secure signatures in the multi-user setting with adaptive corruptions. This setting reflects the security requirements we have in many applications, where digital signatures are used as a building block, much more directly than the usually considered single-user setting. These applications include, for example, authenticated key exchange, where digital signatures are used to authenticate protocol messages. Our construction is the first generic construction of this kind, and it is based on lossy identification schemes and sequential OR-proofs. Signatures from sequential OR-proofs were originally proposed by Abe et al. at Asiacrypt 2002 and further studied by Fischlin et al. at Eurocrypt 2020, and we build upon their work. Our signature scheme is the first tightly multi-user secure signature scheme with adaptive corruptions that is strongly unforgeable, which, especially in the context of authenticated key exchange, gives us a very strong notion of authentication called matching conversations. Besides that, we were able to refine the construction considered by Abe et al. and Fischlin et al. to shorten the signatures. Concretely, this means that when instantiated with DDH, our signatures consist of only three Z_q elements, where q is the order of the Diffie-Hellman group. All of these properties, strong unforgeability and short signatures, make our construction a perfect candidate to instantiate tightly secure authenticated key exchange.

After this short overview, I would like to get to the details of our work, starting with tightly multi-user secure signatures. To that end, I will first talk about the concept of tight security, and then about multi-user secure signatures.

Before we can talk about tight security, we need to talk about cryptographic reductions. When we want to prove the security of some scheme Pi, let's say a signature scheme, we first of all need to define a security model. Then we usually pick some problem P, which is assumed to be hard, and show a statement of the form: if the problem P is hard, then our scheme Pi is secure. In the proof of this statement, what we usually do is take an adversary A breaking our scheme Pi and construct from this adversary an algorithm R, which we call a reduction, that solves our problem P. This works as depicted here on the slide: the reduction gets an instance of P, simulates the security experiment for the adversary A, and in the end takes the output of A and tries to extract a solution to the problem from that output. If we now have an adversary A with success probability epsilon, then we get a reduction R with success probability epsilon divided by L, where L is called the security loss. Keep the security loss in mind; it will become important later. Asymptotically, the security loss does not really play a big role, because as long as it is upper bounded by some polynomial, epsilon/L is non-negligible whenever epsilon is, so a successful adversary still yields a successful reduction, and our scheme is considered secure.
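Just to fix notation, here is the relation from the slide written out, with epsilon denoting success probability, t running time, and L the security loss:

```latex
% Relation between a reduction R and the adversary A it runs;
% L is the security loss of the reduction.
\varepsilon_R \;\geq\; \frac{\varepsilon_A}{L},
\qquad t_R \approx t_A .
```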
But when we want to use our security proof to choose parameters that are backed up by this proof, we need to take this loss into account, because in general a larger security loss means that we get weaker security guarantees from our security proof. This means that we need to choose a harder instance of our problem to base our security on, and choosing a harder instance means choosing larger parameters, for example a larger group. A larger group in turn means a much less efficient deployment of our scheme, because we need it to compensate the loss.

But what is now a tight reduction? We say that a reduction R is tight if it runs in about the same time as the adversary; this is just to make the reduction somewhat meaningful. More importantly, we require the following relation between the reduction and the adversary: the advantage of the reduction is at least the advantage of the adversary divided by L, the security loss, and this security loss should be small. Preferably, the security loss is a small constant that is independent of the adversary. Intuitively, since the security loss of a tight reduction is small, we don't need to compensate anything, so we can choose our parameters optimally. In turn, we get an optimal balance between security and efficiency: we neither neglect security in favor of efficiency by not compensating the security loss, nor neglect efficiency in favor of security by compensating it.

Now that we know what a tight reduction is, I would like to go on and talk about multi-user secure signatures. To start with, I would like to recap the standard security notion for digital signatures, which is existential unforgeability under chosen-message attacks, short EUF-CMA, and which I will mostly refer to as single-user security in this talk. On the slide here, you see the standard security experiment: the adversary receives the public key of the signature scheme as input and then gets the opportunity to query for signatures on messages of its choice, signed under the secret key that corresponds to pk. The adversary can query for as many signatures as it wants. In the end, it outputs a message-signature pair (m*, sigma*), and we say that the adversary wins if (m*, sigma*) is valid under the public key and the adversary did not query a signature for m*, to exclude trivial attacks. If you prefer pseudocode, here is a small sketch of this experiment.
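This is a minimal sketch of the single-user EUF-CMA experiment, assuming the signature scheme is given as three callables; all names here are illustrative, not taken from the paper.

```python
# Single-user EUF-CMA experiment: the adversary sees pk and a signing
# oracle, and must forge a signature on a message it never queried.
def euf_cma_experiment(keygen, sign, verify, adversary):
    pk, sk = keygen()
    queried = set()          # messages the adversary asked to be signed

    def sign_oracle(m):
        queried.add(m)
        return sign(sk, m)

    m_star, sigma_star = adversary(pk, sign_oracle)
    # A wins if the forgery verifies and m* was never queried (no trivial wins).
    return verify(pk, m_star, sigma_star) and m_star not in queried
```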
So how do we get a multi-user variant of this? The notion that we will end up with is called multi-user existential unforgeability under chosen-message attacks with adaptive corruptions, which again I will mostly refer to as multi-user security.

Okay, so on the slide, you again see the single-user experiment. To transform it into a multi-user experiment, we first of all have a number of public keys instead of only a single one; in this example, capital N many, so we basically have N users in our experiment. Now that we have N public keys, we also have N secret keys, so the adversary needs to tell the experiment for which of the users it wants to see a signature. This is done by adding an additional parameter u, which basically says: I want to see a signature on message m_i under secret key sk_u.

Another thing that only becomes interesting in the multi-user setting is adaptive corruptions. This means that the adversary gets the opportunity to query for the secret keys of a subset of the users: if it queries for a user identifier u_j, it gets the secret key sk_{u_j} in response. In the end, similar to the signing queries, the adversary of course needs to say for which user its forgery attempt should be valid, so we add another parameter there as well. The winning condition is quite similar: we say that the adversary A wins if (m*, sigma*) is valid under pk_{u*}, A did not query a signature for m* under secret key sk_{u*}, and A did not query for sk_{u*}. The small sketch below summarizes this multi-user experiment.
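Here is the same kind of sketch for the multi-user experiment with adaptive corruptions; again, all names are illustrative.

```python
# Multi-user EUF-CMA experiment with adaptive corruptions: the adversary
# names a user u for each query and for its forgery, and may corrupt
# users to learn their secret keys.
def mu_euf_cma_corr_experiment(keygen, sign, verify, adversary, n_users):
    keys = [keygen() for _ in range(n_users)]       # (pk, sk) per user
    pks = [pk for pk, _ in keys]
    queried = set()                                  # pairs (u, m)
    corrupted = set()                                # corrupted user indices

    def sign_oracle(u, m):
        queried.add((u, m))
        return sign(keys[u][1], m)

    def corrupt_oracle(u):
        corrupted.add(u)
        return keys[u][1]

    u_star, m_star, sigma_star = adversary(pks, sign_oracle, corrupt_oracle)
    # A wins if the forgery verifies under pk_{u*}, m* was not queried for
    # user u*, and user u* was never corrupted.
    return (verify(pks[u_star], m_star, sigma_star)
            and (u_star, m_star) not in queried
            and u_star not in corrupted)
```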
Some of you might already know, or just noticed while looking at the security experiment, that single-user security implies multi-user security. The reduction is basically a straightforward guessing argument: we guess the user u-hat for which the adversary outputs a forgery and simulate all the other users ourselves. The problem is that this reduction is only successful if it guesses u-hat correctly, that is, if the adversary in the end really forges for user u-hat. This means that the advantage of the reduction is only at least 1/N times the advantage of A. And if you remember the security loss from before, you see that this reduction is not tight, because we have a loss L that is linear in the number of users N. If you now think, for example, of authenticated key exchange, where the number of users is quite high, this loss is really huge.

So the natural question is how to get rid of this loss. This question is not that easy to answer, because to avoid the guessing, and therefore the loss, we need to resolve a seemingly paradoxical situation: our reduction now needs to satisfy two important properties. First, the reduction needs to know the secret keys of all users at any time, to answer the adaptive corruption queries of the adversary. Second, the reduction needs to be able to extract a solution to the underlying assumption from a forgery while knowing the secret key of the corresponding instance, which seems contradictory. In fact, Bader et al. at Eurocrypt 2016 showed that for certain signature schemes it is impossible to achieve tight multi-user security with adaptive corruptions under non-interactive assumptions. But we wouldn't be here if there were no solution, so let's have a look at it in the next part.

So let's come to our construction. Before we can talk about the actual construction, we need some basics. The first ingredient is lossy identification schemes, introduced by Abdalla et al. at Eurocrypt 2012. Syntactically, a lossy identification scheme is just a standard identification scheme: we have a prover that holds a secret key and a verifier that holds a public key. The prover computes a commitment and sends it over to the verifier. The verifier chooses a challenge uniformly at random and sends it over to the prover, and the prover finally computes a response from the secret key, the commitment, and the challenge, and sends it over to the verifier. The verifier outputs 1 if the transcript (commitment, challenge, response) is valid under the public key, and 0 otherwise.

A lossy identification scheme is an identification scheme with certain additional properties. The most important one is lossiness. Lossiness means that we have an alternative lossy key generation algorithm which produces only a public key instead of a public key/secret key pair, and for lossy public keys it is impossible to make the verifier accept in the interactive protocol, where the commitment has to be fixed before the challenge is chosen. Another important property is that normal public keys are indistinguishable from lossy ones. Lossy identification schemes have a number of further properties; just to mention them: completeness, which basically just means that honestly generated transcripts are valid; simulatability, which means that we can produce valid transcripts without having a secret key; and uniqueness, which will be our main tool to achieve strong unforgeability, but which is not something I want to talk about today.

We also use another property for our refinement of the construction, which is commitment recoverability. Commitment recoverability intuitively means that there is an algorithm, called Sim, that on input a public key, a challenge, and a response outputs a commitment such that (commitment, challenge, response) is a valid transcript under the public key. So, given a challenge and a response, we can recover the commitment. To make this concrete, a toy sketch of the standard DDH-based instantiation follows below.
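Here is a toy sketch of the standard DDH-based lossy identification scheme, roughly in the style of the Abdalla et al. instantiation. The tiny group (the order-11 subgroup of Z_23^*) and all function names are purely illustrative assumptions; a real deployment would of course use a cryptographically large group.

```python
import secrets

# Toy parameters: subgroup of prime order Q = 11 in Z_23^*, generators G, H.
P, Q, G, H = 23, 11, 2, 3

def keygen(lossy=False):
    x = secrets.randbelow(Q)
    if lossy:
        # Lossy key: the two halves use *different* exponents, so no secret
        # key exists; under DDH this is indistinguishable from a normal key.
        y = (x + 1 + secrets.randbelow(Q - 1)) % Q
        return (pow(G, x, P), pow(H, y, P)), None
    return (pow(G, x, P), pow(H, x, P)), x

def commit():
    r = secrets.randbelow(Q)
    return (pow(G, r, P), pow(H, r, P)), r      # commitment, prover state

def respond(sk, r, ch):
    return (r + ch * sk) % Q                    # response z = r + c*x mod q

def verify_transcript(pk, cmt, ch, resp):
    u, v = pk
    return (pow(G, resp, P) == cmt[0] * pow(u, ch, P) % P and
            pow(H, resp, P) == cmt[1] * pow(v, ch, P) % P)

def recover_commitment(pk, ch, resp):
    # Commitment recoverability ("Sim"): given (ch, resp), compute the
    # unique commitment that makes the transcript verify.
    u, v = pk
    return (pow(G, resp, P) * pow(u, -ch, P) % P,
            pow(H, resp, P) * pow(v, -ch, P) % P)
```

A normal key uses the same exponent x in both halves, so the single response z verifies against both; a lossy key breaks this correlation, which is exactly where the lossiness comes from, and DDH says the two kinds of keys are indistinguishable.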
So what is now the intuition of our construction? How do we resolve this paradox and achieve tight multi-user security? The basic idea that we use here is a double signature, which was already used by Bader et al. at TCC 2015 to achieve tight multi-user security. The main intuition behind it is that we have a signature that consists of a real component, for which we know a secret key and which we use for simulation, and a fake component, which we can use for extraction. The foundation of our scheme is signatures from lossy identification schemes by Abdalla et al., which is basically just the Fiat-Shamir transform applied to a lossy identification scheme.

So next, let's look at our construction. The first attempt to achieve such a double signature is to simply use two independent Fiat-Shamir signatures. But this construction has two problems. The first one: which of the two components is now the fake component? They are basically two real signatures, so how do we identify a fake component? The second and more important problem is that this construction is not secure at all, because the adversary can simply query two signatures on the same message, mix up the two components, and with high probability obtain a new signature, which is then a valid forgery.

To overcome this issue, Abe et al., with their sequential OR-proof technique, made the two branches correlated: instead of computing challenge 0 from commitment 0, they compute challenge 0 from commitment 1, and challenge 1 from commitment 0. This makes the two branches dependent, and an adversary can no longer simply mix up two independent signatures. But now we have another problem, namely that this does not resolve the simulation/extraction paradox we had before, because we still have two secret keys, and upon corruption we need to give out both. What would be desirable is to have only one secret key, for the real component, and no secret key for the fake component.

So let's assume that we have a public key that consists of (pk_0, pk_1) and only one secret key, let's say corresponding to pk_0. This is now our real key pair (pk_0, sk_0). Since we have sk_0, we can compute commitment 0, and from commitment 0 we can compute challenge 1. Now, if you remember what I told you before, lossy identification schemes are simulatable. So what we can do is simply choose response 1 uniformly at random and then use our commitment recovery algorithm to recover commitment 1. By the properties of commitment recoverability, this is a valid transcript, and it forms our fake component. Now that we have commitment 1, we can compute challenge 0 and compute the rest of the transcript, which then forms our real component.

There is only one problem left, namely that the adversary always knows which of these components is the real one and which is the fake one. To overcome this, what we simply do is generate two key pairs (pk_0, sk_0), (pk_1, sk_1) in the beginning, choose one of the secret keys uniformly at random, and discard the other one, so we only keep sk_b. This determines our real component: branch b is always the real component and branch 1-b is always the fake component. The construction that you see here on the slide is exactly the construction that was considered by Abe et al. and Fischlin et al.

And now we observed that this construction can be refined. Namely, we were able to shorten the signatures from (commitment 0, commitment 1, response 0, response 1) to (challenge 0, response 0, response 1). This is reflected in the verification algorithm, which I briefly want to present as well. Because we don't include the commitments in the signature, but assume that our lossy identification scheme is commitment-recoverable, we can simply recover the commitments during the verification process. Therefore, we only need to include one of the challenges, because this challenge is the starting point of the following chain: we start by taking challenge 0 and response 0 to recover commitment 0. From commitment 0 we compute challenge 1, which, together with response 1 that is still contained in the signature, lets us recover commitment 1. Commitment 1 gives us a challenge 0'. If challenge 0' is equal to the challenge 0 contained in the signature, we say that the signature is valid. Even though we don't use the verification algorithm of the lossy identification scheme during verification, the scheme is perfectly correct; this basically follows from the simulatability of the lossy identification scheme. To wrap this up, below is a compact sketch of the whole scheme, built on the toy lossy identification scheme from before.
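This sketch reuses keygen, commit, respond, recover_commitment, and Q from the toy lossy identification scheme above. The exact hash inputs (the branch index and the public keys) are my own simplification for illustration, not the paper's precise formatting.

```python
import hashlib
import secrets

def Hash(*parts):
    # Random oracle mapped into Z_Q (illustrative toy hash).
    data = "|".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def sig_keygen():
    # Generate two key pairs, flip a bit b, keep only sk_b (the real branch).
    (pk0, sk0), (pk1, sk1) = keygen(), keygen()
    b = secrets.randbelow(2)
    return (pk0, pk1), (b, [sk0, sk1][b])

def sig_sign(pks, sk_pair, m):
    b, sk = sk_pair
    cmt, ch, resp = [None, None], [None, None], [None, None]
    cmt[b], r = commit()                               # real commitment
    ch[1 - b] = Hash(1 - b, pks, cmt[b], m)            # ch_{1-b} from cmt_b
    resp[1 - b] = secrets.randbelow(Q)                 # simulated response
    cmt[1 - b] = recover_commitment(pks[1 - b], ch[1 - b], resp[1 - b])
    ch[b] = Hash(b, pks, cmt[1 - b], m)                # ch_b from cmt_{1-b}
    resp[b] = respond(sk, r, ch[b])                    # real response
    return (ch[0], resp[0], resp[1])                   # three Z_q elements

def sig_verify(pks, m, sig):
    ch0, resp0, resp1 = sig
    cmt0 = recover_commitment(pks[0], ch0, resp0)      # recover commitment 0
    ch1 = Hash(1, pks, cmt0, m)                        # ch_1 from cmt_0
    cmt1 = recover_commitment(pks[1], ch1, resp1)      # recover commitment 1
    return Hash(0, pks, cmt1, m) == ch0                # ch_0' must equal ch_0
```

Note that signing is completely symmetric in b, so the signature (ch_0, resp_0, resp_1) does not reveal which branch is real, and it indeed consists of three Z_q elements.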
Next, I would like to talk about security. Fischlin et al. at Eurocrypt 2020 showed that the construction I showed you before is tightly single-user secure in the non-programmable random oracle model. We observed that this can be lifted to the multi-user setting, and we show that the construction, in our refined form, satisfies multi-user strong existential unforgeability under chosen-message attacks with adaptive corruptions. In our proof, we preserve both the tightness of the reduction and the setting of the non-programmable random oracle model. We achieve this by having a real and a fake component, as already outlined above, that are indistinguishable from the view of the adversary for any user. This gives us that, in the end, the adversary outputs a forgery for the fake component with probability one half, and thus we can construct a tight reduction, with constant security loss, to the lossiness of the lossy identification scheme. Strong unforgeability is something that we get from the uniqueness property of the lossy identification scheme, similar to the original construction by Abdalla et al. But this is something I don't want to talk about today; for details, please look into our paper.

Next, I would like to compare our scheme with existing tightly multi-user secure signatures. There are a couple of tightly multi-user secure signatures, but to the best of our knowledge only two that are multi-user secure with adaptive corruptions: Bader et al. (TCC 2015) and Gjøsteen-Jager (Crypto 2018). The construction of Bader et al. was the first one satisfying this notion, and they even introduced the notion, in the context of constructing the first tightly secure authenticated key exchange protocol. Their construction is in the standard model and pairing-based. However, the signatures are rather large, so the scheme is rather impractical. There is an almost tight variant with shorter signatures, but, as I was informed by one of the authors, there is a flaw in its proof, so it falls outside our comparison. The Gjøsteen-Jager construction is also based on OR-proofs, but on the parallel OR-proofs of Cramer et al. (Crypto 1994). It requires a programmable random oracle, but it is very efficient with respect to signature and public key size.

To give you some numbers: in the first row you see the TCC 2015 construction. Its signatures are rather large, namely linearly many group elements in the security parameter, but it has constant-size keys and a constant security loss. The Gjøsteen-Jager construction has much shorter signatures and very short public keys, only two group elements, but requires a programmable random oracle. In our scheme, we were able to reduce the signature size to only three Z_q elements. We have slightly larger keys than the Gjøsteen-Jager construction, but in exchange we have strong unforgeability and we do not require a programmable random oracle.

I already said in the introduction that our construction is a perfect candidate to instantiate tightly secure authenticated key exchange, and now I want to say a little more about this. Tightly multi-user secure signatures with adaptive corruptions are the main building block of tightly secure authenticated key exchange, because they reflect exactly what we need from a signature scheme in authenticated key exchange. And tight security is something that is particularly interesting for authenticated key exchange due to its large-scale use: think, for example, of TLS and the huge number of users communicating there; if we have a loss there, it will be huge. We show in our paper (for more details, please see Table 2 there) that the communication complexity of all recently proposed tightly secure authenticated key exchange protocols reduces significantly when they are instantiated with our signature scheme.

To sum everything up: we construct the first strongly unforgeable, and currently most efficient, tightly multi-user secure signature scheme with adaptive corruptions. Our construction is therefore a perfect candidate for instantiating tightly secure authenticated key exchange, because strong unforgeability gives us a strong authentication notion for the key exchange, in the sense of matching conversations, and short signatures give us an overall efficient key exchange. Thank you very much for watching, and if you want to know more about our work, please consider reading our paper; the link is on the slide.