Hello, and welcome to this talk about the paper "Lattice-Based Signatures with Tight Adaptive Corruptions and More". This is joint work with Jiaxin Pan from NTNU. As the title already suggests, this talk will be about digital signatures in the lattice setting.

I want to start by recalling what a digital signature scheme is, which you should all be very familiar with, as it is one of the most fundamental cryptographic building blocks. In a digital signature scheme, we have a user Alice who sends a message M to a user Bob. Alice appends a digital signature sigma to that message, and Bob, who knows Alice's public key, can verify the signature using that key. This ensures authenticity and integrity of the message. Typically we think of a scenario where an adversary also knows Alice's public key and can interact with Alice: it can learn some signatures from Alice and then try to forge signatures on arbitrary messages towards Bob. We model this with the cryptographic security game of unforgeability under chosen message attacks. Here the adversary first gets a public key pk, and then it can ask for signatures on arbitrary messages of its choice. In the end, the adversary outputs a message-signature pair (M*, sigma*), and it wins the game if this is a valid message-signature pair and the message is fresh, meaning it never learned a signature for that message.

Now, this is the standard notion, but it is not quite what happens in the real world. In the real world we do not have two users; we have many users and many systems that all have their own key pairs, communicate with each other, and make use of the same system parameters. We can have an outsider adversary that just observes the situation as before, but we can also have insider adversaries that corrupt users and learn their secret key material. We should model this in our security game as well, and doing so leads to the notion of multi-user security under chosen message attacks with adaptive corruptions; in the following, I will just call this multi-user security.

In this game, the adversary first gets the public keys of N users. It can again ask for signatures on messages of its choice, but now it also specifies which user should sign each message. In addition to asking for signatures, the adversary can ask for secret keys, that is, it can corrupt users. In the end, it outputs a message-signature pair for some target user i*, and it wins the game if this is a valid message-signature pair for that particular public key, the user i* is not corrupted (so the adversary never learned its secret key), and the adversary never learned a signature on M* from that user.

This notion is closer to practical scenarios, so let us compare the two notions: the left one, single-user security, is the standard one, and the right one is multi-user security. It is relatively straightforward to see that a reduction can take the single-user public key and embed it as a random one of the N public keys; if it guesses the target index i* correctly, it wins. So we have a straightforward guessing argument showing that, asymptotically, the two notions are equivalent, meaning that if a scheme satisfies single-user security, it also satisfies multi-user security.
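As a concrete illustration of the game just described, here is a minimal Python sketch; the scheme interface (`keygen`, `sign`, `verify`) and the `adversary` callback are placeholders I introduce for illustration, not part of the paper.

```python
# Minimal sketch of the multi-user unforgeability game with adaptive
# corruptions. `keygen`, `sign`, `verify` stand in for an arbitrary
# signature scheme, and `adversary` is a callback; all are placeholders.

def mu_euf_cma_corr(keygen, sign, verify, adversary, n_users):
    keys = [keygen() for _ in range(n_users)]       # (pk, sk) pairs
    pks = [pk for pk, _ in keys]
    signed = set()        # (user index, message) pairs the sign oracle saw
    corrupted = set()     # indices of corrupted users

    def sign_oracle(i, msg):
        signed.add((i, msg))                        # messages must be hashable
        return sign(keys[i][1], msg)                # sign under sk_i

    def corrupt_oracle(i):
        corrupted.add(i)
        return keys[i][1]                           # reveal sk_i

    i_star, m_star, sigma_star = adversary(pks, sign_oracle, corrupt_oracle)

    # The adversary wins iff the forgery verifies under pk_{i*}, user i*
    # is uncorrupted, and it never asked user i* to sign m*.
    return (verify(pks[i_star], m_star, sigma_star)
            and i_star not in corrupted
            and (i_star, m_star) not in signed)
```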
This equivalence somehow justifies that the single-user notion is the standard one. But if we look at the argument in more detail, we see that the security loss of this reduction is proportional to the number of users in the system, and I will explain why we do not want that.

To do so, let's look at the typical proof we have in cryptography. We have an adversary A that breaks the scheme, and we build a reduction that uses A as a subroutine to solve some hard problem. In doing so, we establish a relation between the success probabilities of these two algorithms, say epsilon_A <= L * epsilon_R, and we call the multiplicative factor L the security loss. Asymptotically, you can tolerate any polynomial security loss and still end up with a secure scheme, because if epsilon_R is negligible and L is polynomial, then epsilon_A is also negligible. But if you want to use the reduction to derive concrete parameters for your scheme, to use it in practice, then the security loss L becomes important: if your security loss is, say, 2^30, then you pay 30 bits of security. Concretely, assume you want to achieve 128 bits of security and your security loss is the number of users in the system, which, as we saw on the previous slide, can be 2^30. Then you need to set the parameters of the underlying assumption to support 158 bits of security, which leads to inefficient concrete parameters. Motivated by this, we want tight reductions, tight security proofs, where the security loss L is a small constant and we do not lose much security.

So, as I showed you, the single-user security notion for digital signatures non-tightly implies the multi-user notion. In fact, the multi-user notion, in addition to being close to real-world applications, also tightly implies other cryptographic primitives, such as identity-based signatures and authenticated key exchange. So it is clear that if we want these primitives in a tight way, we had better construct tightly secure signature schemes in the multi-user setting.

Let's see what already exists in this area, that is, the state of the art for digital signatures in terms of tightness. For classical assumptions such as DLOG and RSA, we have tightly secure schemes in the single-user setting, and we also have such schemes from post-quantum assumptions such as lattices. But as I motivated, we should look at the multi-user notion if we want to say something about reality. In the classical setting, we have two quite practical schemes; one is from last year's PKC by Diemert et al. The problem is that we do not have any such scheme in the post-quantum setting: we do not know how to do that, and filling this gap is the main goal of this work. I should mention that there is a folklore approach where you use an OR proof together with an online-extractable NIZK, for example via the Unruh transform, but that leads to non-compact signatures, in the sense that the number of group elements or vectors contained in one signature grows linearly with the security parameter. This is not efficient, so we are not interested in this approach. To summarize, our goal is a signature scheme with tight multi-user security from post-quantum assumptions, such as lattice-based assumptions, with compact signatures.
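Before going on, here is the parameter arithmetic from a moment ago as a few lines of Python; the numbers are the ones from the talk.

```python
import math

# eps_A <= L * eps_R: a reduction with security loss L costs log2(L) bits.
target_bits = 128            # desired security of the signature scheme
n_users = 2 ** 30            # security loss L = number of users
loss_bits = math.log2(n_users)
print(target_bits + loss_bits)   # 158.0 bits needed from the assumption
```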
Okay, so let's see why this is not easy, or at least not trivial, to achieve. Think about the following scenario: you have already constructed a scheme, and now you want to prove its security. The question is, which of the secret keys of the N users should the reduction know? Say it knows all the secret keys except one. Then even if the adversary just asks for one random secret key, it hits the unknown one with probability 1/N, meaning we end up with a non-tight reduction. So our reduction should always be able to answer corruption queries, meaning it has to know all the secret keys. But if we know all the secret keys, we can forge without using the adversary, and this contradicts the hardness of the underlying problem. This key idea underlies many impossibility results in the area of multi-user security and tightness. If we look at these impossibility results, we see that they more or less require that the secret keys of the scheme are unique. This already suggests how to circumvent them, namely by having non-unique secret keys.

So let's see how to do that. Our high-level idea is that a public key is composed of two public keys of some underlying scheme, and a secret key is one of the two corresponding secret keys. The reduction knows one secret key for each user, and which one it knows is chosen at random. Signatures should not reveal to the adversary which secret key the reduction holds, so signing should involve some witness-indistinguishable proof. Finally, we want the forgery the adversary returns to help the reduction whenever it is with respect to the key for which the reduction does not know a secret key. This means that in the end we succeed with probability one half, giving a security loss of one bit, which is tight. That is the high-level idea, and to implement it we need some form of OR proof to hide which secret key the reduction actually knows.

This high-level idea is the core of the Diemert et al. framework from last year's PKC, which achieves tightly secure signatures in the multi-user setting from classical assumptions. So let's see how this framework works and how we can transfer, or rather generalize, it to achieve the same thing in the post-quantum setting. Diemert et al. start with a lossy identification scheme that has a random self-reducibility property; I will tell you what a lossy identification scheme is in a few slides. They then use a sequential OR proof, which is just one form of OR proof, to transform it into a digital signature scheme, and in the end they can instantiate their framework from classical assumptions such as DDH. Our first step is to generalize this notion of a lossy ID scheme with random self-reducibility to a so-called multi-key lossy identification scheme. In the next step, we show that you can achieve such a multi-key lossy identification scheme if you start with a dual-mode commitment. We then instantiate this dual-mode commitment from LWE and from a group-action-based assumption, which captures isogenies. Finally, we also show that Diemert et al.'s instantiations are in fact instantiations of our more general framework. In the following, I want to look at multi-key lossy identification and how we can construct it from LWE; but first, below is a small sketch of the two-key reduction strategy I just described.
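This is a hedged Python sketch of the key setup and of when a forgery helps the reduction; `sub_keygen` and the function names are illustrative stand-ins, not the paper's actual construction.

```python
import secrets

# Sketch of the non-unique-secret-key idea. Each user's public key is a
# pair of public keys of an underlying scheme, and a valid secret key is
# either one of the two underlying secret keys. `sub_keygen` is a
# stand-in for the underlying scheme's key generation.

def setup_user(sub_keygen):
    (pk0, sk0), (pk1, sk1) = sub_keygen(), sub_keygen()
    b = secrets.randbits(1)          # which of the two secret keys we keep
    pk = (pk0, pk1)
    sk = (b, (sk0, sk1)[b])          # enough to answer signing and corruptions
    return pk, sk

def forgery_is_useful(b, forged_under):
    # The reduction can exploit the forgery only if it is with respect to
    # the key it does NOT know. Witness indistinguishability hides b from
    # the adversary, so this happens with probability 1/2: a one-bit loss.
    return forged_under != b
```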
So, what is lossy identification? We start with a canonical identification scheme, where a prover holding a secret key can convince a verifier holding a public key that it knows the secret key. It does this by sending a message R, receiving a challenge C, and sending back a response S; an example is the Schnorr identification scheme. The verifier can then check this transcript, and we want completeness, in the sense that the honest prover can always convince the verifier.

Now, that is a canonical identification scheme; what is a lossy identification scheme? A lossy identification scheme is a special form of identification scheme with two modes of public keys. First, there is the normal mode, which is what I explained on the previous slide, and here we have completeness. But there is also a lossy mode, where public keys are generated without a secret key, and for these we have a form of statistical soundness: even an unbounded adversary cannot convince the verifier with non-negligible probability. Finally, we want these two modes to be computationally indistinguishable. Abdalla, Fouque, Lyubashevsky, and Tibouchi (AFLT), who introduced this primitive, showed that it tightly implies single-user security for Fiat-Shamir signatures. Since we want to look at multi-user security, we generalize this to multi-key lossy identification. Here we have basically the same guarantees, but now we want N public keys to be computationally indistinguishable from N keys output by the lossy key generator, and these can possibly be correlated in some sense. We know that a multi-key lossy identification scheme implies a signature scheme via the sequential OR-proof technique, as in Diemert et al.'s framework. The question is how to construct such a scheme from lattice assumptions, and this is what I will tell you on the following slides.

First of all, what is the problem here? We want to show that there is some way to generate N keys that is computationally indistinguishable from N lossy keys. In particular, as we want tight security, this indistinguishability should not depend on the number of keys, because that number will correspond to the number of users in our system later. For lattices, or in general, whenever you want to show something like this tightly, the way to go is random self-reducibility. But for lattices we only have this for the plain versions of the problems, in the sense that the number of samples does not affect the security guarantee; for structured variants such as ring LWE or module LWE, we do not have this guarantee. Unfortunately, existing lossy identification schemes are only known for the ring and module versions of the problem; they make use of the ring structure, and it is not clear how to transfer them to the plain setting. So our goal is a multi-key lossy identification scheme from plain LWE. To do that, we do not rely on these previous ideas; instead we rely on something that seems unrelated, namely Regev encryption. In Regev encryption, your public key is a matrix A, which is basically an LWE sample. A ciphertext for a message M, with some randomness s, which is typically Gaussian, is A times s plus an encoding of the message. The details do not matter here, but we can already use this to construct the lossy mode of our identification scheme. In this lossy mode, the prover first sends a random ciphertext to the verifier and gets back a random message as the challenge. The prover must then open this ciphertext to that message by sending the randomness s. But this is an encryption scheme, so for each ciphertext there is only one plaintext that it can be opened to, and therefore even an unbounded adversary has essentially no chance of winning this game, except with negligible probability.
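As a toy illustration of this lossy mode, here is a Python sketch of the verifier's opening check with a Regev-style encoding; it shows that a ciphertext decodes to a unique message, which is the intuition behind the statistical soundness. The parameters, shortness bound, and exact equation are illustrative assumptions of mine, nowhere near secure, and the scheme in the paper may differ in details.

```python
import numpy as np

# Toy parameters only; not secure.
q, n, m = 3329, 16, 64
rng = np.random.default_rng(0)

def encode(bits):
    # Encode each message bit as 0 or q//2, as in Regev encryption.
    return (np.asarray(bits) * (q // 2)) % q

A = rng.integers(0, q, size=(m, n))     # lossy key: a random matrix

# An honest "ciphertext" c = A s + encode(M) with short Gaussian-like s.
s = np.rint(rng.normal(0, 2, size=n)).astype(int)
M = rng.integers(0, 2, size=m)
c = (A @ s + encode(M)) % q

def opens(A, c, s, M):
    # Verifier's check: s is short and c - A s decodes to M.
    if not np.all(np.abs(s) <= 20):     # illustrative shortness bound
        return False
    diff = (c - A @ s) % q
    decoded = ((diff > q // 4) & (diff < 3 * q // 4)).astype(int)
    return bool(np.all(decoded == M))

print(opens(A, c, s, M))        # True: the honest opening verifies
M2 = rng.integers(0, 2, size=m) # a fresh random challenge message
print(opens(A, c, s, M2))       # False (w.o.p.): c decodes to M only
```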
Okay, so that was the lossy mode. We also need a complete mode, a normal mode, for our identification scheme. To get it, we generate the matrix A together with a trapdoor. Such a trapdoor allows you to sample short Gaussian preimages of the function induced by A, and the matrix is statistically close to a uniformly random matrix, which in turn is computationally close, by LWE, to our lossy-mode key. Our final identification scheme, which uses the trapdoor as its secret key, then works like this: you send a random ciphertext, you get a random message M, and you use your trapdoor to sample the randomness s that opens the ciphertext to the message M.

What I have shown you now is how to use LWE to get a multi-key lossy identification scheme, and we know how to turn that into the signature scheme we want. This is good, but let's take a step back. We identified here that the Regev encryption scheme has a dual-mode property: if you generate the keys one way, you can use it as an encryption scheme, and if you generate the keys differently, you can use it as a perfectly hiding commitment, basically a trapdoor commitment. We abstract this in the notion of a dual-mode commitment scheme, and we show that you can achieve it from LWE, as I showed you, and also from a group-action-based assumption and from classical assumptions. This is our final framework.

To summarize our results: we give a generic framework from dual-mode commitments to tightly secure signatures in the multi-user setting. We can instantiate it from a variety of assumptions, even post-quantum assumptions such as LWE or isogenies, and it gives us the first tightly secure post-quantum signature scheme in the multi-user setting. Our second contribution is an impossibility result, which I did not cover in this talk. You have seen that the framework of Diemert et al., and also our final framework, use sequential OR proofs, which are one form of OR proof. There is also another prominent form, the parallel OR proof, and in our work we show that using parallel OR proofs, you cannot achieve tightly secure signatures in the multi-user setting. If you are interested in that, you can look into our paper.

I want to conclude with some open problems which are related to our results and can be useful for future work. We constructed a tightly secure signature scheme in the multi-user setting from LWE, but what about other assumptions, like search assumptions such as SIS, or structured assumptions like ring LWE and ring SIS? These would give us more efficient schemes than what we have so far. Also, our scheme is proven secure in the random oracle model, but for post-quantum assumptions you would like to have the quantum random oracle model or the standard model; so the other direction is about the model: can you achieve a more desirable model? Okay, that's it from my side. Thank you for your attention, and if you are interested, you can look up our paper on ePrint. Thank you.