Hi, my name is Lin Lyu. Welcome to the talk on our Asiacrypt 2021 paper, "Digital Signatures with Memory-Tight Security in the Multi-Challenge Setting". This is joint work with Denis Diemert, Kai Gellert, and Tibor Jager. We are from the University of Wuppertal. In this talk, I will first provide some background on memory-tight reductions and digital signatures. I will also talk about the security of digital signatures in the multi-challenge setting. Then I will recall some impossibility results on getting memory-tight digital signatures in this setting. In the second part of this talk, I will describe how we circumvent these impossibility results. Our approach can be divided into two steps. In the first step, we achieve a weaker memory-tight security in the multi-challenge setting, which we call MS-UF-CMA1 security, with the help of a special kind of reduction, which we call canonical reductions. In the second step, we propose a generic, memory-tight transformation from MS-UF-CMA1 security to MS-UF-CMA security. Next, I will talk about instantiations of our approach and compare them with existing schemes. Finally, I summarize our work and raise some interesting open problems. First, let's talk about security reductions. When we build a cryptographic scheme and want to prove a certain security property of the scheme based on some assumption, what we usually do is build a reduction R which transforms any adversary breaking the security of the scheme into a problem solver. In this work, we are interested in non-interactive problems, which can be computational problems like CDH or RSA, or decision problems like DDH. The reduction gets an instance π of the problem and simulates the initial input of the adversary. When the adversary makes oracle queries to the reduction, the reduction R simulates the oracles and their responses for the adversary.
Finally, the adversary outputs something, and the reduction uses this output together with the adversary's view to extract a solution S to the problem π and outputs that solution. We usually measure the quality of a reduction via its running time and its success probability or advantage. We say a reduction R is tight if the overall running time of the reduction is approximately the same as the running time of the adversary, and the success probability or advantage of the reduction is approximately the same as that of the adversary. Very often, reductions are not tight. For example, we often see reductions whose running time is tight but whose advantage shrinks by a factor of L. This factor L is called the security loss of the reduction R, and the reduction is not tight if L depends on the adversary, for example, if L equals the number of queries made by the adversary. Auerbach, Cash, Fersch, and Kiltz noticed that memory is also a valuable computational resource and should be considered as a measure for reductions. They proposed the concept of memory-tight reductions: a reduction is memory-tight if the overall memory consumed by the reduction is approximately the same as the memory consumed by the adversary. This means that the additional memory consumed by the reduction itself is small compared with the memory consumed by the adversary, and it is independent of the adversary. Providing memory-tight reductions for cryptographic schemes is of great importance, especially when the underlying problem is a memory-sensitive one. In this talk, we call a reduction fully tight if it is tight in terms of time, advantage, and memory. Next, let us briefly recall the public-key primitive of digital signatures. A digital signature scheme allows a secret-key holder to authenticate any message by generating a signature for the message using the secret key.
Anyone who has the corresponding public key can publicly verify the validity of the signature for that message. For the security of digital signatures, the most commonly accepted notion is existential unforgeability under chosen-message attacks, or UF-CMA security. In the UF-CMA security game, an adversary gets a public key as initial input and can make multiple adaptive queries to a sign oracle. In each query to the sign oracle, the adversary adaptively selects a message m_i and gets a signature σ_i for m_i. At the end of the game, the adversary makes a forge query by submitting a message-signature pair (m*, σ*) to the challenger. A UF-CMA adversary wins if the final forgery is valid, which means that σ* is a valid signature with respect to the message m*, and m* has never been signed before. We are also interested in a stronger version of this notion, called strong UF-CMA (sUF-CMA) security. In the sUF-CMA game, we relax the winning condition and allow the adversary to win even if the message m* has been signed before, as long as the signature σ* is new. Put another way, this means the final message-signature pair is not one of the pairs that was queried to and answered by the sign oracle. Both of these notions consider single-challenge adversaries, where the adversary can only make one forge query at the end of the game. However, for memory-tightness we are particularly interested in multi-challenge security. Auerbach et al. formalized a multi-challenge UF-CMA security notion, which we call M-UF-CMA security. In the M-UF-CMA security game, the adversary gets multiple chances to make forge queries at any time and wins if at least one of them is a valid forgery. We also consider the strong version of this notion and call it MS-UF-CMA security.
If we do not consider memory, then single-challenge UF-CMA security tightly implies multi-challenge UF-CMA security: the reduction can simply use memory to store all message-signature pairs for the multi-challenge adversary, test whether each forgery pair is fresh, and output the first fresh pair as its own forgery. However, we cannot assume that every single-challenge adversary will use memory to store the message-signature pairs, and a tight reduction is not obvious once memory is taken into consideration in the multi-challenge setting. Previous works have even shown some impossibility results for digital signatures in terms of memory-tight multi-challenge security. More precisely, Auerbach et al. showed that certain black-box reductions from multi-challenge security to single-challenge security must be non-tight with respect to memory or time. They provided a memory-tight reduction for the RSA full-domain hash (RSA-FDH) signature scheme, but it is not tight in terms of advantage or time. Wang et al. generalized their results and showed that certain natural black-box reductions from multi-challenge security to any computational problem must be non-tight with respect to memory or time. They also showed some lower bounds for security in the multi-user setting and the collision-resistant hashing setting. It seems that we only have bad news for memory-tight signatures in the multi-challenge setting. In our work, we propose an approach to circumvent these lower bounds. Instead of focusing on the properties of reductions that make the impossibility results hold, we focus on finding which properties of a reduction are sufficient to achieve memory-tightness in the multi-challenge setting. However, we need to keep in mind the lower bounds by Auerbach et al. and Wang et al. about black-box reductions in the multi-challenge setting. So what we consider are non-black-box reductions.
Furthermore, we consider a weaker security notion in the multi-challenge setting. However, even in this weaker setting, we still face the main challenge that the reduction R must be able to distinguish a fresh forgery from a replayed message-signature pair without using memory. Our first step towards solving this problem is to consider a weaker security notion: the one-signature-per-message security, or CMA1 security. The CMA1 security game differs from the classical CMA security game in that if the adversary queries the sign oracle for the same message multiple times, the CMA1 game only generates a fresh signature for the first query, and the adversary receives that same signature as the response to all subsequent queries for the same message. That is, for each message, the adversary gets only one freshly generated signature. Note that usually we would still need memory to store message-signature pairs in order to replay the same signature to the adversary when the same message is queried again. However, we observe that if there exists a reduction R for CMA1 security such that R simulates signatures in a deterministic way, then intuitively R can handle all sign queries and forge queries without storing any of the message-signature pairs. The reason is that R can rerun the signature-simulation algorithm each time it receives a sign query; since this algorithm is deterministic, the same signature will be generated for the same message. For a forge query, the reduction R can determine whether the message-signature pair (m*, σ*) is fresh by first simulating the signature for m* and then comparing σ* with the simulated signature. If (m*, σ*) is a replay, then σ* must equal the simulated signature; conversely, if σ* does not equal the simulated signature, then (m*, σ*) must be a fresh pair.
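The re-simulation trick above can be illustrated with a small sketch. This is a toy illustration only, not the paper's actual reduction: I use an HMAC under a key standing in for the reduction's deterministic signature simulation, and all names (`simulate_signature`, `is_fresh_forgery`, etc.) are hypothetical.

```python
# Toy sketch: a reduction that simulates signatures deterministically
# can answer sign and forge queries without storing message-signature pairs.
import hmac, hashlib

SIM_KEY = b"reduction-internal-key"  # stands in for R's internal randomness

def simulate_signature(message: bytes) -> bytes:
    # Deterministic: the same message always yields the same signature,
    # so no table of previously answered queries is needed.
    return hmac.new(SIM_KEY, message, hashlib.sha256).digest()

def answer_sign_query(message: bytes) -> bytes:
    # CMA1: every query for the same message gets the same signature.
    return simulate_signature(message)

def is_fresh_forgery(message: bytes, signature: bytes) -> bool:
    # Re-simulate and compare: a replayed pair must equal the
    # deterministic simulation; any other signature is fresh.
    return signature != simulate_signature(message)

sigma = answer_sign_query(b"hello")
assert not is_fresh_forgery(b"hello", sigma)     # replay is detected
assert is_fresh_forgery(b"hello", b"\x00" * 32)  # a different signature is fresh
```

Note that the check uses only constant memory: nothing about past queries is stored, everything is recomputed on demand.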
In this way, R can handle all sign queries and forge queries without storing the message-signature pairs. We formalize this intuition as canonical reductions. We say a reduction R is (L, δ)-canonical if it transforms any strong UF-CMA1 adversary A into a problem solver for some non-interactive problem, and the reduction R works as shown in this figure. A canonical reduction R takes a problem instance π as input and uses an algorithm R_Gen to simulate a public key and a secret key. The simulated public key is provided to the adversary A, and the simulated secret key is used to simulate signatures. The adversary can make sign queries, and the canonical reduction R uses an algorithm R_Sign to simulate signatures. If the random oracle model is considered, the adversary can also make random oracle queries, and the canonical reduction R uses an algorithm R_Hash to simulate the random oracle. At the end of the game, the single-challenge adversary makes its forge query. The canonical reduction R uses a Check algorithm to check the message-signature pair, and Check outputs 1 if and only if σ* is a valid signature for m* and σ* does not equal the simulated signature for m*. If the check passes, an algorithm R_Extract is run to extract a solution S. Otherwise, if the check fails, an algorithm U is run to generate a trivial solution S. Finally, R outputs the solution S and terminates. We note that for canonical reductions, the algorithms R_Sign, R_Hash, R_Extract, and Check are all deterministic, but they have access to the same random function. Finally, we require that the canonical reduction works, meaning that the advantage of R is at least the advantage of A divided by L, minus δ. This is the definition of canonical reductions. Note that canonical reductions are restricted in several aspects.
First, they only deal with single-challenge adversaries; more precisely, they only deal with sUF-CMA1 adversaries. Secondly, even though canonical reductions do not store message-signature pairs, they are not necessarily memory-tight, because R also needs to simulate the random function accessed by its algorithms, and this takes a lot of memory. Thirdly, canonical reductions work for CMA1 adversaries, but not for standard CMA adversaries. Even though canonical reductions are restricted in these aspects, they serve as an important tool in our approach. With their help, we can prove an important theorem, which is the main theorem of our work. The theorem states that if Σ is a signature scheme, π is a non-interactive problem, and R is an (L, δ)-canonical reduction from breaking the sUF-CMA1 security of Σ to solving the problem π, then using the canonical reduction R and a memory-tightly secure pseudorandom function, we can build another reduction R′ from breaking the multi-challenge MS-UF-CMA1 security of Σ to solving the problem π, such that for any adversary A′, the overall running time and memory of R′ are approximately the same as the running time and memory of A′, and the advantage of R′ is approximately the advantage of A′ divided by L, minus δ. The proof idea of this theorem is that we construct R′ using the algorithms of the canonical reduction R and instantiate the random function with the pseudorandom function. The reduction R′ runs the Check algorithm on every forge query made by the multi-challenge adversary A′ and uses only the first forge query that passes the check to extract a solution. Note that R′ is now tight in terms of time and memory.
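The way R′ processes the multi-challenge adversary's forge queries can be sketched as follows. This is a minimal toy under stated assumptions, not the paper's construction: the PRF is played by HMAC-SHA256, the canonical reduction's Check is reduced to the comparison against the deterministic simulation, and the extraction step is a placeholder.

```python
# Toy sketch of R': check each forge query on the fly, extract from the
# first one that passes Check, and store nothing in between. The shared
# random function is instantiated with a PRF (here: HMAC-SHA256).
import hmac, hashlib

PRF_KEY = b"prf-key"  # key of the PRF replacing R's random function

def prf(x: bytes) -> bytes:
    return hmac.new(PRF_KEY, x, hashlib.sha256).digest()

def r_sign(m: bytes) -> bytes:
    # Deterministic signature simulation of the canonical reduction R.
    return prf(b"sign" + m)

def check(m: bytes, sigma: bytes) -> bool:
    # Placeholder Check: passes iff sigma differs from the simulation.
    return sigma != r_sign(m)

def r_prime(forge_queries):
    # Stream over forge queries; memory usage is independent of their number.
    for (m, sigma) in forge_queries:
        if check(m, sigma):
            return ("extract", m, sigma)  # first query passing Check
    return ("trivial", None, None)        # no forge query passed Check
```

For example, `r_prime([(m, r_sign(m)), (m, other_sigma)])` skips the replayed pair and extracts from the second query.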
If the parameters of the canonical reduction R are good, meaning that L is a constant and δ is statistically or negligibly small and independent of the adversary, then the reduction R′ is also tight in terms of advantage, which means R′ is fully tight for the MS-UF-CMA1 security of Σ. This is a nice result: we get a fully tight reduction, but we still need a way to upgrade the security from CMA1 security to standard CMA security. Here comes our second step. We propose a generic transformation which achieves this goal and preserves tightness in all three dimensions. Suppose Σ′ is a signature scheme. We transform Σ′ into another signature scheme Σ by including an additional random nonce in the signature. The nonce is chosen uniformly at random from bit strings of length 2λ and is then signed together with the message. We can prove the theorem that if Σ′ is memory-tightly MS-UF-CMA1 secure, then Σ is memory-tightly MS-UF-CMA secure. Actually, we can prove more than what we show here: this transformation not only preserves memory-tightness but also preserves tightness in terms of time and advantage. The proof idea of the theorem is that we can simulate the CMA game based on a CMA1 game. The CMA adversary may query the same message multiple times in the CMA game. However, the nonces are chosen uniformly and independently, which means that they are unlikely to repeat, so the messages together with the nonces will also not repeat. This completes our approach, so let me summarize it here. We first assume that for a signature scheme Σ′ there is an (L, δ)-canonical reduction R which proves that Σ′ is sUF-CMA1 secure when the non-interactive problem π is hard. Then, in the first step, we construct a memory-tight reduction R′ using R, and R′ proves that Σ′ has memory-tight MS-UF-CMA1 security under the same assumption.
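The nonce transformation from the second step can be sketched as follows. This is a minimal sketch under an assumption: the underlying scheme Σ′ is played here by a keyed MAC (HMAC) purely as a stand-in signer, so key handling is symmetric; with a real signature scheme the signing and verification keys would of course differ.

```python
# Minimal sketch of the nonce transformation: to sign m under the new
# scheme, draw a fresh 2λ-bit nonce r and sign (m, r) under the
# underlying scheme. Repeated messages then become distinct inputs to
# the underlying scheme (up to nonce collisions), which is why CMA
# security reduces to CMA1 security.
import os, hmac, hashlib

LAMBDA = 128  # security parameter λ in bits; the nonce has length 2λ

def sign_underlying(sk: bytes, m: bytes) -> bytes:
    # Stand-in for the underlying scheme's signing algorithm.
    return hmac.new(sk, m, hashlib.sha256).digest()

def sign(sk: bytes, m: bytes):
    r = os.urandom(2 * LAMBDA // 8)       # fresh uniform 2λ-bit nonce
    return (r, sign_underlying(sk, m + r))  # signature is (nonce, inner sig)

def verify(sk: bytes, m: bytes, sig) -> bool:
    r, inner = sig
    return hmac.compare_digest(inner, sign_underlying(sk, m + r))
```

The 2λ-bit nonce length matches the signature-size expansion mentioned later in the comparison: a birthday bound over 2λ bits keeps the collision probability negligible across all sign queries.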
Now we have successfully gone from the single-challenge setting to the multi-challenge setting, but we are still in the CMA1 case. Then we apply our second step to transform Σ′ into Σ with the help of nonces, and Σ has memory-tight MS-UF-CMA security. Furthermore, if L is a constant and δ is small and independent of the adversary, then Σ is a fully tight signature scheme in all three dimensions. To instantiate our approach, we only need to construct a signature scheme Σ′ and provide a canonical reduction proving the single-challenge sUF-CMA1 security of this scheme. We are able to do this for three different schemes. The first scheme is based on a lossy identification scheme by Abdalla et al. We show a (1, δ)-canonical reduction for their lossy-ID-based signature scheme. The underlying non-interactive problem, or assumption, is that a lossy public key is indistinguishable from a normal public key for the lossy ID scheme, and the δ here is statistically small. So by our generic approach we get a fully tight MS-UF-CMA secure digital signature scheme. The second scheme is the RSA full-domain hash (RSA-FDH) scheme proposed by Bellare and Rogaway. Auerbach et al. proposed a reduction for the RSA-FDH scheme, and this reduction can be viewed as an (e·q_s, 0)-canonical reduction, where e is Euler's number and q_s is the number of sign queries made by the adversary. If we apply our approach directly to this reduction, we will not get a fully tight scheme, because the security loss here is large.
However, Katz and Wang proposed a slight variant of the RSA-FDH scheme, which we call the RSA-FDH+ scheme, and we can provide a (2, 0)-canonical reduction to the RSA problem for the RSA-FDH+ scheme. Plugging this reduction into our approach, we get a second fully tight MS-UF-CMA secure digital signature scheme. We also get a similar result for the Boneh–Lynn–Shacham scheme, or BLS scheme. This table shows a comparison between these schemes. You can see that we get three fully tight MS-UF-CMA secure digital signatures, and the cost of achieving full tightness is a 2λ-bit expansion in the signature size, because the additional nonce we use in the second step has length 2λ. Here I want to bring attention to an independent and concurrent work, "Hiding in Plain Sight: Memory-Tight Proofs via Randomness Programming" by Ghoshal, Ghosal, Jaeger, and Tessaro. They studied the problem of getting memory-tight MS-UF-CMA secure signature schemes via black-box reductions. Their construction is similar to ours in the second step, as they also use random nonces, but their approach is completely different from ours. We want to send our acknowledgements to the four authors of this paper for spotting a gap in the proof of our main theorem. We closed this gap in the full version of our paper by introducing a new property for canonical reductions and a refined analysis. We also want to send our acknowledgements to the anonymous reviewers of Asiacrypt 2021 for their insightful and helpful comments. Finally, let me summarize our work. We propose a generic approach to getting memory-tight MS-UF-CMA secure digital signatures. We instantiate our approach and get three digital signatures with fully tight MS-UF-CMA security. Our results do not conflict with the impossibility results by Auerbach et al. or Wang et al.
We can circumvent these impossibility results because we focus on a special kind of non-black-box reductions, which we call canonical reductions. A limitation of our work is that all the memory-tight signature schemes we consider, whether fully tight or not, are proven secure in the random oracle model. So we think that getting a memory-tight, multi-challenge-secure digital signature scheme in the standard model is an interesting open problem. If you are interested in our paper, you can find the full version on ePrint via the link shown here. You can find the slides on the Asiacrypt 2021 website, and if you have any questions about our work, feel free to send us emails, or you can raise your question in the interactive session of the Asiacrypt 2021 conference, and we will try our best to answer them. That's all for this talk. Thank you very much for your time and attention.