This talk is joint work with my student Rongmao Chen and my colleagues Willy Susilo, Guomin Yang, and Yi Mu; we are from the University of Wollongong, Australia. The second author is now working at the National University of Defense Technology, China. Let me quickly revisit the signature definition and its security model. In our definition, a signature scheme is composed of four algorithms: system parameter generation, key generation, signing, and verification. In this work, we use σ_m to denote a signature on a message m. The standard security model for digital signatures is existential unforgeability against chosen-message attacks. In this model, the challenger first generates the system parameters and gives the public key to the adversary. The adversary can then adaptively choose messages for its signature queries. At the end, the adversary wins the game if it can output a new, valid signature on a new message, denoted by m*. In the corresponding security reduction, we are going to solve a computational hard problem. The simulator first uses a problem instance to simulate the system parameters, the public key, and all queried signatures for the adversary, and then uses the forged signature to solve the hard problem. In a perfect simulation, the signatures can be classified into two sets: simulatable and reducible. A signature is simulatable if it can be computed by the simulator; precisely because it is computable, such a signature cannot be reduced to solving the hard problem. A signature is reducible if the simulator can use it to solve the hard problem; precisely because it is reducible, such a signature cannot be computed by the simulator, for otherwise the simulator could solve the hard problem without the adversary's help.
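As a rough illustration of the four-algorithm interface and nothing more, here is a minimal Python sketch. It is not the scheme from this talk: an HMAC stands in for the "signature" purely to show the shape of (SysGen, KeyGen, Sign, Verify), and all names are illustrative.

```python
import hmac
import os

# Toy stand-in for the four algorithms (SysGen, KeyGen, Sign, Verify).
# NOT the scheme from the talk: an HMAC plays the role of sigma_m only to
# illustrate the interface shape. All names here are illustrative.

def sys_gen():
    return {"hash": "sha256"}           # public system parameters

def key_gen(params):
    sk = os.urandom(32)                 # secret signing key
    pk = sk                             # symmetric toy: verifier shares the key
    return pk, sk

def sign(params, sk, m: bytes) -> bytes:
    return hmac.new(sk, m, params["hash"]).digest()          # sigma_m

def verify(params, pk, m: bytes, sigma: bytes) -> bool:
    expected = hmac.new(pk, m, params["hash"]).digest()
    return hmac.compare_digest(sigma, expected)
```

In the EUF-CMA game, the adversary queries `sign` on messages of its choice and wins by producing a pair (m*, σ) that passes `verify` for a fresh m*.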
So a successful security reduction for digital signatures requires that all queried signatures be simulatable and that the forged signature be reducible. These are the essential conditions. In a security reduction, we use an adversary's attack on the proposed scheme to solve a computational hard problem, with a time cost denoted by T and a loss factor denoted by L. The quality of a reduction is associated with its loss factor: a reduction is tight if L is small or constant, and loose if L is linear in the number of queries, such as the hash queries or the signature queries. A loose reduction is not good enough, because we then have to increase the security parameter to compensate for the security loss. So an inherent question is how to achieve a tight reduction for digital signatures. In the literature, one uses a random salt, denoted here by r, in the signature generation. Suppose the space of the random salt is R. We try to program the simulation so that, for each message, the space R is split into two sets, which we call the simulatable space and the reducible space. A signature on a message m using r is simulatable if r is chosen from the simulatable space, and reducible if r is chosen from the reducible space. To handle all of the adversary's signature queries, the simulator picks r from the simulatable space, so that there is no abort during the signature queries. The security reduction is successful if the forged signature is reducible, and the probability of a successful reduction depends on the size of the reducible space. For example, the reduction is tight with loss factor two when the reducible space and the simulatable space have the same size.
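The salt-partitioning argument can be checked with a quick Monte Carlo sketch. The sizes below are illustrative, not from the talk: the salt space R is split into equal halves, and a forgery is useful to the simulator exactly when its salt lands in the reducible half, giving success probability about 1/2 and hence loss factor about 2.

```python
import random

# Monte Carlo sketch of the salt-partitioning argument (illustrative sizes,
# not the talk's scheme). The salt space R is split into a simulatable half
# and a reducible half; the forged signature helps the simulator iff its salt
# falls in the reducible half, so the loss factor is |R| / |reducible|.
R_SIZE = 1 << 16
REDUCIBLE_BOUND = R_SIZE // 2           # reducible space = lower half of R

random.seed(1)
trials = 200_000
useful = sum(random.randrange(R_SIZE) < REDUCIBLE_BOUND for _ in range(trials))
success_prob = useful / trials          # ~ 1/2, i.e. loss factor ~ 2
```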
The condition of this approach to tight reduction is that the signature generation must use a random salt. However, not all signature schemes allow a random salt; unique signatures are one example. A unique signature is a very special digital signature scheme: roughly speaking, if σ_m and σ'_m are both valid signatures on m, then they must be identical. So in the signature generation of a unique signature scheme, we cannot use a random salt, because a random salt would produce distinct signatures. Therefore, the approach of using a random salt to achieve a tight reduction is not suitable for unique signatures; we cannot use it to obtain a tight reduction for a unique signature scheme. Suppose a tight reduction for unique signatures is the treasure we are looking for, and this highway is the security reduction we currently know how to program. The treasure is at the end of the highway, and the question is: can we reach it and find the treasure? In the past 15 years, three stop signs have been placed on this highway, saying that it is impossible to reach it; the corresponding results were published at EUROCRYPT 2002, PKC 2012, and EUROCRYPT last year. All three works show that any security reduction for a unique signature scheme, or its generalization called an efficiently re-randomizable signature scheme, must have a loss factor. The loss factors are not the same in the three works, but the lower bound is q_s, the number of signature queries made by the adversary. This is the known result from those three works. However, we found that we can bypass these stop signs and reach the treasure by a very tricky path, with the help of a new tool: a new kind of security reduction for digital signatures.
We call it query-based reduction. In the traditional security reduction for digital signatures, the simulator uses a forged signature computed by the adversary to solve an underlying hard problem; we call this a forgery-based reduction. In a query-based reduction, the simulator instead uses hash queries made by the adversary to solve the underlying hard problem. The only difference is the way of solving the hard problem: one uses the forgery, the other uses the queries. Actually, query-based reduction is not a completely new idea, because it has already been used to prove security for encryption schemes in the indistinguishability security model under computational hardness assumptions. But I could not find any work that uses this kind of query-based reduction for digital signatures. Let me give you a simple example to explain it. Suppose the system parameters are a pairing group, including a cryptographic hash function whose output space is the group G; the public key is g^α and the secret key is α. A signature on a message here is composed of two BLS signatures: the first BLS signature is on the message m, and the second BLS signature is on m concatenated with the first BLS signature. Suppose in the security reduction, in the random oracle model, α is set to a and the hash query on m is answered with g^b; then the first block signature is g^{ab}. If the adversary wants to forge a signature on message m, it must query both m and m concatenated with the first block signature to the random oracle in order to complete the forgery; otherwise, the adversary has no non-negligible advantage in forging the full signature.
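The two-block example can be sketched with a toy group. Here the multiplicative group modulo a prime stands in for the pairing group; real BLS verification needs a pairing and is omitted, and none of the parameters are secure choices. Only the chaining structure σ1 = H(m)^α, σ2 = H(m‖σ1)^α is illustrated.

```python
import hashlib
import secrets

# Toy sketch of the chained two-block signature from the talk's example:
#   sigma1 = H(m)^alpha,  sigma2 = H(m || sigma1)^alpha
# The multiplicative group mod a prime stands in for the pairing group.
# Real BLS verification needs a pairing and is omitted; parameters are
# NOT secure choices.
P = 2**127 - 1                          # stand-in modulus (a Mersenne prime)
G = 3                                   # stand-in generator

def hash_to_group(data: bytes) -> int:
    e = int.from_bytes(hashlib.sha256(data).digest(), "big") % (P - 1)
    return pow(G, e, P)                 # models the random oracle H into G

def keygen():
    alpha = secrets.randbelow(P - 2) + 1
    return pow(G, alpha, P), alpha      # public key g^alpha, secret key alpha

def sign(alpha: int, m: bytes):
    s1 = pow(hash_to_group(m), alpha, P)                    # first block
    s2 = pow(hash_to_group(m + s1.to_bytes(16, "big")), alpha, P)  # chained block
    return s1, s2
```

Note that signing is deterministic, so the construction remains a unique signature; forging forces the adversary to query both H(m) and H(m‖σ1), which is exactly what the query-based reduction exploits.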
So the CDH solution g^{ab} appears in a hash query, and it was computed by the adversary. Our full signature scheme and its reduction are quite similar to this construction. The system parameters and the public key are the same, but in our full scheme each signature is composed of n + 1 block signatures; we call each one a block signature. We use σ_{m,i} to denote the first i block signatures: the second block signature is on m concatenated with the first block signature, so σ_{m,2} denotes the first two block signatures, and so on. For this signature scheme, we can prove that the security loss is only 100 for n = 25, even when q is as large as 2^50. In this presentation, I am going to show the security reduction for our simplified scheme with n = 2, whose loss factor is 2√q for q hash queries made by the adversary. In this simplified scheme, each signature is composed of three block signatures. Before the reduction, let me give some preliminaries. We define three types of hash queries made by the adversary: type 0, type 1, and type 2. How do we distinguish them? A type-0 hash query has the message only as input, without block signatures. A type-1 hash query has as input the message concatenated with its first block signature. A type-2 hash query has as input the message concatenated with the first two block signatures. Of course, the adversary could make hash queries that fall outside these three types, but that is fine, because such queries are not used in the security reduction. These are the definitions of the three types of hash queries.
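The two loss factors quoted here are consistent with a loss of roughly n · q^(1/n). That formula is my inference from the two data points in the talk (n = 25, q = 2^50 giving 100, and n = 2 giving 2√q), not something stated on the slides.

```python
# Loss factor consistent with the two data points quoted in the talk:
#   n = 25 blocks, q = 2^50 hash queries  ->  loss 100
#   n = 2  blocks                         ->  loss 2 * sqrt(q)
# The formula n * q**(1/n) is inferred from these points, not quoted as-is.
def loss_factor(n: int, q: int) -> float:
    return n * q ** (1.0 / n)

print(round(loss_factor(25, 2**50)))    # -> 100
print(loss_factor(2, 2**50))            # 2 * sqrt(2^50) = 2^26
```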
There are four cases of hash queries that the adversary could make on a message before its signature query. In the first case, the adversary adaptively chooses a message m_1 and goes to its signature query without making any hash queries. In the second case, it chooses m_2, makes its type-0 hash query, then goes to its signature query. In the third case, it chooses m_3, makes its type-0 and type-1 hash queries, then goes to its signature query. In the last case, it chooses m_4, makes all three types of hash queries, then goes to its signature query. For the message to be forged, say m_5, the adversary must make all three types of hash queries before the signature forgery. So, for example, the adversary could make the following hash queries before its signature queries or signature forgery. "Before its signature queries or signature forgery" is very important here: it ensures that all these hash-query inputs are computed by the adversary itself. Our scheme is constructed this way on purpose: we want to force the adversary, for the same message, to make its type-0 hash query first, then its type-1 hash query, then its type-2 hash query, because a type-1 hash query requires first computing the block signature on the type-0 hash value, and a type-2 hash query requires first computing the block signature on the type-1 hash value. This forces the adversary to make its hash queries in this sequential way. In the security reduction, the numbers of hash queries made by the adversary satisfy the following ranges.
First, the number of type-2 hash queries is at least one, because we assume the adversary can forge a signature, so it must make at least one type-2 hash query; and since we define the total number of hash queries made by the adversary as q, the number of type-2 queries is less than q. But the most important point is that the number of type-1 hash queries is unknown: it is adaptively decided by the adversary. Now to the core of our reduction. The public key is g^a; we reduce to the CDH assumption, where a comes from the CDH instance, so α is set to a. In the reduction, only one hash query will be answered with g^b, and most importantly, this query is not randomly chosen; I will explain how it is chosen. All other hash queries are answered with g^z for a z known to and chosen by the simulator, so that the corresponding block signature is simulatable, i.e., computable. In this reduction, the simulator will use a type-1 or a type-2 hash query made by the adversary to find the solution to the CDH problem, and which type it uses depends on how the reduction is programmed. But I want to emphasize that, whether it is a type-1 or a type-2 hash query, it need not be related to the forged signature or to the message m* to be forged. In detail, if a type-0 hash query for a message m is answered with g^b, then a type-1 hash query for the same message contains the CDH solution, because the type-1 hash query contains the first block signature, and that block signature is the CDH solution. Similarly,
if a type-1 hash query for a message m is answered with g^b, then a type-2 hash query for the same message contains the CDH solution, because the type-2 hash query contains two block signatures, and the second block signature is the solution to the CDH problem. That is the high-level description of our security reduction; let's have a detailed look. Case 1: suppose the adversary can forge a signature, and suppose we know it will make at most √q type-1 hash queries. The simulator first randomly chooses k from the set {1, 2, ..., √q}, kept secret from the adversary. It then waits for the k-th type-1 hash query from the adversary and answers that query with g^b. Because every type-2 hash query must be made after the type-1 hash query on the same message, and there are at most √q type-1 queries, any given type-2 hash query is for the chosen message with probability 1/√q. By the forgery assumption, the adversary must make at least one type-2 hash query for the signature forgery. So a hash query contains the CDH solution with probability 1/√q. Case 2: suppose the adversary can forge a signature, and suppose we know it will make more than √q type-1 hash queries. For this case, we need to change the reduction: let k be randomly chosen from the set {1, 2, ..., q}, again secret from the adversary. The simulator then waits for the k-th type-0 hash query from the adversary and answers it with g^b. Similarly, every type-1 query must be made after the type-0 query on the same message, and there are at most q type-0 queries.
So any given type-1 query is for the chosen message with probability 1/q. By the case assumption, more than √q type-1 queries will be made by the adversary, so one of the type-1 hash queries is for the chosen message with probability equal to the number of type-1 queries times 1/q, which is more than √q/q = 1/√q. In this case too, a hash query contains the CDH solution with probability at least 1/√q. To summarize: suppose the adversary can forge a signature. If we knew whether it makes at most or more than √q type-1 hash queries, we could program the security reduction differently for each case, so that the success probability is always at least 1/√q. We do not know how many type-1 hash queries will be made, because that is adaptively decided by the adversary, but we can guess the correct range with probability 1/2. Therefore, the final success probability is at least 1/(2√q); this is the minimum success probability, and it completes the description of the security reduction. So, what is the gap between the impossibility and the possibility? First, I would like to emphasize that the impossibility proofs in the previous three works are not wrong. They silently assume that all hash queries are efficiently computable; and because all hash queries are efficiently computable, only the forged signature can be reduced to solving a hard problem. In our example, we define some very special hash queries, the type-1 and type-2 hash queries. These hash queries are not efficiently computable, because they contain block signatures, and a block signature is infeasible for the adversary to compute without the secret key. We then use those hash queries to solve the underlying hard problem.
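The case analysis above can be checked numerically. This sketch is my own restatement with illustrative numbers: it computes the simulator's success probability as a function of the adversary's type-1 query count, and confirms the claimed minimum of 1/(2√q).

```python
import math

# Numeric restatement of the two-case analysis (illustrative, not the proof).
# q:  total number of hash queries
# t1: number of type-1 hash queries (adaptively chosen by the adversary)
def success_prob(q: int, t1: int) -> float:
    r = math.isqrt(q)
    case1 = 1 / r if t1 <= r else 0.0   # embed g^b in a random type-1 query
    case2 = t1 / q if t1 > r else 0.0   # embed g^b in a random type-0 query
    # The simulator guesses the correct case with probability 1/2:
    return 0.5 * max(case1, case2)

q = 2**20
bound = 1 / (2 * math.isqrt(q))         # claimed minimum: 1/(2 * sqrt(q))
assert all(success_prob(q, t1) >= bound for t1 in (1, 1024, 1025, 2**15, q - 1))
```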
This is the gap between the impossibility and the possibility. Our construction also implies a generic transformation: we can transform a signature scheme with a loose reduction into a signature scheme with a tighter reduction. Suppose σ_m comes from a scheme with a loose reduction; we can construct a new scheme with a tighter reduction by following the proposed block-signature structure. The new scheme has a tighter reduction in the random oracle model when the reduction is programmed so that one of the block signatures is reducible, following our approach, and all the other block signatures are simulatable. In this way we obtain a tighter reduction, but the condition is that the proof must be in the random oracle model. To summarize: we propose the first unique signature scheme with a tight reduction. This work bypasses the impossibility results given at those three conferences. The construction also implies a generic approach to tighter reductions without a random salt in the signature generation; the condition is that our construction and this transformation must be in the random oracle model. A brief story about this work: I too believed the impossibility results before this work. The idea of query-based reduction came to my mind at 3 a.m. in July of last year, but this kind of counterexample turned out to be very hard to construct, and our first scheme was only successfully constructed in December last year. So it took us a very long time to find this counterexample, although it seems easy to me now. That is also why we spent a lot of time finding the reason why we can bypass these impossibility results.
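The generic transform can be sketched as follows, with any deterministic base signing function standing in for the loose-reduction scheme. This is illustrative code, not the paper's formal construction; the HMAC "base scheme" and all names are my own stand-ins.

```python
import hashlib
import hmac

# Sketch of the generic transform: a signature on m consists of n + 1 block
# signatures, where block i signs m concatenated with all previous blocks.
# base_sign stands in for the loose-reduction scheme's (deterministic) signer.
def chained_sign(base_sign, m: bytes, n: int) -> list:
    blocks, transcript = [], m
    for _ in range(n + 1):
        block = base_sign(transcript)   # block signature on m || earlier blocks
        blocks.append(block)
        transcript += block
    return blocks

# Illustrative deterministic "base scheme" (an HMAC, not a real signature):
key = b"demo-secret-key"
base_sign = lambda data: hmac.new(key, data, hashlib.sha256).digest()
sig = chained_sign(base_sign, b"message", n=2)   # three block signatures
```

In the reduction sketched earlier, one of these blocks is programmed to be reducible and the rest simulatable, which is where the tighter loss factor comes from.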
Finally, we would like to thank Yannick for being the shepherd of this work, Deepo for helping identify the gap between the impossibility and the possibility, and the reviewers of Crypto 2017. That's all. Thanks.