Hello everyone, my name is Fuchun Guo, and I'm presenting from Australia. The title of our paper is Optimal Tightness for Chain-Based Unique Signatures. This is joint work with Willy Susilo; we are both from the University of Wollongong. In this work, we focus on this question: how to program a tight reduction for a signature scheme when signatures are unique, meaning generated without random numbers. A security reduction is a popular method for security proofs. A security reduction supposes there exists an adversary who can break the scheme, and reduces breaking the scheme to solving a hard problem. A security reduction is tight if, when the adversary can break the scheme with time cost T and probability epsilon, we can solve the hard problem with a very close time cost and probability. A tight reduction guarantees that breaking the scheme is as difficult as solving the hard problem. Digital signatures are fundamental primitives in modern cryptography, and how to construct a signature scheme with a tight reduction has received lots of attention in the literature. Our community has invented many intelligent methods for tight security proofs. With these efforts, it is no longer hard to obtain a tight reduction if the security model is weak, if the hardness assumption is interactive, or if the adversary is restricted in its computations. The research question is therefore narrowed down to how to have a tight reduction under a non-interactive assumption, in the standard security model, against general adversaries. Nowadays, it is also not hard to achieve a tight reduction with these three factors, as long as signatures are randomized. In the standard security model, we must simulate some signatures for the adversary before we receive a forged signature. The high-level idea of a tight reduction using the randomized approach works because there are at least two valid signatures for each message. We program the simulation in the way shown on the left of the slide.
One signature is simulatable, meaning it can be simulated, and the other signatures are reducible, meaning they can be reduced to solving the hard problem. Then, when the adversary queries the signature of a message m_i, we return a simulatable signature for it. There is no abort in the signature queries, and the forged signature is reducible with high probability. The consequent question is how to have a tight reduction with these three factors, but without using the randomized approach. This question is interesting because unique signatures are special signatures where each message has only one valid signature, such as the well-known BLS signature scheme proposed 20 years ago. The randomized approach cannot be applied to obtain a tight reduction for unique signatures, because there is only one valid signature for each message. The consequent question is: is it possible or not to have a tight reduction for a unique signature scheme? Well, it seems impossible if all simulations have the following common feature: the adversary can choose some messages and make hash queries, before the signature queries and the signature forgery, such that the signature on each message is fixed as either simulatable or reducible, and this outcome cannot be changed by the simulator. It is impossible because the adversary can attack in this way. The adversary first picks messages and makes hash queries, so that every signature is fixed as either simulatable or reducible. Then the adversary picks q random ones of them for signature queries, and forges the signature on the last message, denoted by m*. If two or more signatures are reducible, the simulation will definitely fail, because all q queried signatures must be simulatable. And if there is only one reducible signature, the success probability is at most about 1/q, because of the random choice made by the adversary.
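The roughly-1/q bound from this generic attack can be checked with a quick Monte Carlo sketch. This is a toy model of my own, not from the paper: among q+1 hash-queried messages exactly one is reducible, and the adversary forges on a uniformly random unqueried one.

```python
import random

def attack_success_rate(q, trials, seed=0):
    """Toy model of the generic attack against unique signatures.
    Among q+1 messages whose signatures are fixed in advance, exactly
    one is 'reducible'. The adversary queries q uniformly random
    messages and forges on the remaining one. The reduction succeeds
    only if every queried message is simulatable AND the forged message
    is reducible, i.e. the reducible message is exactly the unqueried one."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        reducible = rng.randrange(q + 1)   # which message the reduction made reducible
        forged = rng.randrange(q + 1)      # which message the adversary leaves unqueried
        if reducible == forged:
            wins += 1
    return wins / trials
```

Running `attack_success_rate(9, 200_000)` gives an estimate close to 1/(q+1) = 0.1, matching the intuition that the reduction's success probability is bounded by roughly 1/q.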
There have been many excellent proofs, via meta-reductions, showing that unique signatures and their generalizations cannot have success probability more than roughly 1/q in the standard security model. On the other hand, it is still possible to achieve a tighter reduction: a Crypto 17 paper showed how to do this. The proposed signature scheme is called the chain-based construction. Each unique signature is composed of n block signatures, denoted sigma_1 to sigma_n. The key point is the signing structure, which is like a blockchain: sigma_1 is the block signature on the message m, then sigma_1 is treated as the message and signed to obtain sigma_2, sigma_2 is treated as the message and signed again to obtain sigma_3, and so on. In the security reduction for this scheme, an adversary can still choose messages and make hash queries such that the signature on each message is fixed as either simulatable or reducible. But by that point the simulator has already solved the hard problem from the hash queries, in the random oracle model. So how can the simulator solve the hard problem from hash queries? In the security proof, each signature query requires n different hash queries, called type-0, type-1, type-2, up to type-(n-1). Before the signature query on a message m, the adversary should make the type-0 query first, then type-1, then type-2, sequentially. This is because of the chain structure: the adversary must compute sigma_i by itself for the type-i query, which is sometimes computationally hard without knowing the secret key. So, given a CDH problem instance (g, g^a, g^b), the simulator can set the secret key equal to a and program the response to the type-(i-1) query as g^b. Then the type-i query from the adversary will contain sigma_i = g^{ab}, which is the solution to the CDH problem. The challenge for a tight reduction is that we don't know how many of these queries the adversary will make for each message before its signature query.
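The chain structure can be illustrated with a toy deterministic scheme. This sketch substitutes textbook RSA-FDH with tiny, insecure parameters for BLS (an assumption for illustration only; the paper's construction uses BLS block signatures): each block deterministically signs the previous block, so the whole chain is unique per message.

```python
import hashlib

# Toy textbook-RSA parameters (tiny and insecure -- structure only).
P, Q = 1009, 1013
N = P * Q
PHI = (P - 1) * (Q - 1)
E = 65537
D = pow(E, -1, PHI)  # private exponent

def h(data):
    """Full-domain-style hash into Z_N (toy)."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def sign_block(data):
    """Deterministic block signature: H(data)^D mod N."""
    return pow(h(data), D, N)

def to_bytes(s):
    return s.to_bytes((N.bit_length() + 7) // 8, "big")

def chain_sign(m, n):
    """Chain-based signature: sigma_1 signs m, sigma_{i+1} signs sigma_i."""
    sigmas, prev = [], m
    for _ in range(n):
        s = sign_block(prev)
        sigmas.append(s)
        prev = to_bytes(s)
    return sigmas

def chain_verify(m, sigmas):
    """Check each block: sigma_i^E mod N must equal H(previous block)."""
    prev = m
    for s in sigmas:
        if pow(s, E, N) != h(prev):
            return False
        prev = to_bytes(s)
    return True
```

Because every step is deterministic, signing the same message twice yields the identical chain, which is exactly the uniqueness property that rules out the randomized proof approach.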
The number k_i of these queries for message m_i is decided by the adversary. Of course, if the adversary would like to forge the signature on message m*, it must make all n queries on m*. Let Q_i be the number of all type-i queries generated by the adversary. An important finding in Crypto 17 is that, no matter how the adversary queries, there must exist a special integer i* such that Q_{i*} and Q_{i*+1} are very close: the ratio Q_{i*+1}/Q_{i*} is at least 1/q_H^{1/n}, where q_H is the number of hash queries. With this important finding, the simulator can choose one of the type-i* queries and respond to it with g^b, and then the CDH solution will appear in one of the type-(i*+1) queries with a good probability, shown on the slide. Currently, there is only one method for proving tightness of unique signatures, which appeared in Crypto 17, and its reduction loss is n·q_H^{1/n}, which is at best logarithmically tight. Our contributions in this work: we first show that the optimal loss is q^{1/n}, where q is the number of signature queries, and we then show how to obtain such an optimal reduction. Let me introduce the second contribution first. Our proof for the chain-based BLS scheme works as follows, without changing the scheme. Given the CDH problem instance, we set the secret key equal to a. The key point is that we non-uniformly choose an integer c from the range 0 to n-1 for each message. We then program the response to the type-c query as the changed query: the response to the type-c query contains g^b, and all other queries are normal queries without g^b in the response. Now suppose the adversary makes type-0, type-1, up to type-k queries on message m before its signature query. We have the following cases. If k is less than c, it means the adversary has not yet made the type-c query on message m, so we can change the type-c query back to a normal query, and then we can simulate the signature.
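The "special integer i*" finding is essentially a pigeonhole argument over ratios: each type-(i+1) query must be preceded by a type-i query on the same message, so the counts are non-increasing, and the product of consecutive ratios telescopes to Q_0/Q_{n-1} ≤ q_H. A sketch with illustrative numbers of my own, not from the paper:

```python
from fractions import Fraction

def special_index(counts):
    """Given non-increasing positive query counts Q_0 >= ... >= Q_{n-1},
    return the index i* minimizing the consecutive ratio Q_i / Q_{i+1}."""
    ratios = [Fraction(counts[i], counts[i + 1]) for i in range(len(counts) - 1)]
    i_star = min(range(len(ratios)), key=lambda i: ratios[i])
    return i_star, ratios[i_star]

# Illustrative counts: 1000 type-0 queries down to 100 type-4 queries.
Q = [1000, 600, 500, 120, 100]
i_star, r = special_index(Q)
# The minimum ratio, raised to the number of ratios, cannot exceed the
# telescoping product Q_0 / Q_{n-1}, which is at most q_H:
assert r ** (len(Q) - 1) <= Fraction(Q[0], Q[-1])
```

So the minimum ratio is at most (Q_0/Q_{n-1})^{1/(n-1)} ≤ q_H^{1/(n-1)}, matching the q_H^{1/n}-type bound cited in the talk (up to the exact exponent convention).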
And if k equals c, we have to abort, because we cannot simulate the signature. But if k is larger than c, then the solution to the hard problem has already appeared in the type-(c+1) queries. Consider the simple case where the adversary makes only one signature query before the forgery. The success probability equals the probability of solving the hard problem before the signature query, plus the probability of solving the hard problem from the forged signature when there is no abort in the signature query phase. Here, p_{s,1} denotes the probability of solving the hard problem before the signature query, and p_{f,1} denotes the failure probability due to the signature query. The key question is how to make the success probability high. This slide is the most important one, showing the key idea in our work. If p_{s,1} is close to, and no more than, 1/2, and the gap between p_{f,1} and p_{s,1} is a small constant, for example as small as 1/q, then after one signature query the success probability is only slightly reduced, by a factor of 1 - 1/q. We can extend this to q signature queries, and the loss is eventually near-constant and small. But how to achieve this? We found it can be achieved with a geometric progression. Suppose p_f is equal to 1/2^j, and p_s is equal to the sum of all the values to the left of 1/2^j in the progression, namely 1/2^{n+1} + ... + 1/2^{j+1}. Then, no matter what j is, the gap between p_f and p_s is always equal to 1/2^{n+1}, and we can set this value very close to 1/q. Moreover, the sum of all the values is close to, and no more than, 1/2. So our proof works as follows. For each message m, there will be n types of hash queries. We choose a specific c non-uniformly, and g^b will be embedded in the response to the type-c query, with a probability ranging from 1/2^{n+1} to 1/2^2 depending on c.
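The geometric-progression trick can be verified exactly. Below is a sketch under my own reading of the distribution (an assumption consistent with the values 1/2^{n+1} up to 1/2^2 quoted above): c is the index where g^b is embedded, with P[c = j] = 1/2^{n+1-j} for j = 0..n-1. For every possible adversary choice k, the gap between the abort probability P[c = k] and the solving probability P[c < k] is exactly 1/2^{n+1}, and the total probability stays below 1/2.

```python
from fractions import Fraction

def embedding_distribution(n):
    """Non-uniform choice of c in {0, ..., n-1}: P[c = j] = 1/2^(n+1-j)."""
    return [Fraction(1, 2 ** (n + 1 - j)) for j in range(n)]

def abort_and_solve_probs(n, k):
    """For an adversary that makes type-0..type-k queries before its
    signature query: the reduction aborts iff c == k, and the hard
    problem is already solved (before the signature query) iff c < k."""
    p = embedding_distribution(n)
    p_abort = p[k]          # p_f: g^b was embedded exactly at type-k
    p_solve = sum(p[:k])    # p_s: g^b was embedded at some type c < k
    return p_abort, p_solve

n = 8
for k in range(n):
    p_f, p_s = abort_and_solve_probs(n, k)
    assert p_f - p_s == Fraction(1, 2 ** (n + 1))  # gap independent of k
assert sum(embedding_distribution(n)) < Fraction(1, 2)
```

The point of using exact fractions is that the gap 1/2^{n+1} holds identically for every k, no matter how adaptively the adversary picks k.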
Suppose the adversary makes type-0, type-1, up to type-k queries on message m before its signature query; then we have the different outcomes introduced previously. So we have these important probabilities. We have p_{f,i} when g^b is embedded in the response to the type-k_i query on message m_i, and we have p_{s,i} when g^b is embedded in the response to any type-c_i query with c_i less than k_i. The value k_i is adaptively chosen by the adversary, but no matter what k_i is, the gap between these two probabilities is always equal to 1/2^{n+1}. And for the forged signature, the adversary must make all n queries, so p_s is very close to 1/2. This is the main idea of our tight reduction. With the above approach, we can prove that the chain-based BLS scheme has reduction loss 4·q^{1/n}, and this loss is constant and small when n is about log q. Next, we show that this kind of reduction loss must be at least q^{1/n}. We use the meta-reduction framework, due to Coron, to analyze the optimal loss. We first construct a special hypothetical adversary. Then we need to simulate this hypothetical adversary via rewinding. If the reduction R solves the hard problem with probability epsilon_R given such an adversary, and we can efficiently simulate the adversary with error probability epsilon_e, then we can break the hardness assumption with probability epsilon_R - epsilon_e. The meta-reduction shows that epsilon_R cannot be larger than epsilon_e; otherwise, we could run R as an oracle to break the hardness assumption. The challenge of this optimality analysis is how to construct the special hypothetical adversary, and how to simulate it with error probability equal to a designated value. We consider a hypothetical adversary attacking as follows. A set of messages M_0 is chosen, and a random subset M_1 of it is also chosen, with the size of M_1 satisfying the condition on the slide.
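The claim above that the loss 4·q^{1/n} is a small constant when n is about log q can be checked numerically: with n = log2 q we get q^{1/n} = 2 exactly, so the loss is 8. A sketch (the factor 4 is the constant quoted in the talk):

```python
import math

def reduction_loss(q, n):
    """Reduction loss 4 * q^(1/n) of the tight reduction in the talk,
    for q signature queries and n block signatures per chain."""
    return 4 * q ** (1 / n)

# Choosing n = log2(q) block signatures makes the loss the constant 8,
# independent of the number of signature queries q.
for q in (2 ** 10, 2 ** 20, 2 ** 30):
    n = int(math.log2(q))
    assert abs(reduction_loss(q, n) - 8) < 1e-9
```

By contrast, the Crypto 17 loss n·q_H^{1/n} still carries the factor n ≈ log q_H, which is why it is only logarithmically tight.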
This setting of the message set sizes is to make the error probability the same no matter how R programs the reduction. The hypothetical adversary first makes all type-0 queries on the messages in M_0. It will later make all type-1 queries on the messages in M_1, but before this, it makes signature queries on the messages in M_0 excluding M_1. The last set of type queries is shown here, and it actually contains the signature on the message m*. Because there is no signature query on m*, the adversary can return this as the forged signature. To simulate such a hypothetical adversary, the main difficulty is how to simulate the type-i queries shown in red, because these hash queries contain block signatures, which are hard to compute without having the secret key. This problem is solved with rewinding, and it requires at most n rewinds. Taking the type-1 queries on M_1 as an example: after the state at which we receive the responses to the type-0 hash queries, and before the rewind, we make signature queries on the messages in M_1. Then we rewind to the state right after the type-0 queries, and make signature queries on the messages in M_0 excluding M_1. If these are not aborted by R, then we are able to use the signatures obtained before the rewind to simulate the type-1 queries on M_1. The last step is to calculate the error probability. An error occurs when there is an integer i' such that, before the rewind, R cannot respond to the queries, meaning the simulated adversary does not have the signatures, while after the rewind R can respond to these queries, meaning the simulated adversary should have continued to make the following type-i' queries on the messages in M_{i'}. The error probability equals the probability of these two events, shown here for the case i' = 1. We calculate the error probability based on the setting of how many signatures are reducible in M_0. No matter what this setting is, either there is no error, or the error probability is less than 1/q^{1/n}.
Based on these results, we can calculate that the final error probability is equal to the value shown, which matches the optimal loss. And this is the high-level idea of our analysis. To conclude: how to program a tight reduction when signatures are unique is non-trivial under a non-interactive hardness assumption, in the standard security model, against general adversaries. Previously, the only known tighter reduction for the chain-based construction had reduction loss n·q_H^{1/n}. We prove that the optimal loss is actually q^{1/n}, and we show how to obtain such an optimal reduction with a completely different approach. We would like to thank those who gave insightful discussions on the first version of this work in 2020, and we would also like to thank the anonymous reviewers of Eurocrypt 21, Eurocrypt 22, and Crypto 21 for their very important and useful comments. Thank you.