And the next talk will be given online. It's about optimal tightness for chain-based unique signatures, and the speaker is Fuchun Guo.

Hello everyone, my name is Fuchun Guo, and I'm presenting from Australia. The title of our paper is Optimal Tightness for Chain-Based Unique Signatures. This is joint work with Willy Susilo; we are both from the University of Wollongong. In this work, we focus on this question: how to program a tight reduction for a signature scheme when signatures are unique, meaning generated without using random numbers.

Security reduction is a popular method for security proofs. In a security reduction, we suppose there is an adversary who can break a scheme, and we reduce breaking the scheme to solving a hard problem. A security reduction is tight if, whenever an adversary can break the scheme with time cost t and probability ε, we can solve the hard problem with a very close time cost and probability. A tight reduction guarantees that breaking the scheme is as difficult as solving the hard problem.

Digital signatures are fundamental primitives in modern cryptography, and how to construct a signature scheme with a tight reduction has received a lot of attention in the literature. Our community has invented many intelligent methods for tight security proofs. With these efforts, it is no longer hard to obtain a tight reduction if we work in a weak security model, or under an interactive hardness assumption, or against adversaries restricted in their computations. The research question is therefore narrowed down to how to obtain a tight reduction under a non-interactive assumption, in the standard security model, against general adversaries. Nowadays, it is also not hard to achieve tight reductions with these three factors, as long as signatures are randomized. In the standard security model, we must simulate some signatures for the adversary before we receive a forged signature.
The high-level idea of tight reductions using the randomized approach works as follows. Since there are at least two valid signatures for each message, we program the simulation so that some of the signatures are simulatable, meaning they can be simulated, and the other signatures are reducible, meaning they can be reduced to solving the hard problem. Then, when the adversary queries the signature of a message m_i, we return a simulatable signature for it. There is then no abort during the signature queries, and the forged signature is reducible with high probability.

The consequent question is how to obtain a tight reduction with these three factors, but without using the randomized approach. This question is interesting because unique signatures are special signatures where each message has only one valid signature, such as the well-known BLS signature scheme proposed twenty years ago. The randomized approach cannot be applied to obtain tight reductions for unique signatures, because there is only one valid signature for each message. The consequent question is: is it possible or not to have a tight reduction for a unique signature scheme?

Well, it seems impossible if all simulations share the following common feature: the adversary can choose some messages and make hash queries before its signature queries and forgery, such that the signature on each message is either simulatable or reducible, and this result cannot be changed by the simulator. It is impossible because the adversary can attack in this way. The adversary first picks q+1 messages and makes queries such that every signature is either simulatable or reducible. Then the adversary picks q of these messages at random for signature queries and forges the signature on the last message, denoted by m*. If two or more signatures are reducible, then the simulation will definitely fail, because all q queried signatures must be simulatable.
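As a concrete illustration of this bookkeeping, here is a minimal Python sketch of the randomized idea in the style of Katz–Wang. This is a hypothetical toy, not the construction discussed in the talk: each message has two valid signatures (one per bit), the simulator always answers queries with the simulatable one, and a forgery on a fresh message hits the reducible one with probability one half.

```python
import random

# Hypothetical sketch: each message has two valid signatures (bit b = 0 or 1).
# The simulator secretly marks, per message, one bit as 'simulatable' and the
# other as 'reducible'. Signature queries are answered with the simulatable
# one, so the reduction never aborts; a forgery is reducible with prob. 1/2.
random.seed(1)
simulatable_bit = {}          # message -> the bit the simulator can produce

def answer_query(m):
    b = simulatable_bit.setdefault(m, random.randrange(2))
    return ("sig", m, b)      # always answerable: no abort

def forgery_is_reducible(m, b):
    # The adversary never learned which bit is simulatable for a fresh m.
    return simulatable_bit.setdefault(m, random.randrange(2)) != b

# Every signature query succeeds:
for i in range(1000):
    answer_query(f"m{i}")

# A forgery on a fresh message lands on the reducible bit about half the time:
hits = sum(forgery_is_reducible(f"f{i}", 0) for i in range(10000))
assert 0.45 < hits / 10000 < 0.55
```

The point of the sketch is only the partition: because the simulatable/reducible split is hidden from the adversary, no signature query aborts, and the loss is a constant factor of two. For unique signatures this partition is impossible, which is exactly the obstacle the talk addresses.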
If only one signature is reducible, the success probability is at most 1/q, because of the random choice made by the adversary. There have been many excellent proofs using meta-reductions showing that unique signatures, and their generalizations, cannot have success probability more than 1/q in the standard security model. Yet it is still possible to achieve a tight reduction: the first author, together with four co-authors, showed how to do this at CRYPTO 2017.

The proposed signature scheme is called the chain-based construction. Each unique signature is composed of n block signatures, denoted Σ_1 to Σ_n. The key point is that the signing structure is like a blockchain: (m, Σ_0) is treated as the message and signed to obtain Σ_1, then (m, Σ_1) is treated as the message and signed again to obtain Σ_2, and so on.

In the security reduction for this scheme, an adversary can still choose messages and make hash queries such that the signature on each message is either simulatable or reducible. But the simulator can solve the hard problem from the hash queries in the random oracle model. So how can the simulator solve the hard problem from hash queries?

In the security proof, each signature requires n different hash queries, called type-0, type-1, up to type-(n−1) queries. Before the signature query on a message m, the adversary should make the type-0 query first, then type-1, then type-2, sequentially. This is because of the chain structure: the adversary must compute Σ_1, ..., Σ_i by itself for the type-i query, which is computationally hard without knowing the secret key. So, given a CDH instance (g, g^a, g^b), the simulator can set the secret key equal to a and program the response to a type-(i−1) query with g^b. Then the type-i query from the adversary will contain Σ_i = g^{ab}, which is the solution to the CDH problem.
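The chain structure can be sketched in a few lines of Python. This is a hypothetical toy: a small exponentiation group stands in for the real pairing group, and verification is omitted. It only illustrates that each block signs the message together with all previous blocks, and that signing uses no randomness, so each message has a unique signature.

```python
import hashlib

# Toy sketch of the chain-based construction (illustrative parameters only,
# far too small to be secure; the real scheme works in a pairing group).
P = 2**61 - 1   # prime modulus of the toy group
G = 3           # toy generator

def H(m, blocks):
    """Random-oracle-style hash onto the group; a 'type-i query' is the
    hash of the message together with the first i blocks."""
    data = m.encode() + b"".join(b.to_bytes(8, "big") for b in blocks)
    e = int.from_bytes(hashlib.sha256(data).digest(), "big") % (P - 1)
    return pow(G, e, P)

def sign(a, m, n):
    """n-block chain signature: Sigma_{i+1} = H(m, Sigma_1..Sigma_i)^a.
    No randomness is used, so the signature of each message is unique."""
    blocks = []
    for _ in range(n):
        blocks.append(pow(H(m, blocks), a, P))
    return blocks

sig = sign(12345, "hello", 4)
assert sig == sign(12345, "hello", 4)    # deterministic: unique signature
assert sig[2] == pow(H("hello", sig[:2]), 12345, P)  # each block chains on all previous
```

The final assertion shows why the hash queries must come in order: to even form the type-i query, the adversary needs the first i blocks, which it cannot compute without the secret key.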
The challenge for a tight reduction is that we do not know how many hash queries on each message the adversary will make before its signature query. The number k_i for message m_i is decided by the adversary. Of course, if the adversary would like to forge the signature on message m*, it must make all n queries on m*.

An important finding in CRYPTO 2017 is the following. Define q_i to be the number of all type-i queries generated by the adversary. No matter how the adversary queries, there must exist a special integer i* such that q_{i*} and q_{i*+1} are very close: the ratio q_{i*+1}/q_{i*} is at least (1/q_H)^{1/n}, where q_H is the number of hash queries. With this important finding, the simulator can choose one of the type-i* queries and respond to it with g^b, and then the CDH solution will appear in one of the type-(i*+1) queries with probability q_{i*+1}/q_{i*}, which is at least (1/q_H)^{1/n}.

Currently, there is only one method for tightly proving chain-based unique signatures, which appeared in CRYPTO 2017, and its reduction loss is n·q_H^{1/n}, which is not optimal. This brings us to the contributions of this work: we first show that the optimal loss is q^{1/n}, where q is the number of signature queries, and then we show how to obtain such an optimal reduction.

Let me introduce the second contribution first. Our proof for the chain-based BLS scheme works as follows, without changing the scheme. Given the CDH instance, we set the secret key equal to a. The key point is that, for each message, we choose an integer c from the range {0, 1, ..., n−1}, and we do not choose it uniformly. We plan to program the response to the type-c query as the challenge query, whose response contains g^b; all other queries are normal queries, without g^b in the response. Now suppose the adversary makes type-0, type-1, up to type-k queries on a message m before its signature query. We have the following cases.
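The counting argument behind this finding can be checked numerically. The sketch below uses simplified indexing (n+1 query-type counts q_0..q_n with q_0 ≤ q_H and q_n ≥ 1, an assumption for illustration) and picks the index i* with the largest ratio q_{i*+1}/q_{i*}; since the product of the ratios equals q_n/q_0 ≥ 1/q_H, the largest ratio is at least its geometric mean, (1/q_H)^{1/n}.

```python
import random

def best_index(counts, q_H):
    """Given type-i query counts q_0..q_n (with q_0 <= q_H and q_n >= 1),
    return the index i* maximizing q_{i*+1}/q_{i*}. By the geometric-mean
    argument, this maximum ratio is always at least q_H ** (-1/n)."""
    ratios = [counts[i + 1] / counts[i] for i in range(len(counts) - 1)]
    i_star = max(range(len(ratios)), key=lambda i: ratios[i])
    return i_star, ratios[i_star]

# Try many adversarial query patterns and confirm the bound never fails.
random.seed(0)
q_H, n = 10**6, 8
for _ in range(1000):
    counts = [random.randint(1, q_H) for _ in range(n + 1)]
    i_star, r = best_index(counts, q_H)
    assert r >= q_H ** (-1 / n)
```

This is why the CRYPTO 2017 simulator can embed g^b into a random type-i* query and still see the CDH solution among the type-(i*+1) queries with probability at least (1/q_H)^{1/n}.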
If k < c, the adversary has not yet made the type-c query on this message when it asks for the signature; we can then treat all of its queries as normal queries, and we can simulate the signature. Nothing changes here. If k = c, we have to abort, because we cannot simulate the signature. But if k > c, then the hard-problem solution has already appeared in the type-(c+1) query.

Consider the simple case where the adversary makes one signature query before the forgery. The success probability is equal to the probability of solving the hard problem before the signature query, plus the probability of solving it from the forged signature when there is no abort during the signature query phase. So let P_{S,1} denote the probability of solving the problem before the signature query, and P_{F,1} denote the failure probability due to the signature query.

The key question is how to keep the success probability high. This slide is the most important one, showing the key solution in our work; here Pr denotes probability. If P_{S,*} is close to, but no more than, 1/2, and the gap between P_{F,1} and P_{S,1} is constant and small, for example as small as 1/q, then after one signature query the success probability is only slightly reduced, by a factor of 1 − 1/q. We can extend this to q signature queries, and the loss is eventually a small constant.

But how to achieve this? We found it can be achieved with a geometric progression. The embedding probabilities form the geometric sequence 1/2^{n+1}, 1/2^n, ..., 1/2^2. The failure probability P_{F,i} equals one term of this sequence, and the success probability P_{S,i} equals the sum of all the terms to its left; so no matter which term it is, the gap between P_{F,i} and P_{S,i} is always exactly 1/2^{n+1}, and we can set this very close to 1/q. The sum of all the terms is close to, and no more than, 1/2.

So our proof works as follows: for each message, there will be n types of hash queries.
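The geometric progression can be verified with exact arithmetic. The indexing below (Pr[c = j] = 2^j / 2^{n+1} for j in {0, ..., n−1}) is one consistent reading of the slide, chosen to match the stated properties: the gap is exactly 1/2^{n+1} for every stopping point k, and the total probability is just below 1/2.

```python
from fractions import Fraction

def embedding_probs(n):
    """Non-uniform choice of the challenge index c in {0, ..., n-1}:
    Pr[c = j] = 2^j / 2^(n+1), i.e. the sequence 1/2^(n+1), ..., 1/4."""
    return [Fraction(2**j, 2**(n + 1)) for j in range(n)]

n = 10
p = embedding_probs(n)

# Abort probability when the adversary stops after k query types: P_F = Pr[c = k].
# Already-solved probability: P_S = Pr[c < k] = sum of the terms to the left.
for k in range(n):
    P_F = p[k]
    P_S = sum(p[:k])
    assert P_F - P_S == Fraction(1, 2**(n + 1))   # gap is the same for every k

# For the forgery, the adversary makes all n query types:
assert sum(p) == Fraction(2**n - 1, 2**(n + 1))   # close to, but below, 1/2
```

The constant gap is the whole trick: however adaptively the adversary picks its stopping point k_i, each signature query costs the reduction the same tiny amount, 1/2^{n+1}.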
So we choose the special index c non-uniformly, and g^b will be embedded in the response to the type-c query with a probability ranging from 1/2^{n+1} to 1/2^2, depending on c. Suppose the adversary makes type-0, type-1, up to type-k_i queries on message m_i before its signature query; then we have the following results. We have these important probabilities: P_{F,i} is the probability that g^b is embedded in the response to the type-k_i query on message m_i, and P_{S,i} is the probability that g^b is embedded in the response to some type-c query with c less than k_i. Here k_i is adaptively chosen by the adversary, but no matter what k_i is, the gap between these two probabilities is always equal to 1/2^{n+1}. And for the forgery, the adversary must make all n hash queries, so P_{S,*} is very close to 1/2. This is the main idea of our tight reduction.

With the above approach, we can prove that the chain-based BLS scheme has reduction loss 4·q^{1/n}, and this loss is constant and small when n is log q.

Next, we show that this reduction loss must be at least q^{1/n}. We use the meta-reduction framework by Coron to analyze the optimal loss. We first construct a special hypothetical adversary. Then we need to simulate this hypothetical adversary via rewinding. If we can efficiently simulate this adversary with error probability ε_E, and a reduction R breaks the hardness assumption with probability ε_R, then we can break the hardness assumption with probability ε_R − ε_E. The meta-reduction shows that ε_R cannot be larger than ε_E; otherwise, we could run R as an oracle to break the hardness assumption. The challenge in proving this optimal bound is how to construct the special hypothetical adversary, and how to simulate it with error probability equal to the designated value.
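To get a feel for the two reduction losses, here is a small numeric comparison. The sample parameters (q = 2^30 signature queries, q_H = 2^50 hash queries, n ≈ log q) are illustrative choices, not values from the talk.

```python
import math

def loss_crypto17(n, q_H):
    """Reduction loss n * q_H^(1/n) of the CRYPTO 2017 proof."""
    return n * q_H ** (1 / n)

def loss_optimal(n, q, factor=4):
    """Loss 4 * q^(1/n) achieved in this work; q^(1/n) is proved optimal."""
    return factor * q ** (1 / n)

q, q_H = 2**30, 2**50        # hash queries q_H are typically far more numerous
n = round(math.log2(q))      # n ~ log q makes the new loss a small constant
print(loss_crypto17(n, q_H)) # about 95 for these parameters
print(loss_optimal(n, q))    # about 8: 4 * 2, since q^(1/log q) = 2
```

Both losses shrink as n grows, but the new bound depends only on the number of signature queries q, not on the much larger number of hash queries q_H.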
So this is the hypothetical adversary, which attacks as follows. A set of messages M_0 is chosen; then subsets of these messages M_1, M_2, and so on are chosen, where each M_i contains a designated number of messages. This setting of the message-set sizes is to make the error probability the same, no matter how R programs its reduction.

The hypothetical adversary first makes all type-0 queries on the messages in M_0. It will later make all type-1 queries on the messages in M_1, but before this, it makes signature queries on the messages in M_0 excluding M_1. The final type-n query actually contains a signature on the message m*; because there is no signature query on m*, the adversary can return this as the forgery.

To simulate such a hypothetical adversary, the main difficulty is how to simulate the type-i queries, because these have to contain some block signatures, and those are hard to compute without having the secret key. This problem is solved with rewinding, and it requires at most n rewinds. Take the type-1 queries on M_1 as an example. We save the state of R just after it has received all the type-0 queries. Before the rewind, we make signature queries on the messages in M_1. Then we rewind R to the state after the type-0 queries, and make all signature queries on the messages in M_0 excluding M_1. If R does not abort, we will be able to use the signatures obtained before the rewind to simulate the type-1 queries on M_1.

The last step is to calculate the error probability. An error occurs when there is an integer i′ such that, before the rewind, R cannot respond to the queries, meaning the simulated adversary does not have the signatures, but after the rewind R can respond to these queries, meaning the simulated adversary has to continue making the type-i′ queries on the messages in M_{i′}. The error probability is the probability of these two events.
When i′ is equal to 1, we calculate the error probability based on the setting of how many signatures are reducible in M_0. And no matter what n is, either there is no error, or the error is less than 1/q^{1/n}. Based on this, we can calculate that the final error probability is equal to 1/q^{1/n}, which gives the optimal loss. This is the high-level idea of our analysis.

To conclude: a tight reduction for unique signatures is non-trivial under a non-interactive hardness assumption, in the standard security model, against general adversaries. Currently, the only known tight reduction for the chain-based construction has reduction loss n·q_H^{1/n}, and we prove that the optimal loss is actually q^{1/n}. We also show how to obtain such an optimal reduction with a completely different approach.

We would like to thank [name unclear in the recording] for insightful discussions on the first version of this work in 2020, and we would also like to thank the anonymous reviewers from EUROCRYPT 2021, EUROCRYPT 2022, and CRYPTO 2021 for their very important and useful comments. Thank you.

So if there are any questions, we'll try to ask them remotely, and let's hope everything will work on the technical side. Are there any questions? No questions. Well, if not, then let's thank the speaker again.