Hello everyone, I am Federico Pintore and I am going to present a joint work with Ali El Kaafarani and Shuichi Katsumata, in which we propose Lossy CSI-FiSh, an efficient signature scheme with a tight reduction to decisional CSIDH-512. The title of the talk illustrates our contribution: we introduce a new signature scheme, Lossy CSI-FiSh, which is secure under the assumption that a decisional problem over the CSIDH-512 parameters is hard. The scheme is secure both in the classical random oracle model and in the quantum random oracle model, but its most relevant feature is that Lossy CSI-FiSh achieves these properties while being almost as efficient as CSI-FiSh: the signature size is essentially the same, public keys are at most twice as large as those of CSI-FiSh, and signing and verification are at most twice as slow. In the rest of the talk I will try to give an overview of how Lossy CSI-FiSh achieves the properties I have just listed. Spoiler: by introducing a new lossy identification protocol. Let us start with a brief overview of the relationship between isogeny-based cryptography and digital signatures. This relationship was established only recently, because isogeny problems are quite elusive to use for constructing digital signatures. The 2011 paper by Jao and De Feo, which introduced the SIDH protocol and coincides with the boom of isogeny-based cryptography, contains a key-exchange protocol but no mention of digital signatures. The first isogeny-based signatures were proposed six years later, in 2017, but even the best optimized variant produces signatures no smaller than 12 kilobytes. Then CSIDH was proposed and, a bit later, De Feo and Galbraith published SeaSign, based on the CSIDH paradigm. Despite much shorter signatures, signature generation and verification are quite slow. Finally, last year, a CSIDH-based scheme called CSI-FiSh was proposed.
CSI-FiSh achieves practical efficiency in both signing and verification, while retaining the short signatures offered by SeaSign. In light of the above, we can safely state that CSI-FiSh is the first practical isogeny-based signature scheme, but this recently established idyll between isogenies and signatures is fragile. Indeed, CSI-FiSh is specific to one set of CSIDH parameters, namely the CSIDH-512 parameters. Therefore, CSI-FiSh can claim at most the security provided by the underlying mathematical hard problem, the Group Action Inverse Problem (GAIP), over the CSIDH-512 parameters. GAIP is believed to offer 128 bits of classical security and at most 64 bits of quantum security. However, CSI-FiSh is based on the Fiat-Shamir paradigm and, like every Fiat-Shamir signature, it has a very lossy reduction, both in the classical random oracle model and in the quantum one. To be more precise, if lambda denotes the bits of security of the underlying hard problem, then the digital signature cannot claim more than lambda minus log2 of the number of queries to the random oracle, all divided by 2, bits of security. Therefore, a hard problem offering 128 bits of classical security cannot guarantee, against an adversary making 2^40 random oracle queries, more than 44 bits of security for the digital signature. In the quantum random oracle model, the situation is even worse. Normally, the reduction loss is absorbed by the hard problem by increasing the parameters, but in this case the situation is different, because CSI-FiSh relies on a hard problem over one specific parameter set, CSIDH-512. To solve this issue, a tight reduction is necessary. The plan for the rest of the talk is the following. I will start by recalling what a lossy identification protocol is, then I will describe our lossy identification protocol, and I will explain why a tight reduction can be derived from a lossy identification protocol.
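The loss bound just described can be sanity-checked with a few lines of Python. This is a back-of-the-envelope sketch of the generic (non-tight) Fiat-Shamir bound quoted in the talk, not part of any scheme:

```python
from math import log2

# Generic (non-tight) Fiat-Shamir loss: a signature built from a
# lambda-bit hard problem can claim at most (lambda - log2(Q_H)) / 2
# bits of security, where Q_H is the number of random-oracle queries
# granted to the adversary.
def fs_security_bits(lam: int, q_hash: int) -> float:
    return (lam - log2(q_hash)) / 2

# 128-bit classical hardness, 2^40 random-oracle queries -> 44 bits.
print(fs_security_bits(128, 2**40))
```

This reproduces the 44-bit figure mentioned above for a 128-bit problem and 2^40 queries.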
Finally, I will conclude by discussing the concrete security and efficiency of Lossy CSI-FiSh. A lossy identification protocol is, first of all, an identification protocol. Let R be a polynomially computable binary relation on the Cartesian product of two finite sets X and Y. Then an identification protocol ID for R is an interactive protocol between a prover and a verifier, composed of four probabilistic polynomial-time algorithms: IGen, to generate statement-witness pairs; P1 and P2, run by the prover; and V, run by the verifier. Informally, the goal is for the prover to prove to the verifier that, given a statement x in X, they possess a valid witness w, without revealing anything more than the fact that they know w. Here we say that w is a valid witness if the pair (x, w) belongs to R. The protocol is three-move, so we have a prover and a verifier. The prover produces a commitment by running P1. The verifier uniformly samples a challenge. The prover runs P2, obtaining a response. The verifier then runs the algorithm V, which is deterministic, and the output is either accept (one) or reject (zero). The properties usually required of an identification protocol are correctness, honest-verifier zero-knowledge and high min-entropy. We also use perfect unique response, which is not a standard property but is useful in our context, and our protocol enjoys it. It states that, with overwhelming probability over the statement-witness pair output by IGen, for any commitment and challenge there exists a unique response such that the commitment, the challenge and the response form a valid transcript for the statement output by IGen. Finally, we have 2-special soundness, which informally states that a cheating prover can cheat on at most one challenge. Now, the question is: what makes an identification protocol a lossy protocol?
Firstly, the protocol has an additional algorithm, called lossy IGen, which produces lossy statements: they are in the set X but not necessarily in the language of R. These lossy statements must be indistinguishable from statements in the language. Then, in terms of properties, instead of 2-special soundness we require statistical lossy soundness, which is formalized by means of an interactive game between an adversary A and a challenger. The adversary is given a lossy statement, chooses a commitment, receives a uniform challenge from the challenger, and has to output a response that, together with the commitment and the challenge, forms a valid transcript for the lossy statement. We want the probability epsilon_LS of A winning the game to be negligible. As we already said, lossy IGen must produce lossy statements, and the advantage of an adversary in distinguishing a valid statement from a lossy one must be negligible. This completes the definition of a lossy identification protocol. Now let's jump to our CSIDH-based lossy identification protocol. Let's start by fixing the notation, using the bare minimum amount of maths. We start from a finite abelian group G and a finite set X. We assume that the group G acts freely and transitively on X; hence there is a map ⋆ which satisfies the first two properties on the right. By the freeness and transitivity of the action, if we fix an element of G, the map from X to X it induces is a bijection. On top of that, for cryptographic purposes, we want the action to be efficiently computable but, on the other hand, given g ⋆ x it must be hard to recover g. This problem is called the Group Action Inverse Problem (GAIP). Let me just note that in CSIDH the group G is the ideal class group of an order O, while X is the set of supersingular elliptic curves over a prime field F_p whose ring of F_p-rational endomorphisms is isomorphic to the order O.
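The group-action interface just described can be modeled with a deliberately trivial toy: the group Z_N acting on the set Z_N by addition. This is a sketch of the interface only; unlike the real class-group action on elliptic curves, recovering the acting element here is of course easy, so nothing about security carries over:

```python
# Toy stand-in for a free and transitive group action: Z_N acting on
# Z_N by addition.  N is a hypothetical small order, for illustration.
N = 419

def act(a: int, x: int) -> int:
    """Apply group element a to set element x (g^a * x in the talk's notation)."""
    return (x + a) % N
```

The identity law, the compatibility law, and the fact that a fixed group element induces a bijection of the set can all be checked directly on this toy.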
Anyway, we can safely skip these details and just bear in mind that the elements of X are elliptic curves. Now we restrict to the case where the structure of G is known and it is cyclic of order N, with a generator g. This is the case for the group G obtained from the CSIDH-512 parameters, hence the case of CSI-FiSh. Now that we have fixed the notation, we can state the hard problem on which we rely. We named it the decisional CSIDH problem, D-CSIDH in short. The problem consists in distinguishing between the two following distributions. The first one has two uniformly random curves E and H, and then g^a ⋆ E and g^a ⋆ H, where a is uniformly random in Z_N and so g^a is uniformly random in G. The second distribution is composed of four uniformly random curves. In order to present our lossy identification scheme, I'll start with that of CSI-FiSh. There, the public parameters correspond to a prime p, a generator g of the group G, the order N of G and a fixed elliptic curve E0 in X. The binary relation R_CSI-FiSh is composed of pairs (E, a) where E is an elliptic curve in X and a is such that g^a ⋆ E0 is equal to E. Hence we have a prover and a verifier, and the prover has a pair (E, a), where a is the secret witness while E is the public statement. The interaction between the prover and the verifier goes as follows. The prover uniformly samples r in Z_N and computes the commitment g^r ⋆ E0. The verifier uniformly samples a challenge bit. The prover responds with r if the challenge is 0 (the blue path) and with a minus r if the challenge is 1 (the green path). The algorithm V checks, if the challenge is 0, that g to the response ⋆ E0 is equal to the commitment; otherwise, that g to the response ⋆ the commitment is equal to E, the public statement. Now, the key point is that this identification scheme does not admit lossy keys, since each statement in X has a corresponding witness.
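One round of this basic identification protocol can be sketched with the toy additive action standing in for the real class-group action. This is illustrative only and has none of the actual security:

```python
import secrets

# Minimal sketch of one round of the CSI-FiSh identification protocol,
# with the toy additive action of Z_N on Z_N replacing the class-group
# action (so this is for illustration, not security).
N = 419
E0 = 0

def act(a, x):
    return (x + a) % N

def keygen():
    a = secrets.randbelow(N)          # secret witness
    return act(a, E0), a              # public statement E = g^a * E0

def commit():
    r = secrets.randbelow(N)
    return act(r, E0), r              # commitment g^r * E0

def respond(a, r, c):
    return r if c == 0 else (a - r) % N

def verify(E, com, c, resp):
    if c == 0:                        # blue path: g^resp * E0 == com
        return act(resp, E0) == com
    return act(resp, com) == E        # green path: g^resp * com == E
```

Running both challenge branches against an honestly generated key shows correctness of the round.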
This is due to the fact that the action ⋆ is free and transitive. So, in order to obtain a lossy scheme, we have to change something, and our modification is pretty clean. Instead of considering one starting curve E0 as part of the public parameters, we split it into two starting curves E01 and E02, which are now part of the statement. Then the same element a of Z_N is used to compute two other curves, E11 and E12. These four curves form the statement; a is the witness. So, in the picture, we split E0 into two curves: the action of the random element g^b sends E0 to E01, and the action of the random element g^c sends E0 to E02. Now it's as if the previous graph were mirrored. Consequently, the commitment is now composed of two parts, COM1 and COM2. The challenge and the response remain the same, but now the workload of the verifier is doubled as well: if the challenge is 0, it has to verify that COM_i is equal to g to the response ⋆ E0i for i equal to 1 and 2; if the challenge is 1, it has to verify that E1i is equal to g to the response ⋆ COM_i for i equal to 1 and 2 as well. It's easy to prove that the described scheme satisfies correctness, honest-verifier zero-knowledge, high min-entropy and perfect unique response. Furthermore, it has statistical lossy soundness. More precisely, the advantage epsilon_LS of an adversary A in the lossy impersonation game is equal to one half plus one over two times N. I will come back to statistical lossy soundness in the following slides, but before that it's important to observe that the protocol has indistinguishability of lossy statements. Indeed, a real statement is composed of E01 and E02, which by construction are uniformly random, and then g^a ⋆ E01 and g^a ⋆ E02. On the other hand, a lossy statement is just a tuple of four uniformly random elliptic curves, and these two tuples coincide with those of the two distributions in the D-CSIDH problem, which we assumed to be hard. So this completes the description of our lossy identification scheme.
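The doubled statement and doubled verifier workload can be sketched the same way. Again this uses the toy additive action, purely to make the data flow concrete:

```python
import secrets

# Sketch of the lossy variant: the statement holds two starting curves
# E01, E02 and their images E11, E12 under the *same* secret a; the
# commitment, and the verifier's checks, come in pairs.  Toy action only.
N = 419

def act(a, x):
    return (x + a) % N

def igen():
    E01, E02 = secrets.randbelow(N), secrets.randbelow(N)
    a = secrets.randbelow(N)
    return (E01, E02, act(a, E01), act(a, E02)), a

def lossy_igen():
    # four independent uniform "curves": no single witness links them
    return tuple(secrets.randbelow(N) for _ in range(4))

def commit(stmt):
    E01, E02, _, _ = stmt
    r = secrets.randbelow(N)
    return (act(r, E01), act(r, E02)), r

def respond(a, r, c):
    return r if c == 0 else (a - r) % N

def verify(stmt, com, c, resp):
    E01, E02, E11, E12 = stmt
    if c == 0:
        return com == (act(resp, E01), act(resp, E02))
    return (E11, E12) == (act(resp, com[0]), act(resp, com[1]))
```

Honest statements verify on both challenge branches, while a lossy statement is just four unrelated set elements.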
We still have to address why it is so important to have a lossy identification protocol to obtain a Fiat-Shamir signature scheme with tight security. We recall that the Fiat-Shamir transform turns an identification protocol into a non-interactive one and, in turn, into a digital signature. The trick is pretty simple: instead of obtaining the challenge from the verifier, the prover computes it as the digest of a hash function H on input the commitment and the message. The hash function is modeled as a random oracle in the security proof of the digital signature, which satisfies existential unforgeability thanks to the properties satisfied by the underlying identification protocol. Under some hypotheses, the security proof can also be given in the quantum random oracle model. The reason why lossy identification protocols are important for having a tight security proof is the following theorem by Kiltz, Lyubashevsky and Schaffner. Let ID be a lossy identification protocol satisfying all the properties we have introduced so far: correctness, honest-verifier zero-knowledge, alpha bits of min-entropy, perfect unique response and epsilon_LS statistical lossy soundness. Then the advantage of an adversary A against the strong unforgeability game is bounded by the advantage of an adversary B in distinguishing lossy statements, plus epsilon_LS times 1 plus the number of queries to the random oracle made by A, plus 2^(-alpha+1), where alpha is the min-entropy, plus the advantage of an adversary D against the pseudorandom function used to derandomize the Fiat-Shamir signature. Note that the only difference between the case where H is modeled as a classical random oracle and the case where H is modeled as a quantum random oracle is in the factor containing Q_H: in the classical setting this term is linear, while in the quantum setting it is quadratic and multiplied by a factor of 8. Now, what's the moral of this result?
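The Fiat-Shamir trick itself fits in a few lines: derive the challenge by hashing the commitment together with the message, and let the verifier recompute it. A sketch on top of the toy one-bit protocol (illustrative only, using SHA3-256 as an arbitrary concrete hash):

```python
import hashlib
import secrets

# Fiat-Shamir transform of the basic one-bit protocol: the challenge is
# no longer sent by a verifier but computed as a hash of the commitment
# and the message.  Toy additive action, single round, no real security.
N = 419
E0 = 0

def act(a, x):
    return (x + a) % N

def H(com, msg):
    digest = hashlib.sha3_256(f"{com}|{msg}".encode()).digest()
    return digest[0] & 1              # a one-bit challenge

def sign(a, msg):
    r = secrets.randbelow(N)
    com = act(r, E0)
    c = H(com, msg)
    resp = r if c == 0 else (a - r) % N
    return com, resp

def verify(E, msg, sig):
    com, resp = sig
    c = H(com, msg)                   # verifier recomputes the challenge
    return act(resp, E0) == com if c == 0 else act(resp, com) == E
```

In the real scheme the challenge space is much larger and many rounds are run, but the shape of the transform is exactly this.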
Well, if epsilon_LS, 2^(-alpha+1) and the advantage against the PRF are small enough, the security of the signature scheme tightly adheres to that of the hard problem on which the indistinguishability of lossy statements relies; in our case, the decisional CSIDH problem. At this stage, a natural observation would be: well, the epsilon_LS of your scheme is not really small, since it is equal to one half plus one over two times N. This is true, but it is typical to make the lossy soundness epsilon_LS negligibly small by standard parallel repetitions of the identification protocol. Specifically, on input a statement-witness pair, the prover runs t parallel executions of the protocol. t parallel rounds make epsilon_LS equal to one over 2^t plus one over N. However, standard parallel repetitions may be problematic for the efficiency of the digital signature scheme. So we have another option to make epsilon_LS small, which is to decrease the term one half. Indeed, that one half derives from the cardinality of the challenge space, which contains only 0 and 1. We adapted the tricks introduced in SeaSign and CSI-FiSh to enlarge the challenge space of our lossy identification protocol. The result is a new binary relation and a new hard problem, with a reduction from decisional CSIDH. In our original relation, the statement contains a couple of starting curves E01 and E02 and then a couple of arriving elliptic curves E11 and E12, obtained with the same witness a. In the variant that enlarges the challenge space, a statement still contains a couple of starting curves E01 and E02, but it is then composed of S pairs of arriving elliptic curves, from E11 and E12 up to ES1 and ES2. The witness is now composed of S elements of Z_N: a1, a2 up to aS. Under this modification, the lossy soundness epsilon_LS is given by the following expression, where the 2S+1 in the denominator of the first term is the cardinality of the new challenge space, considering also the use of quadratic twists as suggested in the CSI-FiSh paper.
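The two knobs for shrinking the lossy soundness can be written down as simple bookkeeping, using only the quantities stated above (the exact expression for the enlarged-challenge-space soundness is on the slide, so only the challenge-space size is coded here):

```python
# Two ways of shrinking the lossy soundness, as stated in the talk:
# t parallel repetitions give 1/2^t + 1/N, while enlarging the per-round
# challenge space to 2S+1 elements (quadratic twists included) attacks
# the 1/2 term directly.
def eps_ls_parallel(t: int, N: int) -> float:
    return 1 / 2**t + 1 / N

def challenge_space_size(S: int) -> int:
    return 2 * S + 1
```

For a large group order N, the parallel-repetition soundness is dominated by the 1/2^t term, which is why t repetitions (or a bigger challenge space) are needed.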
This ends the focus on the importance of having a lossy identification protocol. To conclude, we put everything together to discuss the security and efficiency of the Lossy CSI-FiSh digital signature scheme. We use the result by Kiltz, Lyubashevsky and Schaffner to estimate the security of Lossy CSI-FiSh, that is, the digital signature obtained by applying the Fiat-Shamir transform to our lossy identification protocol with enlarged challenge space. For the classical security, the following inequality holds. We are interested in the bits of security of our scheme: we have gamma bits of security if there does not exist an adversary that breaks the scheme with success ratio bigger than 2^(-gamma), where the success ratio is the quotient between the adversary's success probability and its running time. The enlargement of the challenge space changes the hardness assumption for the lossy indistinguishability; in particular, the first term on the right-hand side becomes S times the advantage in solving the D-CSIDH problem. So we obtain this. Since the best known algorithm for solving decisional CSIDH is the one solving GAIP, assuming a running time for the adversary B equal to 2^128, its advantage is one. Since the statement of the theorem holds for the running times of the adversaries A, B, D being equal, we divide each term by 2^128. For the second term on the right-hand side, we grant the adversary A at most 2^128 queries to the random oracle. And, as done for CSI-FiSh, we consider a hash function which is a factor 2^u slower than a standard hash function such as, for example, SHA-3. By plugging in the value of epsilon_LS and doing some approximations, that's what we obtain. Now the idea is to consider distinct possible values for S and u and, for each of them, determine the value of t giving the biggest security level. This table contains the results, together with a comparison with CSI-FiSh in terms of public key sizes.
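The success-ratio accounting can be made explicit with a small hypothetical helper (just bookkeeping on the definitions above, not part of the scheme):

```python
from math import log2

# gamma bits of security means: no adversary has success ratio
# (success probability / running time) above 2^-gamma.
def bits_of_security(success_prob: float, running_time: float) -> float:
    return -log2(success_prob / running_time)
```

For instance, advantage 1 at running time 2^128 gives a ratio of 2^-128, i.e. 128 bits, and multiplying the advantage by a factor S = 2^w costs exactly w bits.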
We note that S is always a power of 2, say 2^w; hence, by biggest security level we mean 128 minus w minus 1 bits. This means that the bigger the value of S, the smaller the security level obtained; but we note that for S equal to 1, for example, we only lose one bit of the security provided by the GAIP problem over the CSIDH-512 parameters. For the quantum security, the computations are pretty much the same. The best known quantum algorithm for the GAIP problem is Kuperberg's algorithm for the hidden shift problem, which has subexponential complexity. However, the concrete security estimates are still an active area of research, so we considered 56 bits of quantum security as a conservative choice and 64 bits as a more optimistic choice. We bounded the number of queries to the random oracle accordingly, and the results are contained in this table. Finally, for the computational costs, we observe that they are dominated by the computation of class group actions. For key generation we need 2S+2 of them, while for signing and verifying we need 2t of them. The comparison with the costs of CSI-FiSh shows that we can safely conclude that our scheme is at most twice as slow as CSI-FiSh. For concrete estimates of the running time, we considered two tuples of values for S, t and u: the first offers a small signature size, the second a small sum of signature and public key sizes. The numbers reported for these tuples suggest that, with CSI-FiSh and Lossy CSI-FiSh, isogeny-based signatures have entered the realm of practicality, but they still need to gain efficiency. That's all from me, thanks for your attention.