Okay, so the next part of our session is about tightly-secure signatures from five-move identification protocols, a paper by Eike Kiltz, Julian Loss and Jiaxin Pan, and Julian will give the presentation. Okay, thank you for the introduction. So, consider the following setting. You have a verifier, V, that holds a public key, and a prover, P. The prover P wants to convince the verifier that it knows the secret key, SK, corresponding to the verifier's public key. And in order to do so, the two parties can run a three-move identification protocol, which I've sketched on this slide here. So, in the first round of this protocol, the prover sends a commitment R to the verifier, and the verifier then responds with a challenge h. Finally, in the third round, the prover computes from all of these values an answer s and sends it to the verifier, and the verifier then decides whether it wants to accept this proof or reject it. So, such three-move protocols are very well understood and have been considered many times in the literature, and this is mainly due to a very important transform called the Fiat-Shamir transform. The Fiat-Shamir transform takes any identification scheme with three rounds and generically transforms it into a signature scheme which is secure in the random oracle model. And I've listed some notable examples here. Probably the most well-known signature scheme that can be derived in this manner is the Schnorr signature scheme, but we also have the Katz-Wang signature scheme, Guillou-Quisquater and Okamoto, and there are other signature schemes as well. So, before I continue, I would like to give some motivation for what I'm going to present in the rest of my talk. The motivation for our work is mainly tight security. So, what is tight security? In order to explain this, let's look at how we would prove security for a cryptographic scheme in general.
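The three moves can be sketched with a toy Schnorr instantiation (my own illustrative code, not from the talk; the parameters are far too small to be secure):

```python
import secrets

# Toy parameters (insecure, illustration only): p = 2q + 1 with q prime,
# and g generates the order-q subgroup of Z_p^*.
p, q, g = 2039, 1019, 4

# Keys: secret key x, public key X = g^x mod p held by the verifier.
x = secrets.randbelow(q - 1) + 1
X = pow(g, x, p)

# Move 1 (prover -> verifier): commitment R = g^r.
r = secrets.randbelow(q)
R = pow(g, r, p)

# Move 2 (verifier -> prover): random challenge h.
h = secrets.randbelow(q)

# Move 3 (prover -> verifier): answer s = h*x + r mod q.
s = (h * x + r) % q

# Verifier's decision: accept iff g^s == R * X^h (mod p).
accepted = pow(g, s, p) == (R * pow(X, h, p)) % p
print(accepted)  # True for an honest prover
```

The check works because g^s = g^(h*x + r) = (g^x)^h * g^r = X^h * R in the group.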
So, suppose we have some cryptographic scheme on the left-hand side, and we have some algorithm A that breaks the security of this cryptographic scheme. Okay, and let's say that this algorithm A has a success probability epsilon and runs in time t. The way that we prove security is that we come up with a reduction that turns this algorithm A into some algorithm B that solves some underlying hard problem, for example DLOG, LWE, or factoring. And the success probability of this algorithm B is epsilon prime and its running time is t prime. Okay, so we call the reduction tight if, roughly, epsilon prime is equal to epsilon and t prime is equal to t. So, what does this mean? The reduction preserves the efficiency of the algorithm A when converting it into the algorithm B. So, why do we care about tight versus non-tight reductions? Well, the reason is that as soon as we instantiate our cryptographic scheme with concrete parameters, a non-tight reduction will always lead to larger security parameters, and this is undesirable. So, whenever we can, we want to prove a tight reduction. The downside, however, is of course that it's much harder to prove a tight reduction than a non-tight one. So, this is the motivation, and many works have considered tightness over the past decade. The problem now is that all of the schemes that I showed you on the previous slide, which can be derived from three-move identification protocols, lose a factor of Q when generically transformed, where Q is the number of queries to the random oracle. Okay, so the random oracle is just an abstraction: any hash function is modeled as a random oracle in this model, and this is an idealized oracle which returns random answers when we ask it a query.
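In symbols (a standard convention for the work factor; my own summary, not verbatim from the slides): a reduction turning an adversary $A$ with success probability $\varepsilon$ and running time $t$ into a solver $B$ with success probability $\varepsilon'$ and running time $t'$ has security loss

```latex
\[
  L \;=\; \frac{\varepsilon / t}{\varepsilon' / t'} ,
\]
% The reduction is tight if L = O(1), i.e.
\[
  \varepsilon' \approx \varepsilon
  \quad\text{and}\quad
  t' \approx t .
\]
% Forking-lemma proofs for Fiat-Shamir signatures instead give roughly
\[
  \varepsilon' \approx \varepsilon / Q ,
\]
% where Q is the number of random-oracle queries, so concrete parameters
% must be enlarged to absorb the factor Q.
```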
The reason for this loss is that the proof of security for the Fiat-Shamir transform uses the so-called forking lemma, which basically means that we have to rewind the adversary. And the question is, can we prevent this loss somehow? Again, this has been considered many times in the literature, and so far many of the existing works take the approach of basing the signature schemes on stronger assumptions, mainly decisional assumptions. So, the Katz-Wang scheme that I showed you before is an example of a signature scheme which is tightly based on the DDH assumption, which is a decisional assumption, but there's also another example, which is the lossy identification scheme framework by Abdalla et al. from Eurocrypt 2012. And this lossy identification framework actually inherently requires decisional assumptions. Okay, so our new idea now is to obtain tightly secure signatures from search assumptions, not from decisional assumptions, by looking at five-move schemes rather than three-move schemes. So that's going to be the approach. Okay, so let's look at a generic five-move identification scheme. It's basically the same thing as in the three-move case, but here we additionally have two rounds: there's going to be an additional round of commitments and an additional round of challenges, which precede the three rounds that we originally had. Okay, and I've denoted here the first-round commitment as R1, the first-round challenge as h1, the second-round commitment as R2, and the second-round challenge as h2, and now the answer can of course be computed from all of these values. And the verification procedure can also depend on all of these values. Okay, so let's call a transcript a tuple of values that can occur in a run of this five-move identification scheme. So the properties that you probably know from three-move identification schemes now naturally translate to the five-move case.
Okay, first we have honest-verifier zero-knowledge. This means that basically the prover reveals nothing about its witness beyond the fact that it proves the statement, and the way that we formalize this is that we have to show the existence of an efficient simulator that can come up with a valid transcript given just the public key. Secondly, we have special soundness, here abbreviated as SS, and this means that we can recover a secret key if we are given two valid transcripts of the following form: the transcripts should have the first-round commitment in common, but from then on they should diverge. And if we are given two such valid transcripts of this form, then we can recover a secret key. Okay, so in this work we will give a modular framework for Fiat-Shamir-based identification schemes and signature schemes, and we will present three instantiations of our transform from different search assumptions. And most importantly, all of these instantiations have tight security reductions to search assumptions, which is a new thing. Okay, so here you see our basic Fiat-Shamir transform; it's very straightforward. To sign a message M, the signer will pick R1, its first-round commitment, from the commitment set, and now, using a hash function H1 which is modeled as a random oracle, it will compute the value h1. Then it samples an R2, and then it computes h2 in the same fashion, and finally, using its secret key, it will compute the answer s and output sigma as (R1, R2, s). Of course, verification is straightforward: given these three values, the verifier just checks what it would check in a run of this protocol. And well, just as a minor remark, often it will be more efficient to output the signature as only (h2, s), and this works whenever we can recompute the first two values from h2 and s. For example, you can do this with the Schnorr signature scheme, but also with the Katz-Wang signature scheme; it's a very well-known trick.
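As a runnable sketch of this basic transform (my own toy code, not the paper's: it instantiates the transform with the interleaved-Schnorr structure described later in the talk, uses insecure toy parameters, and maps the first challenge into the group by exponentiating g, which a real scheme would not do):

```python
import hashlib
import secrets

# Toy group (insecure, illustration only): p = 2q + 1, g of order q.
p, q, g = 2039, 1019, 4

def H(tag, *vals):
    """Random-oracle stand-in: hash a tagged transcript into Z_q."""
    data = tag.encode() + b"|" + b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def sign(x, m):
    r = secrets.randbelow(q)
    U = pow(g, r, p)                      # first-round commitment R1
    # First challenge h1 = H1(R1, m), mapped into the group.
    # (Toy hash-to-group via g^e; a real scheme hashes directly into G.)
    h = pow(g, H("H1", U, m), p)
    V, Z = pow(h, r, p), pow(h, x, p)     # second-round commitment R2
    c = H("H2", U, h, V, Z, m)            # second challenge h2
    s = (c * x + r) % q                   # answer
    return (U, V, Z, s)                   # sigma = (R1, R2, s)

def verify(X, m, sig):
    U, V, Z, s = sig
    h = pow(g, H("H1", U, m), p)          # recompute both challenges
    c = H("H2", U, h, V, Z, m)
    # Two interleaved Schnorr checks, one per base.
    return pow(g, s, p) == (U * pow(X, c, p)) % p and \
           pow(h, s, p) == (V * pow(Z, c, p)) % p

x, X = keygen()
sig = sign(x, "hello")
print(verify(X, "hello", sig))  # True
```

Tampering with any component changes a recomputed challenge or breaks one of the two checks, which is what verification relies on.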
Okay, so here you have a security diagram which I'm going to walk you through, and it relates different security notions for identification schemes and the signature schemes that we can derive from them. So on the left-hand side is the notion PIMP-KOA, which stands for parallel impersonation under key-only attacks, and this is going to be a security notion for identification schemes. We formalize the security notion by a game: in the first round of this game, the challenger gives a public key to the adversary, and the adversary is now allowed to start arbitrarily many parallel runs of the five-move identification protocol with the challenger, and it is considered successful if it can complete even a single one of these runs. That's the game. So now our framework gives a transformation, which doesn't incur any security loss, from an identification scheme satisfying this security notion to a signature scheme, derived via the Fiat-Shamir transform, which satisfies the following security notion, called unforgeability under key-only attacks, the notion that you see in the middle. Okay, so this notion now is a notion for signature schemes, and the game is formalized as follows: again the adversary is given a public key, and now it may ask as many random oracle queries as it likes to the random oracles, and finally it is considered successful if it can produce a forgery on any message of its choice. Okay, and now for the last security notion: our framework presents a transformation which uses the honest-verifier zero-knowledge property of the five-move identification scheme to reach the rightmost notion of unforgeability under chosen-message attacks, which is the standard notion in the literature.
In this security game, the adversary is additionally given a signing oracle, which it can use to get signatures on arbitrary messages of its choice, and here the adversary is considered successful if it can produce a forgery on any message that it has not queried, because otherwise it would be trivial: it could just output a signature obtained from the signing oracle. Okay, so this is how these three security notions are related, and in fact all of these modular steps also apply to the three-move case, as was previously shown by Kiltz, Masny and Pan at Crypto 2016. Okay, and if you've paid attention, then you will realize that I have yet to explain to you where, in the three-move case, we actually incur this loss in the security reductions, because on this slide everything is tight, so there is no security loss here. So where does it come from? To see this, we have to move one step further to the left and consider a further security notion which is even weaker. This security notion is going to be the non-parallel version of PIMP-KOA. Okay, so what does it mean, non-parallel? It means that now the adversary is given a public key and it is allowed to start only a single run of the protocol with the challenger, and it must complete this one run. So it's a weaker security notion than the parallel version. And Kiltz, Masny and Pan showed that in this case there is an unavoidable loss between these two notions. And even though we did not prove it in our work, we strongly believe that this also holds for the five-move case. Okay, now the problem is that for many three-move identification schemes it's actually not possible to prove PIMP-KOA security directly in a tight fashion, so we have to take this detour, which incurs a loss of Q. This is the case for many three-move identification schemes, in particular, for example, for Schnorr: it has been shown that it is not possible to give a tight reduction to PIMP-KOA security.
And this means that Schnorr, for example, cannot be proven tightly secure this way. So how does the five-move structure help here? The five-move structure helps because it allows us more flexibility to embed a challenge in the structure of the protocol, and therefore it's easier to prove PIMP-KOA security tightly in a direct fashion. Okay, and as an example, to give you some intuition for why this is true, on the next slide I'm going to present a simplified version of the Chevallier-Mames scheme from Crypto 2005, which is a signature scheme, expressed as a five-move identification scheme. The nice thing about this signature scheme is that it has a tight security reduction to the CDH assumption. Okay, so I sketch this five-move identification scheme here. Let G be a cyclic group of prime order with generator g, and let me just briefly recall the protocol. In the first round we pick some randomness r, we compute U as g to the r, and send U to the verifier. And now the interesting thing here is that the verifier picks the challenge h as a group element: it randomly samples some group element and sends this h back to the prover. Now the prover will compute V as h to the r and Z as h to the x and send these two values back to the verifier. The verifier now responds with some challenge c, and s is computed as c·x plus r and sent back, and the verifier does the two checks which I've sketched here. And if you look closely, then you will probably have noticed that this is just two parallel runs of Schnorr which are interleaved in this protocol. Okay, so why does this give us more than the standard Schnorr signature scheme? How can we come up with a solution for the CDH problem given this protocol?
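A single honest run of this five-move protocol can be sketched as follows (my own toy parameters, insecure and for illustration only):

```python
import secrets

# Toy group (illustration only): p = 2q + 1, g generates the order-q subgroup.
p, q, g = 2039, 1019, 4

# Keys: secret x, public X = g^x.
x = secrets.randbelow(q - 1) + 1
X = pow(g, x, p)

# Move 1 (prover): pick randomness r, send U = g^r.
r = secrets.randbelow(q)
U = pow(g, r, p)

# Move 2 (verifier): sample a random group element h = g^a and send it.
a = secrets.randbelow(q - 1) + 1
h = pow(g, a, p)

# Move 3 (prover): send V = h^r and Z = h^x.
V, Z = pow(h, r, p), pow(h, x, p)

# Move 4 (verifier): send a random challenge c.
c = secrets.randbelow(q)

# Move 5 (prover): send the answer s = c*x + r mod q.
s = (c * x + r) % q

# Verifier's two checks: two interleaved Schnorr verifications,
# one with base g and one with base h.
check1 = pow(g, s, p) == (U * pow(X, c, p)) % p
check2 = pow(h, s, p) == (V * pow(Z, c, p)) % p
print(check1 and check2)  # True
```

The same answer s satisfies both Schnorr equations because U, V share the randomness r and X, Z share the secret x.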
So here's the proof idea. Say we're given some CDH challenge, g to the x and g to the y, and we are asked for g to the xy. How do we do it? Well, we will take g to the x and embed it in the public key. And now notice that in the second step the verifier samples a random group element and sends this group element back to the prover, and the idea is now to take g to the y and hide it inside of this random group element. Okay, we can do this by just taking g to the y and raising it to some random exponent a. And now, in the third step, which I've marked in red and which is the crucial step of the protocol, Z is computed as h to the x, and h to the x of course multiplies x and y in the exponent, and from this the reduction can recover the answer to the CDH challenge. And so what I wanted to show here is some intuition for what we gain by having these two additional rounds, because in the three-move case there is no way for the verifier to embed a CDH challenge in this random group element h. Okay, so for the remainder of my talk I want to talk a bit about efficiency. Basically, what I'm going to present here is an online/offline version of our transformation. To motivate this, note that in the signing step of the Fiat-Shamir transform that I showed you, we compute h1 by calling the random oracle on R1 and the message M. And this prevents pre-computation of this value, because we can only compute it once we have seen the message. So, can we do better?
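The embedding step can be checked concretely (my own sketch; the honest prover below stands in for the successful impersonator in the actual reduction, and the toy parameters are insecure):

```python
import secrets

# Toy group (illustration only): p = 2q + 1, g generates the order-q subgroup.
p, q, g = 2039, 1019, 4

# CDH challenge: given g^x and g^y, compute g^(x*y).
x = secrets.randbelow(q - 1) + 1
y = secrets.randbelow(q - 1) + 1
gx, gy = pow(g, x, p), pow(g, y, p)

# The reduction embeds g^x in the public key and, playing the verifier,
# hides g^y inside its "random" second-round group element h = (g^y)^a.
a = secrets.randbelow(q - 1) + 1
h = pow(gy, a, p)

# A successful prover answers with Z = h^x in round 3; an honest prover
# who knows x is used here as a stand-in.
Z = pow(h, x, p)           # Z = g^(y*a*x)

# The reduction strips the blinding exponent a to recover g^(x*y).
a_inv = pow(a, -1, q)      # inverse mod the group order (Python 3.8+)
answer = pow(Z, a_inv, p)  # = g^(x*y)
print(answer == pow(g, (x * y) % q, p))  # True
```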
The answer is yes. Okay, so what we want to do is to be able to compute this value without knowing the message M. Very straightforward: we just don't use the message when evaluating the hash function, and this gives us a second transformation, which we call OF[ID]. And of course this is not for free: if we go back to the security diagram, we don't lose tightness, but now, in the step from UF-KOA to UF-CMA, whereas before we only required that the protocol be HVZK, it now also has to be special sound. Okay, so that's the trade-off that we get here. Okay, and using these two transformations, we get three instantiations from different hardness assumptions. The first one is going to be the CDH-based Chevallier-Mames scheme that I showed you, but we can also get an instantiation from the short-exponent version of CDH, and it's also going to have a tight security reduction; this is going to be a somewhat altered version of the GPS scheme from 2006, and it's actually the first version of this protocol which has a tight security reduction to a search problem, since previously this was only proven under a decisional assumption, in the lossy identification framework that I talked about before. The benefit of this scheme is mainly that it allows for a very, very efficient signing step. Okay, and using our transforms, we also obtain a tightly secure scheme from the factoring assumption. Okay, so here's a summary of what I was talking about. First of all, we get a modular framework of security definitions for signatures that we can derive by the Fiat-Shamir transform from identification schemes which have five moves. Then I presented two versions of this transform: the basic transform requires only HVZK but does not allow pre-computation, and the second version does allow pre-computation but additionally requires special soundness. And the way that we avoid the security loss is by using this five-move structure, as I showed you, to embed computational challenges in the protocol in a more clever way. Okay, so what
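The online/offline split can be sketched as follows (my own illustrative toy code, insecure parameters; the message is simply dropped from the first hash call, so everything except the last two moves can be precomputed):

```python
import hashlib
import secrets

# Toy group (insecure, illustration only): p = 2q + 1, g of order q.
p, q, g = 2039, 1019, 4

def H(tag, *vals):
    """Random-oracle stand-in: hash a tagged transcript into Z_q."""
    data = tag.encode() + b"|" + b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def offline(x):
    """Message-independent part: can be done ahead of time by the signer."""
    r = secrets.randbelow(q)
    U = pow(g, r, p)
    h = pow(g, H("H1", U), p)  # h1 no longer depends on the message
    V, Z = pow(h, r, p), pow(h, x, p)
    return (r, U, h, V, Z)

def online(x, m, pre):
    """Cheap message-dependent part: one hash and one mod-q computation."""
    r, U, h, V, Z = pre
    c = H("H2", U, h, V, Z, m)
    s = (c * x + r) % q
    return (U, V, Z, s)

def verify(X, m, sig):
    U, V, Z, s = sig
    h = pow(g, H("H1", U), p)
    c = H("H2", U, h, V, Z, m)
    return pow(g, s, p) == (U * pow(X, c, p)) % p and \
           pow(h, s, p) == (V * pow(Z, c, p)) % p

x, X = keygen()
pre = offline(x)            # precomputed before the message is known
sig = online(x, "msg", pre)
print(verify(X, "msg", sig))  # True
```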
are the open questions in this area? Well, maybe, if five rounds give us something, perhaps we can do more if we have even more rounds. I mean, that's a very straightforward question, but it would be interesting to look at this; maybe for seven rounds we can prove something more, who knows. And the second question is, of course, whether we can build five-move identification schemes in this manner based on lattice assumptions. Those are the two questions that I'm currently thinking about. Thank you very much for listening. We have plenty of time for questions. Can you tell me the reason why it's going to be difficult to create a lattice-based instantiation that works in this framework? Okay, so let's go back to this protocol here. The reason that it's not straightforward is that this structure that I show here really depends on CDH, right? So what you're doing here is basically proving, in the final step, that Z was computed in a certain way. This is very inherent, and it's not clear how you would translate this to some instantiation from lattices, because they don't give you the same structure, but that of course doesn't mean that it cannot be done. Okay, so I don't know that much about zero-knowledge in lattices, but I thought there's the Lyubashevsky scheme, which is basically a Schnorr type of zero-knowledge. Exactly, but it doesn't apply to us. I mean, as I said, it would be very interesting to see if it does, but so far I haven't figured it out. Any other questions or comments? Okay, if there are none, let's thank the speaker again.