The last talk of this session is "Efficient Ring Signatures in the Standard Model" by Giulio Malavolta and Dominique Schröder, and Giulio will give the talk.

Okay, so as you might have noticed from the title, this talk is about ring signatures, so let me quickly recall what a ring signature is. The primitive was introduced by Rivest, Shamir, and Tauman in 2001. The basic idea is that every user is identified by his own verification key, and given a set of verification keys, you can take any subset of them, which we call a ring. Any user who possesses a secret key belonging to this ring can sign an arbitrary message on behalf of the ring. The point is that you can sign messages on behalf of a certain group while staying anonymous within that group. The ring can be chosen on the fly, and there is no need for any prior setup: you can include keys whose secret keys you don't know, as long as you know one of the secret keys inside the ring. Ring signatures are actually used in real applications, which is sort of surprising for an advanced cryptographic primitive: in fact, Zcoin and Monero implement ring signatures at the very core of their architecture. So we believe this is an interesting problem to look at.

Let me quickly give you an overview of the security definitions for ring signatures. As for any signature scheme, you would of course expect some sort of unforgeability. Here I refer to the definitions given in the seminal paper by Bender, Katz, and Morselli at TCC 2006. They give a hierarchy of notions, and throughout this talk I always refer to the strongest of these notions, which is considered the de facto notion for ring signatures. The first notion is unforgeability, and it is again a game-based definition: the attacker is given a certain set of honestly generated keys and access to two oracles.
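To pin down the syntax just described, here is a toy Python sketch over a small Schnorr-style group. All parameters and names are mine, the group is far too small to be secure, and this toy publishes the signer's index inside the signature, so it illustrates only the interface and correctness, none of the anonymity a real ring signature must provide.

```python
import hashlib
import secrets

# Toy group: p = 2q + 1 with p, q prime; g = 4 generates the order-q subgroup.
# Tiny illustrative parameters only -- never use a group this small.
p, q, g = 10007, 5003, 4

def keygen():
    """Return a (verification key, secret key) pair."""
    sk = secrets.randbelow(q - 1) + 1
    return pow(g, sk, p), sk

def _challenge(R, msg, ring):
    # Fiat-Shamir style challenge binding the commitment, message, and ring.
    data = "|".join(map(str, [R, msg, *ring])).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(i, sk, msg, ring):
    # Plain Schnorr signature under ring[i]; the index i is published in the
    # signature, so this toy has NO anonymity. It only fixes the interface:
    # anyone holding one secret key of the ring can sign for the whole ring.
    k = secrets.randbelow(q - 1) + 1
    R = pow(g, k, p)
    e = _challenge(R, msg, ring)
    return (i, R, (k + e * sk) % q)

def verify(msg, ring, sig):
    i, R, s = sig
    e = _challenge(R, msg, ring)
    return pow(g, s, p) == (R * pow(ring[i], e, p)) % p
```

With this interface in mind, the two oracles of the unforgeability game are the following.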
A ring-signing oracle, where the adversary can specify the index of a player, a message, and a ring to sign on behalf of, and receives the corresponding signature; and a corruption oracle, where it can specify an index and receives the corresponding secret key. The adversary breaks the notion if it produces a valid tuple (R*, m*, σ*) such that the signature verifies and the ring is honest, where by honest I mean it is composed only of verification keys whose secret keys have not been queried to the corruption oracle. Of course, the pair (m*, R*) must not have been queried to the signing oracle, to make the forgery non-trivial. So not much going on here.

Perhaps the more interesting notion is anonymity. Here the adversary is again provided with an oracle that returns signatures on behalf of specific keys for specific messages. The interesting part is that at some point in the game, specifically when it issues its challenge query, the adversary is given all the randomness used in key generation. This models the fact that everything is disclosed to the eyes of the adversary. Even in this setting, the adversary should not be able to distinguish whether a certain signature was produced on behalf of user i0 or on behalf of user i1, as long as both belong to the ring. The only restriction is that the adversary cannot see the randomness used to produce the challenge signature itself.

Okay, so what's the state of the art of ring signatures? We have plenty of constructions under certain heuristics, which, as we saw before, include a trusted setup or a random oracle. Under these heuristics, really efficient schemes are known, and those are actually the ones used in practice.
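The unforgeability game just described can be captured by a schematic harness. Here `scheme` and `adversary` are assumed interfaces of my own, not any real library; this is bookkeeping only, not a security proof.

```python
# Schematic harness for the unforgeability game: the adversary gets n
# honestly generated verification keys plus two oracles, and wins iff its
# forgery verifies, uses a ring of uncorrupted keys from the game, and was
# never obtained from the signing oracle.

def unforgeability_experiment(scheme, adversary, n):
    keys = [scheme.keygen() for _ in range(n)]   # honestly generated (vk, sk)
    vks = [vk for vk, _ in keys]
    queried = set()      # (message, ring) pairs sent to the signing oracle
    corrupted = set()    # indices whose secret keys were revealed

    def sign_oracle(i, msg, ring):
        queried.add((msg, tuple(ring)))
        return scheme.sign(keys[i][1], msg, ring)

    def corrupt_oracle(i):
        corrupted.add(i)
        return keys[i][1]

    ring_star, msg_star, sig_star = adversary(vks, sign_oracle, corrupt_oracle)

    # Ring must consist only of honest, uncorrupted keys from the game.
    honest = all(vk in vks and vks.index(vk) not in corrupted
                 for vk in ring_star)
    # The forgery must be on a pair never queried to the signing oracle.
    fresh = (msg_star, tuple(ring_star)) not in queried
    return scheme.verify(ring_star, msg_star, sig_star) and honest and fresh
```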
Specifically, in the random oracle model there are plenty of schemes, they are very efficient, and we can get signatures of constant size. With a trusted setup, usually referred to as the common reference string model, we have somewhat efficient schemes: they are asymptotically efficient, and the best instantiation here, from Bose et al. two years ago, has signatures composed of 95 group elements. I should mention that we measure the efficiency of ring signatures by two main parameters, the signature size and the computational overhead of the signing and verification algorithms. In this talk I am going to focus on the first, which I consider the most important one.

The next question is: what if we don't want to rely on such heuristics, that is, what about ring signatures in the standard model? There the situation is slightly more disappointing, and we have essentially only two schemes which are known to be secure. The first is the one presented in the very paper of Bender and co-workers from 2006 that defines the notions. It is essentially a feasibility result, because it uses public-key encryption and generic zero-knowledge proofs. It is fine in terms of assumptions, everything is standard, but as soon as you try to implement it you have to go through a Karp reduction, because the zero-knowledge proofs are used generically, so it is really inefficient. It is meant as a feasibility result, not an actual instantiation. The work of Chow and co-workers, on the other hand, is in the standard model, so it has neither a CRS nor a random oracle, but it supports only rings of constant size and relies on a tailored assumption. So giving a construction which is efficient, and by efficient I mean usable in practice, and at the same time in the standard model is still an open problem. So what's our approach?
This is our main result, and this is the outline of our approach. We start from a primitive which we call signatures with re-randomizable keys, and I'm going to explain soon what this is, and we use it in combination with non-interactive zero-knowledge. This gives us almost immediately ring signatures in the common reference string model. If we then instantiate the non-interactive zero-knowledge for a certain class of statements, and with a CRS that has a nice property which I am going to detail later, we can actually upgrade the scheme to the standard model and obtain our main result. The instantiation of the signature with re-randomizable keys is from the paper of Hofheinz and Kiltz, the programmable hash function scheme, for those of you who are familiar with that paper, and we actually have to modify the construction a little, making it slightly less efficient. The instantiation of the zero-knowledge allows us to reduce by a lot the statement that we have to prove in zero knowledge, making the whole thing way more efficient, since that is the most expensive component of the construction.
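As a preview of the primitive just mentioned, here is a toy discrete-log sketch of key re-randomization. The additive form sk' = sk + ρ, vk' = vk · g^ρ and the tiny group parameters are my simplification; the actual Hofheinz-Kiltz-based instantiation in the paper is more involved.

```python
import secrets

# Toy group: p = 2q + 1 with p, q prime; g = 4 generates the order-q subgroup.
p, q, g = 10007, 5003, 4

def keygen():
    sk = secrets.randbelow(q)
    return pow(g, sk, p), sk          # (vk, sk) with vk = g^sk

def randomize_sk(sk, rho):
    # Re-randomize the secret key additively in the exponent group.
    return (sk + rho) % q

def randomize_vk(vk, rho):
    # Matching re-randomization of the verification key.
    return (vk * pow(g, rho, p)) % p
```

Feeding both algorithms the same ρ keeps the pair consistent (g^{sk'} = vk'), and since ρ is uniform, the re-randomized pair is distributed exactly like a freshly sampled one.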
Okay, so let's go on. As you can see, we have several building blocks, the signature with re-randomizable keys and the non-interactive zero-knowledge, and depending on how we instantiate those, we get different results. The first, as I mentioned before, is our main result: the slightly modified version of Hofheinz-Kiltz plus the non-interactive zero-knowledge proof that we introduce gives us a scheme whose signatures consist of 4n + 3 group elements plus one integer, where n is the size of the ring. This is secure under the q-SDH assumption and the linear knowledge of exponent assumption, which is sort of non-standard. If we want to go back to standard assumptions, we can instead instantiate the non-interactive zero-knowledge with another scheme with a nice CRS, which I'll explain later, and we obtain something secure under the standard linear assumption, but with only a basic notion of anonymity. If instead we are content with a common reference string, so we don't care about the standard model, then our construction immediately gives us constant-size ring signatures by just plugging in recent SNARKs from Eurocrypt 2016.

Okay, so I mentioned before this primitive, signatures with re-randomizable keys. As the name suggests, it is essentially a signature scheme equipped with two additional algorithms that allow us to re-randomize the secret key and the verification key. The point is that if we feed these two algorithms with the same randomness, we obtain a consistent pair of keys, and by consistent I mean that if I sign under SK′, I can verify under VK′, as long as the value of ρ is the same. By re-randomized I mean that a re-randomized pair of keys is statistically indistinguishable from a freshly sampled pair of keys. This property was first introduced in a work from PKC 2016 in the context of sanitizable signatures. As for the security notion, there's not
much going on: it is essentially the standard unforgeability notion, except for the small difference that in the signing oracle we are allowed to specify a randomness for the secret key, and the message is then signed on behalf of this re-randomized secret key. The winning condition is as you may expect: the message must not have been queried to the signing oracle, and the verification must go through.

So this is our conceptual framework. This is the CRS-model construction, just the first step, and then I'll show you how to upgrade it. The idea is the following. To generate a pair of secret key and verification key, we use the standard key generation algorithm of the signature scheme with re-randomizable keys. Then, whenever we want to sign on behalf of a ring, say using the i-th secret key, we feed the re-randomization algorithm with some randomness ρ and create a fresh pair (SK′, VK′). Now, if we know ρ and we know the ring, we can compute a proof that essentially says that this verification key VK′ is a valid re-randomization of one of the keys in the ring. This is of course a non-interactive zero-knowledge proof, so it leaks nothing about the witness, which in this case is the index i and the randomness ρ that we used. Then we just sign the message under SK′ to obtain σ′ as our sub-signature, and the final signature is composed of VK′, σ′, and the proof π. Verification is the obvious thing: we verify the proof, and we verify the signature σ′ against VK′.

Now, this looks pretty straightforward. The problem is that, as we saw in the talk before, non-interactive zero-knowledge seems to inherently require a common reference string. So how do we get rid of it? It seems like there is an inherent barrier in this design that
does not allow us to upgrade the scheme. But our observation is the following: we can embed a common reference string in the verification key of each user, and then, as long as the common reference strings are nice enough, we can combine them in a certain way. If we assume that a common reference string is a single group element, then we can just apply the group operation and obtain a combined common reference string which is specific to the ring. So, to reiterate: every verification key is now composed of the old verification key and a common reference string, and to define the common reference string for a certain ring, and here I use multiplicative notation but it is just notation, I simply multiply all the common reference strings together. What I meant before when I said the common reference string has to be nice is the following: it has to come from an efficiently recognizable set. This essentially says that a common reference string should not have any hidden structure, because otherwise the adversary could play with it. Remember that the adversary can potentially corrupt one of these keys, or even any subset of them, so if he were able to choose his common reference string adaptively, we might have a problem. But as long as we can efficiently recognize the set of valid common reference strings, we are fine, right?
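These two requirements, efficient recognizability and closure under the group operation, can be sketched concretely. The subgroup-of-Z_p* setting and all names here are my toy choices, not the pairing groups of the actual scheme.

```python
import secrets

# Toy group: p = 2q + 1 with p, q prime; g = 4 generates the order-q subgroup.
p, q, g = 10007, 5003, 4

def is_valid_crs(t):
    # "Efficiently recognizable": membership in the order-q subgroup of
    # Z_p^* (excluding the identity) is a public check, so a CRS can carry
    # no hidden structure an adversary could exploit.
    return 1 < t < p and pow(t, q, p) == 1

def sample_crs():
    # Each user samples his own CRS element when generating his key.
    return pow(g, secrets.randbelow(q - 1) + 1, p)

def ring_crs(crs_list):
    # "Closed under composition": the ring's CRS is just the product of the
    # per-user CRS elements, which stays inside the same subgroup.
    out = 1
    for t in crs_list:
        out = (out * t) % p
    return out
```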
So this is one property that we would like to have, and the other, of course, is that the set of common reference strings has to be closed under composition, because otherwise we cannot perform this multiplication. Okay, so the problem is that the way non-interactive zero-knowledge proofs normally work, you have a trapdoor that lets you switch between simulation and extraction mode, so you pretty much always have a hidden structure in the CRS. With existing schemes this approach does not really work, so our solution was again to design a new scheme, which is simple enough and supports only a specific class of languages, but that suffices for our purposes.

Now I'm going to give you a very simplified, sloppy description of our scheme. This version is just for proving knowledge of a discrete logarithm; it is not the one actually used in our construction, but it gives you a good idea of what we are doing. The common reference string is again just one group element, of which the discrete logarithm is unknown. That's all, and note already that this is a very nice common reference string: membership is easy to test, and any common reference strings we combine give a nice common reference string again, by the closure of the group under composition. To prove knowledge of an x such that g^x = h, we first re-randomize the common reference string and then raise the re-randomized group element to the x, and that is what our proof consists of. To verify the proof, we use the pairings and check that the corresponding equation holds. What's happening here is essentially that we are computing a re-randomized Diffie-Hellman tuple, and the intuition is that the only way to compute the Diffie-Hellman element from h and t is by knowing x, because t was sampled uniformly. Now, this simple scheme is not enough: what we would actually like to prove is that we know an x
such that g^x is equal to h1, or h2, and so on, up to hn. So the statement is a disjunction, and of course the previous scheme does not suffice for this purpose, so let's see how we can extend it. The basic idea is the same: for each element of the tuple we sample a random t_j, and we compute again a proof π_j, which is essentially a Diffie-Hellman tuple of t_j and h_j. The point is that for any position j which is not my index i, I can just compute a per-clause common reference string of which I know the discrete logarithm, so I know t_j, except for position i, for which I need to enforce the condition that t_i has something to do with the original common reference string t. What's happening here is that I am essentially allowing the prover to cheat in n − 1 positions, without the verifier knowing which positions, by enforcing the following condition: all the t_j must multiply together to t. This essentially says that at least one of these t_j must contain some information about t, which means that in order to compute all of these proofs there must exist at least one position where I really know the discrete logarithm of h_i, and this is the equation on the slide. For further details, please read the paper. Again, the common reference string is exactly the same as before: it comes from an efficiently recognizable set, because it is a group element and testing membership of a group element in a prime-order group is trivial, and it is closed under composition, because it is part of a group, so by definition it is closed under the group operation. So this just works right away.

Combining this scheme with the variant of HK08, as I suggested before, gives us our new ring signature scheme in the standard model. To reiterate, our signatures consist of 4n + 3 group elements plus one integer. The security of the scheme is proven under the q-strong Diffie-Hellman assumption, which we inherit from HK08, and the linear knowledge of exponent assumption, which we inherit from our NIZK. Of course, the linear knowledge of exponent
assumption is highly non-standard, but we show that it at least holds against generic attacks; again, I refer to the paper for the formal treatment. A still open problem is how to instantiate efficient, fully secure ring signature schemes in the standard model which are proven secure under falsifiable assumptions; that we don't know.

[Question: do you need a trusted setup to generate the CRS?] No, no. The idea is that the CRS is a single group element, so I can just embed it in each verification key: whenever a user samples his verification key, he also samples a random group element, which is included in his verification key. So now the verification key is the old verification key plus a random group element, which is the user's CRS, and whenever I want to prove a statement, that is, whenever I want to produce a signature on behalf of a ring, I define my common reference string as the product of the CRSs of that ring. That's essentially the main idea. So, any other questions?
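As an aside, the splitting condition from the disjunction proof, all t_j multiplying to the ring CRS t, can be sketched as follows. The names and the exact normalization are my reading of the talk, and the toy group is mine; see the paper for the actual scheme.

```python
import secrets

# Toy group: p = 2q + 1 with p, q prime; g = 4 generates the order-q subgroup.
p, q, g = 10007, 5003, 4

def split_ring_crs(t, n, i):
    """Split the ring CRS t into per-clause elements t_0 .. t_{n-1} whose
    product is t: for every j != i the prover knows the discrete log a_j of
    t_j (so he can "cheat" in those clauses), while the constraint forces
    t_i, the one clause where the real witness is needed, to absorb t."""
    known = {}
    ts = [None] * n
    acc = 1
    for j in range(n):
        if j == i:
            continue
        known[j] = secrets.randbelow(q - 1) + 1
        ts[j] = pow(g, known[j], p)
        acc = (acc * ts[j]) % p
    # t_i = t / prod_{j != i} t_j  (inverse mod the prime p)
    ts[i] = (t * pow(acc, p - 2, p)) % p
    return ts, known
```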
[Question: when you compute this product, what about the last user? If he sees all the previous CRSs, can he cheat by picking his CRS dependent on the previous ones?] Okay, that's a good point. The short answer is no, but you would like to know why, I guess. The point is that we don't have any specific requirement on the CRS except that it is a group element, so even if you pick your CRS adaptively with respect to the other CRSs of the ring, the composition will still be a valid CRS: there is no way to get outside the set of valid common reference strings, and it is easy to check that a CRS is correctly formed. Actually, that is exactly the problem why we cannot prove full anonymity when we use the standard-assumption NIZK: there the adversary could pick his common reference string adaptively with respect to the other elements of the ring, and then anonymity breaks. So this essentially says that each user is anonymous only with respect to a ring which is composed exclusively of honest keys, and there, of course, since the ring is composed only of honest keys, we also have honest CRSs.

[Question, apparently about signatures of sublinear size] You could assume everyone's element is chosen at random, multiply the valid CRSs together, and so forth, and you get square-root size, so sublinear, square-root complexity even for this basic case. We didn't really look into this extension, it was more like a corollary of our theorem, but possibly, yes, you could get something. Any other questions? Okay, thank you. [applause]