Okay, the next talk is "Zero-Knowledge Arguments for Lattice-Based PRFs and Applications to E-Cash" by Libert, Ling, Nguyen and Wang, and Khoa Nguyen will give the talk.

Okay, thank you for the introduction. This is joint work with Benoît Libert, San Ling and Huaxiong Wang. Okay, so I'm going to talk about our construction of zero-knowledge protocols for PRFs from lattice assumptions, and how to use them to construct e-cash systems. Okay, so this is the outline of the talk. First, I will discuss the applications of zero-knowledge proofs for PRFs in privacy-preserving protocols, and discuss the problem in the context of lattices. Then I will state our results and discuss our techniques and application, before concluding the talk.

Okay, so pseudorandom functions, or PRFs, are deterministic functions that look like truly random ones. A PRF with key space K, domain D and range R is a function F_k mapping D to R, such that for a uniformly random key k and for any input x, the value F_k(x) should be computationally indistinguishable from a uniformly random element of the range. Okay, so the PRF is a fundamental notion and has applications everywhere in cryptography. In this talk, we focus only on the applications of PRFs in privacy-preserving protocols that emphasize anonymity.

Okay, so many such protocols require PRFs supporting zero-knowledge proofs that an output y is actually correctly computed for some hidden key k and possibly hidden input x, where k may be committed or certified beforehand. So why do we want that? Because we want to design systems where we can protect the anonymity of the users, but punish them if they abuse that anonymity. Such a system should allow a user to deterministically generate a random-looking value without revealing any information about k and x, which may be related to his identity. But if the user does it twice, then anonymity is no longer preserved, because the two sessions become linkable, and there may be a mechanism to detect this and reveal the identity of the user.
Okay, so examples of such systems include linkable ring signatures, anonymous surveys, various anonymous authentication schemes, and e-cash systems. In the case of e-cash in particular, every time the user spends a coin, he has to generate a PRF value under a certified key, which serves as the serial number. If the user tries to spend this coin again, the double-spending will be easily detected. Okay, so we observe that this kind of protocol has been realized under a number of number-theoretic assumptions and has led to various applications. But in lattice-based cryptography, which has been one of the most active topics in the last decade, the problem was still open: no prior work had considered the design of such a building block in the lattice setting.

Okay, so let me now give a short overview of zero-knowledge proofs for lattices. Existing zero-knowledge proofs for lattice relations belong to two main classes. The first one is Schnorr-like protocols, which were introduced by Lyubashevsky with the rejection sampling technique. This class of protocols is quite efficient, but it has a problem called soundness slack, as has been pointed out in recent works. This means that the knowledge extractor of the protocol can only output a vector with norm larger than the norm one expects from a valid witness. This may have some consequences: the protocol may have to rely on stronger assumptions than needed, and in some advanced protocols where we want the extractor to output a binary witness, protocols of this first class seem not to apply.

Okay, so the second class, Stern-like protocols, uses random permutations to prove the validity of witnesses. These protocols were originally proposed in the context of code-based cryptography, but were adapted to the lattice setting by Kawachi, Tanaka and Xagawa at Asiacrypt 2008.
And then they were extended by Ling, Nguyen, Stehlé and Wang with the decomposition-extension technique, so that they can handle relations related to the LWE and SIS problems. So protocols in this class are less efficient, because each execution of a protocol has a constant soundness error of two-thirds, but the class is more flexible, in the sense that we can easily combine many sub-protocols into one, and it has no soundness slack. Okay, so then we can build a lot of interesting proofs by combining Stern-like protocols.

Okay, so using this kind of proof, people have managed to prove statements about hash functions, commitments, encryption, signatures, accumulators and branching programs, based on the SIS and LWE problems or their ring versions. For example, we can prove knowledge of a preimage of a hash function, prove knowledge of a valid opening of a commitment, prove that a ciphertext is well-formed, prove knowledge of a valid message-signature pair for a standard-model signature scheme, and even prove that a value was correctly accumulated into a Merkle-tree accumulator. And, as for example Fabrice Mouhartem discussed in his talk on Monday, Stern-like protocols can even be used for proving the correct evaluation of branching programs, or hidden branching programs. Okay, so these protocols have led to many interesting applications, like lattice-based ring and group signatures, building blocks for anonymous credentials, group encryption, adaptive oblivious transfer with access control, and so on.

Okay, so the common strategy for both classes of protocols is to express the underlying relation as an equation of the form: public matrix multiplied by secret vector equals public vector, modulo some q, where the coefficients of the secret vector have small norm; in the case of Stern-like protocols, they may additionally satisfy some specific constraints. Okay, so now let me move to lattice-based PRFs.
So these PRFs are related to the Learning With Rounding (LWR) problem, introduced by Banerjee, Peikert and Rosen at Eurocrypt 2012. This is a deterministic variant of LWE. So first we have to define the rounding function underlying these PRFs. Let m be some integer and let q and p be two moduli, where q is larger than p. We identify Z_q with {0, ..., q-1}, and similarly for Z_p. Now we define a rounding function that maps Z_q^m to Z_p^m, where a vector x is mapped to the vector y computed as follows: we multiply x by p/q, round down, and then reduce modulo p.

Okay, so the Learning With Rounding assumption says that the rounding of A^T times s is pseudorandom, if the matrix A is uniformly random over Z_q^{n x m} and s is uniformly random over Z_q^n. Okay, so there are several works showing that if the LWE assumption holds, then LWR also holds for certain parameter settings. And the assumption directly gives a PRG that maps a vector s to the rounding of A^T times s. Okay, so if the PRG is length-doubling, then we can build a PRF via the famous GGM construction. Besides that, it also led to various direct constructions of PRFs, like one from the same paper based on synthesizers, one by Boneh, Lewi, Montgomery and Raghunathan at Crypto 2013, by Banerjee and Peikert at Crypto 2014, by Döttling and Schröder at Crypto 2015, and so on.

Okay, so now let us discuss the non-triviality of designing zero-knowledge proofs for lattice-based PRFs. First, consider the PRF by Boneh et al. The function is given by binary public matrices B_0 and B_1 of dimension m times m and a key k in Z_q^m, and it maps a binary input string x of length L to a vector y in Z_p^m computed as follows: we map the bits of x to the product of matrices B_{x_L} times B_{x_{L-1}} and so on down to B_{x_1}, then multiply this product by k, and then apply the rounding function.
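To make these definitions concrete, here is a minimal numeric sketch, with toy parameters that are far too small for any real security, of the rounding function, the LWR-style PRG, and the BLMR PRF evaluation in the matrix-product order described above; all concrete values are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
q, p, n, m, L = 1024, 8, 16, 32, 10   # toy parameters, nothing near secure sizes

def round_p(x):
    """Rounding function: map x in Z_q^m to floor((p/q) * x) mod p."""
    return (x * p // q) % p

# LWR-style PRG: s -> round(A^T s mod q)
A = rng.integers(0, q, (n, m))
s = rng.integers(0, q, n)
prg_out = round_p(A.T @ s % q)

# BLMR PRF: F_k(x) = round(B_{x_L} * ... * B_{x_1} * k mod q)
B = [rng.integers(0, 2, (m, m)) for _ in range(2)]   # public binary matrices B_0, B_1
k = rng.integers(0, q, m)                            # secret key in Z_q^m
x = rng.integers(0, 2, L)                            # binary input of length L

def blmr_prf(key, bits):
    v = key.copy()
    for b in bits:           # B_{x_1} is applied first, B_{x_L} last
        v = B[b] @ v % q
    return round_p(v)

y = blmr_prf(k, x)
assert y.shape == (m,) and int(y.max()) < p
```

Note that the whole evaluation is deterministic once A, B_0, B_1, k and x are fixed, which is exactly the PRF property being proved about in zero-knowledge.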
Okay, so we observe that proving knowledge of the secret key k and the secret input x using the techniques that I discussed is quite problematic. Recall that the common strategy is to reduce the relation to the form: public matrix times secret vector equals public vector, modulo something. But here the matrix B_{x_i} is not public: it is hidden within the set {B_0, B_1}, and we don't know which one it is. The secret key k is also not small, and it is multiplied by a product of hidden matrices. Moreover, we don't even have a modular linear equation, and the rounding operation is not obviously compatible with the techniques we know.

Okay, so let me now state our results and our techniques. We introduce zero-knowledge arguments for correct evaluation of lattice-based PRFs, meaning proofs that F_k(x) = y, where the key k and the input x are secret, and possibly committed or certified by some trusted third party, and where the output y may be some given vector; this can all be extended to the case where y is hidden and satisfies some additional constraints. Specifically, we obtain Stern-like protocols with communication cost soft-O of lambda times L, where lambda is the security parameter and L is the input length of the PRF, for two PRFs. The first one is the PRF of BLMR. This PRF has an additional property, key homomorphism, though we don't actually need this property in the applications. And using similar techniques, we can obtain zero-knowledge arguments for the GGM-based PRF obtained from the LWR-based PRG via the GGM methodology.

Okay, so to demonstrate the usefulness of these protocols, we give an application, which is the first compact e-cash system from lattices. This can be considered as the lattice analogue of the first compact e-cash system by Camenisch, Hohenberger and Lysyanskaya in 2005. Okay, so I will discuss the notion of compact e-cash later.
And in the process, we come up with a generalized Stern-like protocol for a fairly wide class of relations, which may be of independent interest.

Okay, so let me discuss our techniques. To handle the relations underlying the PRFs, we basically first have to deal with the rounding relation. Suppose that we want to prove that we know some secret vector x that rounds to a given y; this x may also satisfy some other properties. Okay, so what can we do? This relation is inherently non-linear, but we observe that one knows a vector x in [0, q-1]^m that rounds to y if and only if one knows x and z in the same range such that p times x equals q times y plus z, modulo p·q. Okay, so basically here we recover the vector z that was eliminated during the rounding operation. This simple observation is actually quite useful: it transforms the rounding relation into a modular linear relation with secret vectors x and z which are small relative to the modulus p·q. Okay, so that is exactly the form that we are familiar with.

Okay, so we can now apply techniques like the decomposition-extension technique by Ling et al. For this, we let m-bar be m times the logarithm of q, and H be a matrix that allows us to decompose vectors in [0, q-1]^m into binary vectors of length m-bar. Okay, so now we have x equal to H times the decomposed vector x-tilde, and the same for z and z-tilde. Okay, so that is the decomposition step; now we do the extension step. The point is that we started with vectors that are not small, whose norms can be as large as q minus 1, and now we have binary vectors. We want to use Stern-like techniques, so we have to do some manipulation on x-tilde and z-tilde so that they have a fixed Hamming weight, which lets us apply random permutations in the style of Stern.
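The equivalence at the heart of this step can be sanity-checked numerically; a small sketch with toy moduli and illustrative values, assuming the identification of Z_q with {0, ..., q-1}:

```python
import numpy as np

rng = np.random.default_rng(2)
q, p, m = 1024, 8, 6          # toy moduli, chosen only for illustration

x = rng.integers(0, q, m)     # secret vector in [0, q-1]^m
y = (x * p // q) % p          # its rounding

# Recover the vector z eliminated by the floor: p*x = q*y + z over the integers,
# hence also modulo p*q, with z in [0, q-1]^m.
z = p * x - q * y
assert np.all((0 <= z) & (z < q))
assert np.array_equal((p * x) % (p * q), (q * y + z) % (p * q))
```

So the prover who knows x can always compute the companion witness z, and the verifier only ever sees the linear relation modulo p·q.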
So for this, we define the set B_{m-bar} to be the set of all binary vectors of length 2·m-bar with Hamming weight exactly m-bar. Then we can append suitable coordinates to x-tilde and z-tilde to obtain x-hat and z-hat that are elements of this set. Okay, and at the same time, we extend the public matrices H and p·H with zero columns, so that we still preserve the equation. So to summarize the process: the problem of proving that x rounds to y has been reduced to the problem of proving knowledge of x-hat and z-hat belonging to the set B_{m-bar} such that equation (2) here holds.

Okay, so now we can easily use Stern's permuting technique for proving knowledge of these derived vectors: we pick a random permutation of 2·m-bar elements, send the permuted x-hat to the verifier, and observe that if the permuted vector belongs to the set B_{m-bar}, then the verifier should be convinced that the original vector also belongs to this set, which implies that the original vector x rounds to y.

Okay, so after handling the rounding relation, let us discuss how to handle the PRF relation. If you look at the equation for the PRF output, basically we have a rounding function at the final stage, but before that we have a computation that involves the secret bits x_i of the input and the secret key k. And the difficulty here, as I discussed earlier, is that we don't have a public matrix. Okay, so to handle this, we form a sequence of secret vectors s_0 to s_L such that s_0 equals the key, and for each i from 1 to L, s_i equals B_{x_i} times s_{i-1}; finally, y is the rounding of s_L. Okay, so now we can transform the equation s_i = B_{x_i} times s_{i-1} into an equation with a public matrix: s_i equals the concatenation [B_0 | B_1] multiplied by a vector t_{i-1}, where t_{i-1} is the concatenation of two blocks, one of which is zero and the other of which is exactly s_{i-1}, their positions determined by the bit x_i.
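The decomposition-extension step for a single vector can be sketched as follows; this toy illustration assumes the standard powers-of-two decomposition matrix for H, which is one common instantiation and not necessarily the paper's exact notation:

```python
import numpy as np

rng = np.random.default_rng(3)
q, m = 16, 4
logq = int(np.ceil(np.log2(q)))                # number of bits per coordinate
mbar = m * logq

# H decomposes vectors in [0, q-1]^m into binary vectors of length mbar
g = 2 ** np.arange(logq)                       # gadget row (1, 2, 4, 8)
H = np.kron(np.eye(m, dtype=int), g)           # m x mbar decomposition matrix

x = rng.integers(0, q, m)
x_tilde = ((x[:, None] >> np.arange(logq)) & 1).reshape(-1)   # binary decomposition
assert np.array_equal(H @ x_tilde, x)

# Extension: pad x_tilde to length 2*mbar with exactly Hamming weight mbar,
# and extend H with zero columns so the equation is preserved.
w = int(x_tilde.sum())
pad = np.concatenate([np.ones(mbar - w, dtype=int), np.zeros(w, dtype=int)])
x_hat = np.concatenate([x_tilde, pad])         # element of B_mbar
H_ext = np.hstack([H, np.zeros((m, mbar), dtype=int)])
assert x_hat.sum() == mbar and np.array_equal(H_ext @ x_hat, x)

# Stern's trick: any permutation of the 2*mbar coordinates preserves
# membership in B_mbar, so the permuted vector leaks nothing beyond it.
phi = rng.permutation(2 * mbar)
assert x_hat[phi].sum() == mbar
```

The key design point is that membership in B_{m-bar} is invariant under coordinate permutations, which is what lets the verifier check the permuted vector without learning the original one.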
So we can use a similar decomposition technique on the secret vectors s_i and t_i, and we can express the whole chain by just one equation modulo q: M_1 multiplied by w_1 equals u_1, where the secret w_1 fits some pattern. So recall that for the final step we have the equation y equals the rounding of s_L, which also yields another equation, modulo p·q, whose witness w_2 is correlated with w_1, in the sense that the bits of s_L actually appear in both equations. Furthermore, some of our applications require that the key k be committed, which yields yet another equation with a different modulus. But the known Stern-like protocols handle relations with respect to a single modulus. Actually, if we have many moduli and we run separate protocols for each modulus, that is not sufficient, because we cannot demonstrate the correlations among the secret witnesses.

Okay, so to address this, we put forward a generalized Stern-like protocol that handles many moduli, and where the secret witnesses may simultaneously appear across equations. Okay, so let VALID be a subset of {-1, 0, 1}^D, where D is the sum of the block dimensions d_i just introduced. And let S be a finite set such that each element phi of this set can be associated with a permutation Gamma_phi of D elements, such that two conditions hold. The first condition: a vector w belongs to the set VALID if and only if the permuted vector Gamma_phi(w) also belongs to VALID. And the second condition: if w belongs to VALID and phi is uniform in S, then Gamma_phi(w) is uniform in VALID. So with this setting, we can obtain a generalized Stern-like protocol for the relation where, given matrices M_i and vectors u_i, we can prove that we have a vector w belonging to the set VALID, which is the concatenation of blocks w_i, such that for each i the equation M_i times w_i equals u_i holds modulo its own modulus. So this Stern-like protocol, as usual, has perfect completeness and a soundness error of two-thirds.
Okay, since we are short on time, let me go quickly. Okay, so a quick overview of e-cash. In such a system we have three parties, bank, users, and merchants, where a user can withdraw coins from the bank and spend them at merchants. In the online model, the merchant has to contact the bank for each transaction; but in the much more preferable offline model, the merchant can accept a payment without interacting with the bank, and can deposit the coins at a later point. So such a system should satisfy several requirements: anonymity means that the bank cannot learn anything about the owner of a coin; a user cannot spend more coins than he withdrew, and if he violates this, there is a mechanism to identify double-spenders; and finally, an honest user cannot be falsely accused of being a double-spender.

Okay, so compact e-cash is a system where the user can withdraw a wallet of up to 2^L coins, in a way such that the complexity of all of the protocols is proportional to L, not to 2^L. So Camenisch, Hohenberger and Lysyanskaya showed how to build such a system from three building blocks: a signature scheme with efficient protocols, which allows the signer to sign committed messages and allows the user to prove knowledge of a valid message-signature pair; a PRF supporting zero-knowledge proofs, as discussed; and a double-spender identification mechanism.

Okay, so to instantiate this modular construction from lattice assumptions, we employ a signature scheme with efficient protocols from a work of ours from last year, which allows a user with public key PK_U to run a two-party protocol with the bank and obtain a wallet consisting of a tuple (e_U, k, t), where e_U is the secret key corresponding to the public key PK_U and k and t are two secret PRF keys, together with a counter J, initialized to 0, that indicates how many coins have been spent.
Okay, so a coin will be a tuple (R, y_S, y_T, pi), where R is transaction-specific information. y_S is the serial number: the PRF evaluation under the first key k on input the counter J. y_T is formed as PK_U plus H(R) multiplied by the PRF evaluation on the same input under the other key t; this is called the security tag, where H is a full-rank difference function. And pi is a non-interactive argument that y_S and y_T are well-formed and correctly computed. Okay, so to produce it, we can inspect all of the relations and reduce them to an instance of the generalized protocol that I presented; we then obtain an interactive argument for the full statement, repeat it many times to achieve negligible soundness error, and then make it non-interactive.

Okay, so now if a user double-spends, then there are two coins with the same serial number, but with different transaction information and different security tags. In this case we can compute the difference of the tags and, based on the full-rank difference property of the mapping H, recover PRF_t(J), and then recover the public key PK_U of the cheating user.

Okay, so let me conclude the talk. We obtain zero-knowledge arguments for two lattice-based PRFs with communication cost linear in lambda and linear in L, but we could not do the same for other PRFs due to their more complicated structures, so this is one of the open problems. We also obtain a generalized Stern-like protocol that may be of independent interest, and we have an application: a lattice-based compact e-cash system. But it is, in fact, coin-inefficient, because a coin has size soft-O of lambda times L plus lambda squared, with a large hidden factor within the soft-O notation; so we consider the problem of designing more efficient e-cash systems from lattices an interesting open problem. Okay, so thank you for your time and your attention.

Okay, thanks. Thank you all.
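The double-spender identification arithmetic described in the talk can be sketched numerically. This toy replaces the actual full-rank difference encoding with the hypothetical stand-in H(r) = r·I over a small prime modulus P, which preserves the one property used here, namely that H(r1) - H(r2) is invertible whenever r1 ≠ r2 mod P; all names, moduli and dimensions are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(4)
P = 97                      # toy prime modulus; real schemes use lattice moduli
m = 5

def H(r):
    """Toy full-rank-difference stand-in: H(r) = r * I mod P, so that
    H(r1) - H(r2) = (r1 - r2) * I is invertible for r1 != r2 mod P."""
    return (r % P) * np.eye(m, dtype=int) % P

pk_u = rng.integers(0, P, m)          # user's public key (toy)
prf_tJ = rng.integers(0, P, m)        # PRF_t(J): same key t and counter J in both spends

# Two spends of the same coin with different transaction info R1 != R2
R1, R2 = 13, 42
yT1 = (pk_u + H(R1) @ prf_tJ) % P     # security tag of first spend
yT2 = (pk_u + H(R2) @ prf_tJ) % P     # security tag of second spend

# Identification: the tag difference cancels pk_u, and inverting (R1 - R2)
# recovers PRF_t(J), from which pk_u follows.
inv = pow(int(R1 - R2) % P, -1, P)
recovered_prf = (inv * (yT1 - yT2)) % P
recovered_pk = (yT1 - H(R1) @ recovered_prf) % P
assert np.array_equal(recovered_pk, pk_u)
```

A single spend reveals nothing about pk_u because H(R) @ PRF_t(J) acts as a one-time pad; only the reuse of the same (t, J) pair across two different R values makes the cancellation possible.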