Thanks for the introduction. I will be talking about non-interactive zero-knowledge for NP from Learning With Errors. This is joint work with Chris Peikert.

We start the talk by reviewing the definition of zero-knowledge protocols. In 1985, Goldwasser, Micali, and Rackoff introduced zero-knowledge protocols. These are interactive protocols between a prover and a verifier, where the prover wants to convince the verifier that a certain statement is true, namely that a certain string x is in a certain language L. We need these protocols to have two properties. One is soundness, which means that if the statement is wrong, that is, if x is not in L, then a cheating prover should not be able to fool the verifier. The second requirement is zero knowledge, which says that after the protocol ends, the verifier learns nothing beyond the fact that x is in L.

Soon after zero-knowledge protocols were defined, in 1986 Goldreich, Micali, and Wigderson showed that if one-way functions exist, then every NP language has a zero-knowledge proof system. In 1988, Blum, De Santis, Micali, and Persiano considered the notion of non-interactive zero-knowledge. Non-interactive zero-knowledge, or NIZK for short, is a zero-knowledge system without any interaction: the protocol just consists of the prover sending a single message to the verifier, and both the prover and the verifier have access to a CRS, where CRS stands for either common reference string or common random string. In the common reference string model, prior to the start of the protocol there is a trusted setup phase which produces this CRS. In the common random string model, however, the CRS can be any random string. We prefer the common random string model because we often have publicly available sources of randomness, such as lottery numbers.
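In standard notation (this formalization is mine, following the definitions just described in the talk), the two requirements are:

```latex
% Soundness: on a false statement, no cheating prover P* convinces the
% verifier V except with negligible probability.
x \notin L \;\Longrightarrow\;
\Pr\bigl[\langle P^{*}, V\rangle(x) \text{ accepts}\bigr] \le \mathrm{negl}(\lambda)
\quad \text{for every } P^{*}.

% Zero knowledge: the verifier's view of the interaction can be reproduced
% by a simulator that never sees the witness.
\forall V^{*} \;\exists\, \text{PPT simulator } S:\quad
\mathrm{View}_{V^{*}}\bigl[\langle P, V^{*}\rangle(x)\bigr] \approx S(x)
\quad \text{for all } x \in L.
```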
So in the common random string model, we can avoid the trusted setup phase.

NIZK systems have found plenty of applications. For instance, they can be used to achieve CCA security; they can be used to build advanced signatures such as group and ring signatures; and most recently they have been used in the context of cryptocurrencies. We have various constructions of NIZK systems for all NP languages based on different assumptions; I list these assumptions here. The first assumption is trapdoor permutations; this is by the same BDMP work and the Feige-Lapidot-Shamir work. The second assumption is the hardness of certain problems in pairing-friendly groups; this is by Groth, Ostrovsky, and Sahai. The third assumption is indistinguishability obfuscation; this is by Sahai and Waters. The fourth assumption is optimal hardness of an ad hoc variant of search LWE; this is by a recent work of Canetti, Chen, Holmgren, Lombardi, Rothblum, Rothblum, and Wichs. And the last assumption, which is the most related to this work, is circular-secure fully homomorphic encryption; this is by the same CCH+19 work.

If you look at this list of assumptions, you see that one category of assumptions is missing. In fact, this list does not contain any construction from standard lattice assumptions, and in particular from the popular LWE assumption, which was introduced by Regev in 2005. Building NIZK systems for all of NP from LWE has been a long-standing open problem, to the point that there was even a bounty for constructing it. Through the years there have been many attempts and partial results towards this goal, and in this work we finally resolve this problem: we show that assuming LWE, every NP language has a NIZK system.
So this is the main result of this work: we build the first NIZK system for all of NP from standard lattice assumptions. It is in fact the first NIZK system for all of NP from any assumption with a worst-case hardness guarantee: it is based on the worst-case hardness of approximating certain short vector problems on lattices to within polynomial factors.

Just like GOS and CCH+19, our NIZK system can be instantiated in two modes. In the first mode, which is in the common random string model, we get a statistical zero-knowledge argument, which means that it is sound against computationally bounded provers and zero-knowledge against computationally unbounded verifiers. In the second mode, which is in the common reference string model, it is a computational zero-knowledge proof, which means that it is sound against unbounded provers and zero-knowledge against bounded verifiers. Furthermore, both of these instantiations have a compact CRS, meaning that the size of the CRS is just a polynomial in the security parameter: it is independent of the size of the statement, of the witness, and of the circuit verifying the NP relation.

Now we will discuss the prior works which heavily influenced our paper. We get our result by building on top of a recent line of work which soundly instantiates the Fiat-Shamir transform in the standard model.
The Fiat-Shamir transform takes a public-coin interactive protocol, for instance this zero-knowledge protocol where the prover first sends the message alpha to the verifier, the verifier sends a challenge beta, which is a random string, to the prover, and the prover sends the final message gamma to the verifier. The Fiat-Shamir transform converts this interactive protocol into a non-interactive one in the following way. It puts in the CRS a description of a hash function, and now the single message that the prover sends to the verifier has two components. The first component is the same alpha that the prover sends in the interactive version. The prover can implicitly compute the challenge by applying the hash function to alpha, and then it completes the proof by producing the appropriate gamma for this challenge and alpha. Upon receiving this proof, the verifier computes the challenge by applying the hash function to alpha, and then proceeds to verify the proof just as before.

For this transformed protocol, showing zero knowledge is easy, but the main challenge is showing that it preserves soundness. This is because the prover has offline access to the hash function: it can try different alphas to cook up some alpha which hashes in some fortunate way that allows it to successfully complete its proof. This is the reason that showing soundness is a challenge. CCH+19 overcomes this challenge and builds a NIZK system for all of NP by using a hash family with a special property called correlation intractability for circuits. Correlation intractability is a property which was first defined by Canetti, Goldreich, and Halevi for slightly different purposes; CCH+19 use correlation intractability for circuits to get their result.
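The transform just described can be sketched in a few lines. This is a toy illustration, not the construction from the talk: `first_message`, `respond`, and `check` are hypothetical stand-ins for the underlying protocol's algorithms, and SHA-256 stands in for the hash function whose description sits in the CRS.

```python
import hashlib

def fs_prove(crs_key: bytes, statement: bytes, first_message, respond):
    """Collapse a 3-message public-coin protocol to a single prover message."""
    alpha = first_message(statement)
    # The prover derives the challenge itself by hashing the first message,
    # instead of waiting for the verifier to send a random beta.
    beta = hashlib.sha256(crs_key + statement + alpha).digest()
    gamma = respond(alpha, beta)
    return alpha, gamma  # the one message sent to the verifier

def fs_verify(crs_key: bytes, statement: bytes, proof, check) -> bool:
    alpha, gamma = proof
    # The verifier recomputes the same challenge from alpha...
    beta = hashlib.sha256(crs_key + statement + alpha).digest()
    # ...and runs the original interactive verification on (alpha, beta, gamma).
    return check(statement, alpha, beta, gamma)
```

Note that a proof produced under one CRS key does not verify under another, since the recomputed challenge changes.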
We see the definition of correlation intractability here. We say that a hash family is correlation intractable for a circuit class if, for any circuit C in that class, given a correctly sampled hash key, no polynomial-time adversary can find an input x such that Hash(x) = C(x). This is the definition of correlation intractability for circuits, and in the next slide we will show how CCH+19 use it to get their result.

Okay, so CCH+19 instantiates the Fiat-Shamir transform with a correlation-intractable (CI) hash function for circuits. In particular, it puts the key of a CI hash function in the CRS. Now suppose we have a cheating prover. In order to cheat, that is, to prove a statement for an x which is not in the language L, this cheating prover must find an alpha such that the challenge, the hash of alpha, is a bad challenge. A bad challenge is one of the rare challenges which allow the prover to complete his proof.
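Written out in standard notation, the correlation intractability game from the slide is:

```latex
% Correlation intractability for a circuit class C: for every circuit C in the
% class, given an honestly sampled hash key k, no PPT adversary A can find an
% input where the hash agrees with C.
\forall C \in \mathcal{C}:\quad
\Pr_{k \leftarrow \mathsf{Gen}(1^\lambda)}
\bigl[\, x \leftarrow \mathcal{A}(k) \;:\; \mathsf{Hash}_k(x) = C(x) \,\bigr]
\;\le\; \mathrm{negl}(\lambda).
```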
It is rare because the underlying protocol has soundness, and therefore only a negligible fraction of the challenges let the prover complete a proof for a wrong statement. So this is the task of the cheating prover.

CCH+19 also showed that if public-key encryption exists, then we can build zero-knowledge systems for all of NP where the bad challenge is a function of the first message alpha, and this function can be represented by a polynomial-size circuit. Now combine these two requirements: the prover, in order to cheat, has to find an alpha such that the hash of alpha is a bad challenge, but this bad challenge is C(alpha) for a circuit C. So the cheating prover, in order to cheat, has to break the correlation intractability of the hash function.

From this description, we can say that given public-key encryption and a CI hash function for circuits, we can get NIZK for all of NP. For the first component, public-key encryption, we have known for a long time how to build it from LWE. But for the second component, CI hash functions, CCH+19 build their CI hash function for circuits from circular-secure FHE. Unfortunately, although we have plain FHE from LWE, we do not know how to build circular-secure FHE from standard lattice assumptions, and this is the main reason that CCH+19 cannot rely on LWE or standard lattice assumptions.

Our main contribution in this work is that we build a correlation-intractable hash function for all circuits from standard lattice assumptions, from either SIS or LWE. In this talk, we will focus on the SIS construction. Recall that SIS stands for short integer solution, which is a computational problem defined by Ajtai in 1996. It predates LWE, and it is potentially weaker than LWE: LWE implies SIS.
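For reference, the SIS problem just mentioned asks for a short nonzero integer vector in the kernel of a random matrix:

```latex
% SIS_{n,m,q,beta} (Ajtai 1996): given a uniformly random matrix A, find a
% nonzero integer vector z of small norm with A z = 0 mod q.
\text{Given } A \leftarrow \mathbb{Z}_q^{n \times m} \text{ uniform, find }
z \in \mathbb{Z}^m \setminus \{0\}
\text{ such that } A z = 0 \pmod{q} \text{ and } \|z\| \le \beta .
```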
Our construction makes heavy use of fully homomorphic commitments, and in particular the fully homomorphic commitments of Gorbunov, Vaikuntanathan, and Wichs, which are based on the fully homomorphic encryption scheme of Gentry, Sahai, and Waters. These fully homomorphic commitments are commitments with homomorphic capabilities. We use two algorithms, Commit and Eval, with the following property: if you have a commitment to a circuit C and a string x, then you can use the Eval algorithm to homomorphically compute a commitment to C(x). This is very similar to fully homomorphic encryption, but in the context of commitments.

Now we will describe our construction. In our construction, the hash key is a commitment to a dummy circuit D. This D is a dummy circuit which maps big-L-bit strings to small-l-bit outputs. To evaluate the hash function on an input x, first this hash key is homomorphically evaluated on the input x, so we get c, which is a commitment to D(x). Then we convert this commitment to D(x) into c-bar, which is something that we call an inert commitment. We name it an inert commitment because it is still a commitment to D(x), but it no longer has any homomorphic property; hence the name inert. One more thing to notice here is that this c-bar is an l-bit string and D(x) is also an l-bit string, so c-bar and D(x) are the same size: inert commitments are the same size as the thing they are committing to. Now we have to see what these inert commitments are, how this Inertify transformation works, and how these things help us in getting correlation intractability.
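In symbols, the construction described so far looks like this (Com, Eval, and Inertify are the algorithm names used in this talk):

```latex
% Hash key: a homomorphic commitment to the dummy circuit D : {0,1}^L -> {0,1}^l.
k \;=\; \mathsf{Com}(D)

% Evaluation: homomorphically derive a commitment to D(x), then make it inert.
c \;=\; \mathsf{Eval}(k, x) \quad (\text{a commitment to } D(x)),
\qquad
\mathsf{Hash}_k(x) \;=\; \bar{c} \;=\; \mathsf{Inertify}(c),
\qquad |\bar{c}| \;=\; |D(x)| \;=\; l .
```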
Before that, it is very useful to compare our construction with CCH+19. Our construction is very similar to theirs; CCH+19, however, instantiates the commitment scheme in an extractable mode, so their commitment scheme has encryption and decryption algorithms and keys. The hash key in their construction also has an additional component: a commitment to the complement of the decryption function. This additional component is the main reason that they need circular security, because they are committing to something that depends on the decryption key; hence the need for circular security. To evaluate the hash function, CCH+19 proceeds similarly, computing c, a commitment to D(x), but then uses this commitment to D(x) together with the commitment to the complement of the decryption function to homomorphically evaluate a commitment to the complement of the decryption of D(x), and this is the output of their hash function. They then show that their hash function is correlation intractable by describing a diagonalization argument.

Now back to our own construction. We need to show correlation intractability: we need to show that for any C, it is hard to find an x such that an inert commitment to D(x) equals C(x), where D is the dummy circuit. But remember that these commitment schemes are hiding, so a commitment to a dummy circuit is indistinguishable from a commitment to the circuit C itself. So we can replace D with C in that requirement, which means that now, for correlation intractability,
we need to show that for any C, it is hard to find an input x such that an inert commitment to C(x) equals C(x). To prove this requirement, we state the main property of our inert commitments. The main property says that if you multiply G, the Micciancio-Peikert gadget matrix, by the inert commitment to a string v, then the result is A, which is a uniformly random matrix in the public parameters of our commitment scheme, times the commitment coins r, which form a nonzero short vector, plus G v. So this is the main property of inert commitments.

To see how this gives correlation intractability, assume that we have an inert commitment of v that equals v itself. Then we can multiply both sides by G; as a result we get A r + G v = G v. We cancel G v from both sides, and this implies that the commitment coins r are a solution to the SIS problem. So assuming SIS, our construction is correlation intractable.

Now, of course, I have to describe how this Inertify algorithm works: how to transform a normal commitment into an inert commitment. If you are familiar with the GSW and GVW works, then you know that a normal commitment to a string v has the following format: it is A R, where R is a short matrix, plus v-transpose tensor G, where G is again the Micciancio-Peikert gadget matrix. I want to transform it into an inert commitment to v, which has the following form: G-inverse of (A r + G v), where r is again a short vector.
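The chain of equalities from this slide, written out explicitly:

```latex
% Main property: multiplying an inert commitment by the gadget matrix G gives
%     G \bar{c} = A r + G v,
% where A is the public matrix and r is a short nonzero vector (the coins).

% Suppose a cheating prover finds x whose inert commitment equals C(x) itself;
% set v = C(x), so \bar{c} = v. Multiplying both sides by G:
G v \;=\; A r + G v
\;\;\Longrightarrow\;\;
A r \;=\; 0 \pmod{q},

% so the short nonzero coin vector r is an SIS solution for A.
```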
To show how this transformation happens, I state the key equation, which is straightforward to verify. The equation is as follows: for any matrix M, if I multiply a commitment to v, that is, A R + v^T ⊗ G, by G-inverse of the vectorization of M, then the result is A r + M v, where this small vector r equals R times G-inverse of vec(M). It is short because R is short and the output of G-inverse is always short. Now in this equation, replace M with the gadget matrix G, and you get exactly what you need for the Inertify algorithm. So this is the construction.

I will finish the talk by stating a few open problems. The first open problem is constructing a non-interactive witness-indistinguishable system for NP from LWE. I should mention that recently there were three interesting and independent works which used our CI hash functions to build two-round witness-indistinguishable systems for NP; we ask whether we can improve this to a non-interactive witness-indistinguishable system for NP. That is the first question. The next question: can we build a statistically sound non-interactive zero-knowledge system from LWE in the common random string model? Right now, the statistically sound NIZK construction in our paper is in the common reference string model; we ask whether we can get the same result in the preferable common random string model. The third question: can we build multi-theorem statistical zero-knowledge NIZK systems for NP from LWE?
Right now, the construction in our paper is only single-theorem. If you apply the generic OR trick, then what you get is a multi-theorem system, but only a computationally zero-knowledge multi-theorem NIZK system; so we ask whether we can get a multi-theorem statistically zero-knowledge system. The last question is: can we enhance the efficiency of our CI hash functions? That's it; thanks for your attention.

[Session chair] We have time for a question or two. If you have a question, please come to the microphone. Okay, so I will ask a question. This zero-knowledge proof system is based on the LWE problem, and commitments can be built from simpler assumptions like SIS. Is there any way to get fully homomorphic commitments, or some other related primitive, directly from SIS without resorting to LWE?

[Speaker] Actually, we do get the fully homomorphic commitments from SIS. The reason that we need LWE is for the underlying zero-knowledge protocol; that is why we need LWE.

[Session chair] Let's thank the speaker again.