Hi everyone. Today I'm going to talk about our work, where we construct new non-interactive zero knowledge (NIZK) for NP based on trapdoor hash and the LPN assumption. Perhaps the bottom line and main consequence of this result is the first NIZK for NP from LPN and DDH, two standard assumptions that were not previously known to imply it. This is joint work with Zvika and Venkata, and let's begin.

I'm going to start by defining NIZK. A non-interactive zero-knowledge proof for an NP language is simply a proof system where both the prover and the verifier have access to an honestly generated CRS. The protocol is non-interactive, so it consists of a single message that the prover sends to the verifier. We ask that the verifier is efficient, of course, but we also want the prover to be efficient given an NP witness W for the statement. And we consider the standard completeness, soundness, and zero-knowledge properties. NIZKs are interesting in their own right, but they also have lots of applications in crypto, including CCA security, signatures, and many applications in the realm of protocols.

Since the ultimate goal is constructing NIZK from standard assumptions, let's quickly go through the known approaches from the literature that do that. The textbook approach is the hidden-bits model, and it can be instantiated using either trapdoor permutations, or primitives such as VRFs or verifiable PRGs. But this approach is somewhat limited, since the only known constructions of these building blocks are based either on factoring-related assumptions or on assumptions over bilinear groups. Another way to get NIZK was discovered by Groth, Ostrovsky, and Sahai; it works in the setting where we have a bilinear group and we assume some Diffie-Hellman-style assumptions over it. But this approach is very non-generic: it uses specific properties of such groups, so it doesn't seem that NIZK from new standard assumptions will come from this direction. The last approach I'm going to discuss is one where we start with an interactive zero-knowledge protocol and use the Fiat-Shamir transform to turn it into a non-interactive protocol. In order to instantiate the Fiat-Shamir transform in a provably sound manner, we use a primitive called a correlation-intractable hash, and a long and fruitful line of work eventually led to a construction of correlation-intractable hash under the LWE assumption.

So, to sum things up, we have NIZK under factoring-related assumptions, assumptions over bilinear groups, or LWE. In our paper, we extend the set of standard assumptions from which we have NIZK: we follow up on this last approach and instantiate it assuming a trapdoor hash, which in particular can be constructed from DDH, jointly with the LPN assumption.

So how do we actually get NIZK from Fiat-Shamir? We start with a base protocol that consists of three messages, and we assume it is public coin. At the beginning, the prover sends his first message A, then the verifier replies with a random challenge E, then the prover sends his third message, and the verifier either accepts or rejects. The Fiat-Shamir transform using a hash function H looks as follows. We add a randomly sampled hash key to the CRS, and now the prover computes his first message A as before, but in order to simulate the verifier's challenge he uses the hash function: the verifier's challenge is computed as the hash of A under the key found in the CRS.
Once he has computed the first message A and the challenge E, he can continue and compute the third message Z, and then he sends the entire transcript to the verifier at once (a small sketch of this transform appears below). It's easy to see that this protocol is non-interactive, and also that if we start with a public-coin protocol, then the non-interactive protocol is complete. We also know that if the base protocol is honest-verifier zero knowledge, then the Fiat-Shamir transform preserves zero knowledge. So the only missing piece in the puzzle is soundness. It is not clear that, even if the base protocol is sound, the non-interactive protocol is also sound. The reason is that the prover has some control over the choice of the verifier's challenge; in particular, he can choose a first message A such that the verifier's challenge H(A) falls inside the soundness error of the protocol.

This leads us to consider a specific kind of base protocol, which we call sigma protocols. Sigma protocols are three-message, public-coin, honest-verifier zero-knowledge protocols that also enjoy a unique bad-challenge property. This property basically says that for any first message A, there is at most one value of E that may possibly allow a cheating prover to cheat; we call this value the bad challenge. More formally, fixing a CRS, a statement X that is not in the language, and a first message A, there exists at most one bad challenge E for which there exists a transcript (A, E, Z) that the verifier accepts. This property allows us to define the bad-challenge function, which is defined using the CRS and X, gets as input a first message A, and outputs the corresponding bad challenge E. Now we can argue soundness of the non-interactive protocol as follows: if it is hard for a cheating prover to find a first message A such that H(A) is the bad challenge, then it is computationally hard for the prover to cheat, because the only way he can cheat is by choosing a first message A such that H(A) is the bad challenge corresponding to A. A hash function that satisfies this hardness requirement is called a correlation-intractable hash for the bad-challenge function F.

Okay, so now we know how to construct NIZK using Fiat-Shamir, and all we need are two ingredients: first, a sigma protocol to start with, and second, a hash function that is correlation intractable for the class of bad-challenge functions corresponding to that sigma protocol. When we try to instantiate this recipe, the guiding rule to keep in mind is the following: the simpler the bad-challenge function, the easier the task of constructing correlation intractability for such a class of functions. So our goal here is twofold. On the one hand, we want to describe the bad-challenge function of some sigma protocol as simply as possible; on the other hand, we need to construct correlation intractability from standard assumptions. Now we can start asking questions. For example, it is not clear how simple the bad-challenge function can be, and it is not even clear whether it is efficiently computable. And on the other hand, we can ask about constructions of correlation intractability under standard assumptions.

The first work that tries to prove the soundness of Fiat-Shamir under cryptographic assumptions is the work by Canetti et al., where they observe that the bad-challenge function for a sigma protocol for NP is efficiently recognizable.
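To make the Fiat-Shamir recipe above concrete, here is a minimal Python sketch of the transform applied to a generic three-message public-coin protocol. The `sigma` object (with `first_message`, `third_message`, `verify`, `challenge_len`) and the use of SHA-256 as the keyed hash are hypothetical placeholders, not the actual protocol or hash family from the paper.

```python
import hashlib

def fs_challenge(key: bytes, a: bytes, challenge_len: int) -> bytes:
    """Derive the verifier's challenge as a keyed hash of the first message.
    SHA-256 stands in here; provable soundness needs a correlation-intractable hash."""
    return hashlib.sha256(key + a).digest()[:challenge_len]

def fs_prove(crs_key: bytes, statement, witness, sigma):
    """Fiat-Shamir prover: simulate the verifier's challenge as H_k(A)."""
    a, state = sigma.first_message(statement, witness)   # prover's first message A
    e = fs_challenge(crs_key, a, sigma.challenge_len)    # challenge E = H_k(A), no interaction
    z = sigma.third_message(state, e)                    # prover's response Z
    return (a, e, z)                                     # send the whole transcript at once

def fs_verify(crs_key: bytes, statement, proof, sigma) -> bool:
    """Fiat-Shamir verifier: re-derive the challenge and run the base verifier."""
    a, e, z = proof
    if e != fs_challenge(crs_key, a, sigma.challenge_len):
        return False
    return sigma.verify(statement, a, e, z)
```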
So if I give you a bad challenge, you can efficiently verify that it is indeed the bad challenge. Consequently, the correlation-intractability task becomes constructing CI for all efficiently recognizable functions, and they are able to do that under sub-exponential iO together with another strong notion of obfuscation; these assumptions are clearly not standard. There have been many follow-up works, each constructing correlation intractability for a different class of functions that is sufficient for NIZK, but again, all under exotic assumptions, for example variants of iO or exponentially secure KDM encryption. Very recently, at last year's Crypto or even a bit before, this line of work led to constructions under standard assumptions. In particular, in this last work by Peikert and Shiehian, they use the observation that the bad-challenge function is efficiently computable for some sigma protocol. Therefore, all they need is correlation intractability for all efficiently computable functions, and they can get that under LWE.

Say our goal is to extend this paradigm and base it on other standard assumptions. There seems to be a barrier here. Notice that these last works consider the complexity of computing the bad-challenge function, and the bad-challenge function is in general an arbitrary polynomial-time computation. Therefore we need correlation intractability for any polynomial-time computation, and it seems that we need homomorphism for any polynomial-time computation in order to get that. In other words, we need full homomorphism, and as far as we know we can get that only under LWE.

What we do differently is to consider the complexity of approximating the bad challenge. Using such an approach, we can show that partial homomorphism suffices, and in particular homomorphism that we can get from other standard assumptions, such as DDH or QR. More specifically, we observe that the bad challenge of some sigma protocol for a complete language can be approximated by constant-degree polynomials, and therefore it suffices to get correlation intractability for all functions that can be approximated by constant-degree polynomials. This is exactly what we do, using trapdoor hash and the LPN assumption.

Good. So let's draw a more elaborate comparison between prior work and our work. Again, as I said, prior work uses the fact that there exists a sigma protocol for a complete language where the bad challenge is efficiently computable, and therefore correlation intractability for all efficiently computable functions suffices for NIZK. They get such correlation intractability using homomorphism for all efficient functions, in other words full homomorphism, under LWE. Homomorphism is a very vague notion, but you can think of FHE in this case, or of fully homomorphic commitments. At a high level, what they do is use homomorphism for a function class F, which in this case is any polynomial-time computation, to get correlation intractability for F, again any polynomial-time computation.

So again, our goal is to start from other standard assumptions, in particular assumptions that do not give us full homomorphism. Our starting point is a work, also from last year, by Döttling et al., where they show how, using many standard assumptions, in particular DDH, we can get some form of well-structured homomorphism.
Again, this is very vague, but the formal abstraction for it is called trapdoor hash. So we get some sort of well-structured homomorphism for all constant-degree functions, and we show that, using well-structured homomorphism for such a weak class of functions, we can get correlation intractability for a much stronger class of functions. So rather than starting with homomorphism for F, we start with homomorphism for a much weaker class C, but, using the structure of the underlying homomorphic primitive, we get correlation intractability for the strong class F. More specifically, we get correlation intractability for all relations that can be approximated by constant-degree polynomials. If we want to compare this notion to the usual correlation-intractability notion: in standard correlation intractability, we ask that it is hard to find an X such that H(X) is equal to F(X); what we want now is a hash function such that it is hard to find an X for which H(X) and F(X) are even close to each other in some metric. Okay? So this is a stronger notion, but we consider it for a much weaker class of functions, in particular constant-degree polynomials. We then show that, using CI for approximate relations, we can get correlation intractability for all functions that can be approximately computed by constant-degree polynomials. Finally, we show that under LPN this suffices for NIZK: more specifically, we show a sigma protocol for an NP language in which the bad-challenge function can be approximated using constant-degree polynomials, under the LPN assumption of course. And this is how we get our result.

Good. Following this outline, we can divide our results into three parts, and I'm going to go through them one after the other.

Okay? So let's first define what correlation intractability is. We say that a hash function H is correlation intractable for a function F if for any polynomial-time adversary it is hard to win the following game, which we call the correlation-intractability game. The game goes as follows: the challenger first samples a random hash key K and gives K to the adversary, and the adversary's goal is to find a correlation with respect to K, namely to find an X such that H(X) is equal to F(X). So H is correlation intractable for F if no PPT adversary can win this game, and we say that H is correlation intractable for a function class F if it is correlation intractable for every function in the class.

It will be useful at this point to recall a technique that was used in prior work to get CI, called somewhere statistical correlation intractability. We say the following: assume that for every function F, given F, we can sample a fake hash key K_F which is indistinguishable from a real hash key, and further that the CI game, when we sample a fake hash key rather than a real hash key, is statistically hard to win. So again, we consider the game which is identical to the one we've seen before, but now, rather than sampling a real hash key, we sample a fake hash key using the fake generation algorithm. And what does it mean for the game to be statistically hard to win? It basically means that the probability that there exists a correlation between H under the fake hash key and F is negligible. Assuming we have these two properties, then using a standard indistinguishability argument we can claim that H is actually correlation intractable for the entire function class. Good.
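Here is a minimal sketch of the correlation-intractability game just defined; `hash_family` (with `keygen` / `evaluate`), `f`, and `adversary` are hypothetical stand-ins, not names from the paper.

```python
def ci_game(hash_family, f, adversary, security_param: int = 128) -> bool:
    """One run of the CI game for a single function f: the challenger samples a real
    hash key, hands it to the adversary, and the adversary wins iff it outputs an x
    whose hash equals f(x)."""
    k = hash_family.keygen(security_param)      # challenger samples a random hash key
    x = adversary(k)                            # adversary sees k and chooses an input
    return hash_family.evaluate(k, x) == f(x)   # win <=> a correlation was found

# Somewhere-statistical CI, informally: if hash_family also has fake_keygen(f) whose
# keys are indistinguishable from real ones, and under a fake key no x at all satisfies
# evaluate(k_f, x) == f(x) except with negligible probability, then no PPT adversary
# can win the game above under a real key either.
```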
So let's start with a very simple case on the way to our final construction. Consider the case where our function class F contains only length-expanding functions: every function in the class takes a small domain and expands it into a large range. Notice that, in particular, the image of any such function F is a small subset of the range. Our candidate for correlation intractability in this case is a hash function that simply samples a random vector R as the hash key and outputs it regardless of the value of X. Notice that once we fix a hash key, our hash function is constant. So what happens now? For every function in the class, the probability that there exists a correlation is exactly the probability that R lies in the image of F. So if the image of F is sufficiently small, or more precisely if it is exponentially sparse in the range, then the probability that R is in the image, or equivalently that there exists a correlation, is negligible. Good. So this simple construction is actually correlation intractable for expanding functions.

Our idea is to enforce the same situation in the general case, even when our class contains shrinking functions. When the class contains shrinking functions, we cannot simply sample a random R and assume it will fall outside the image, because the image of the function can be the entire range. So what we do is define our hash function H using some other hash function, which we call H prime. Our construction looks as follows: we use a hash function H prime, and then H(X) is equal to H prime of X XOR a random R (sketched below). Again, R is part of the hash key and is sampled randomly. So what do we require from H prime in order for H to be correlation intractable? We require that for every function F in the class there exists a fake hash key which is indistinguishable from a real hash key, and we look at the correlation function: H prime of X under the fake hash key, XOR F(X). I'm calling this the correlation function because it captures the correlation between H prime under the fake hash key and F. We require that this correlation function has an exponentially sparse image, and in such a case we say that H prime with the fake hash key and F have a sparse correlation. Assuming we have a sparse correlation between the fake hash function and F, we can say that with overwhelming probability a random R is not in the image of this correlation function, and therefore with only negligible probability does there exist a correlation between H and F; hence H is somewhere statistically correlation intractable. Good.
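A minimal sketch of the construction just described: the outer hash XORs a random mask R from the key into the output of an inner hash H prime. The inner-hash interface (`h_prime_keygen`, `h_prime_eval`) and the byte-string representation are hypothetical placeholders.

```python
import secrets

def keygen(h_prime_keygen, out_len: int):
    """Real key for H: a key for the inner hash H' plus a fresh random mask r."""
    return (h_prime_keygen(), secrets.token_bytes(out_len))

def evaluate(key, x: bytes, h_prime_eval) -> bytes:
    """H(x) = H'(x) XOR r.  If, under a fake key for f, the map x -> H'(x) XOR f(x)
    has an exponentially sparse image, then a uniformly random r avoids that image,
    so with overwhelming probability no x correlates with f."""
    k_prime, r = key
    y = h_prime_eval(k_prime, x)
    return bytes(a ^ b for a, b in zip(y, r))

# Degenerate special case from the talk: if every f in the class is length-expanding,
# we can drop H' altogether and let H(x) = r; a random r already misses f's sparse image.
```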
Now that we have reduced our goal to constructing sparse correlations, I'm going to show you real quick how to do that using trapdoor hash. Like I already said, trapdoor hash allows us to construct a homomorphic encryption scheme with very strong structural properties. In a general homomorphic encryption scheme, we can take an input X and, given an encryption of a function F, homomorphically evaluate and obtain a ciphertext encrypting F(X). In the homomorphic encryption scheme we construct from trapdoor hash, the post-evaluation ciphertext can be divided into two parts: one part, which we call the rate-1 ciphertext and which I'm going to refer to as the encryption of F(X), and another part, which is a small hash of X, H(X). Decryption in trapdoor hash goes as follows: the first stage takes the hash of X, H(X), and produces a long vector E, and then one can recover F(X) simply by XORing E with the encryption of F(X). So the second stage of the decryption is public.

So how can we use such an encryption scheme in order to get sparse correlations? I'm going to define the fake hash key for every function F to be an encryption of F, and then, given an input X, we compute H prime of X simply as the rate-1 ciphertext of F(X). So as you can see, the hash function H prime is simply homomorphic evaluation. Now notice that, by the correctness of decryption in trapdoor hash, the correlation function is simply this vector E, which is computed only from the small hash H(X). And if H(X) is sufficiently small, then we can say that E comes from an exponentially sparse set, and therefore the correlation between H prime and F is sparse. I didn't say what the real hash key is and why it is indistinguishable from the fake hash key, but notice that the fake hash keys are simply encryptions of the functions they correspond to. So the real hash key can be an encryption of any arbitrary fixed function, and we can claim indistinguishability based on the security of the encryption scheme.

Good. From the work of Döttling et al., we know that we can get such well-structured homomorphism for constant-degree functions from many standard assumptions, including DDH for instance. And, as I showed you, this gives us correlation intractability for all constant-degree functions, but unfortunately this is insufficient for NIZK. So now we ask whether we can get something stronger out of our idea of sparse correlations. Remember that we claimed that a random R is, with overwhelming probability, not in the image of a sparse correlation. But we also observe that, with overwhelming probability, R is actually far away from this image. This almost immediately leads us to a stronger notion of correlation intractability, which we call CI-Apx: correlation intractability for approximable relations. We say that a hash function H is correlation intractable for relations approximable by F if it is hard for the adversary not only to find a correlation, but even to find an X such that H(X) and F(X) are close in Hamming distance (sketched below). And we can similarly extend this notion to CI for a class of functions. Good.

So now we know that we can get CI for approximable constant-degree relations based on trapdoor hash from DDH, QR, LWE, or DCR. I'm not going to go through the details of this transformation, but you will have to believe me that, using such a notion of CI, we can get correlation intractability for all functions that we can approximate using constant-degree polynomials. Let me be a bit more formal and tell you what notion of approximability we're considering. We say that a function F is probabilistically computable by constant-degree polynomials if there exists a distribution over such polynomials such that, for every input X, the probability that F(X) and P(X) are far from each other in Hamming distance is negligible, where P is a polynomial sampled according to the distribution that probabilistically computes F. Good.
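Before moving on, here is a minimal sketch of the CI-Apx game described above; compared to the plain CI game, the win condition only asks that the hash lands within some Hamming-distance bound of f(x). The `hash_family`, `f`, `adversary`, and `closeness_bound` names are hypothetical stand-ins.

```python
def hamming_distance(u: bytes, v: bytes) -> int:
    """Number of differing bits between two equal-length byte strings."""
    return sum(bin(a ^ b).count("1") for a, b in zip(u, v))

def ci_apx_game(hash_family, f, adversary, closeness_bound: int,
                security_param: int = 128) -> bool:
    """CI for approximable relations: the adversary wins even if Hash(k, x) is merely
    *close* to f(x) in Hamming distance, not exactly equal to it."""
    k = hash_family.keygen(security_param)
    x = adversary(k)
    return hamming_distance(hash_family.evaluate(k, x), f(x)) <= closeness_bound
```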
So now all that is left is to show you how, under LPN, we get a sigma protocol for an NP-complete language such that the bad-challenge function can be probabilistically computed using constant-degree polynomials. But first I'm going to recall prior work, where all they had to do was show that there exists a sigma protocol with an efficiently computable bad challenge. They start with the classic sigma protocol for Hamiltonicity, which in particular uses some commitment scheme, and they use the observation that if this commitment scheme is efficiently extractable, meaning that we can extract the values underlying a commitment just by seeing the commitment and possibly using some trapdoor, then the bad-challenge function is efficiently computable. More specifically, the bad-challenge function, given A, simply consists of first extracting the values underlying the commitment A and then applying some polynomial-time computation, which we denote here by V. So clearly, if both the extraction algorithm and V are efficiently computable, then so is the bad-challenge function.

For us, we need something stronger: we need to show a sigma protocol whose bad challenge is probabilistically computable by constant-degree polynomials. So we have the following observations. First, we observe that there exists an LPN-based commitment scheme where the extraction algorithm can be probabilistically computed by linear functions, which are of course a special case of constant-degree functions. The problem is that the polynomial-time verification V is of high degree, at least in the protocol for Hamiltonicity, and it is not clear how to probabilistically compute it using constant-degree polynomials. But then we remember that, using the Cook-Levin transform, we can transform any polynomial-time verification into a 3CNF formula; this is because 3CNF satisfiability is complete for such polynomial-time verification. And why is this useful? It is useful because 3CNF formulas actually have probabilistic representations as constant-degree polynomials (a small sketch of the clause representation appears at the end). So, using Cook-Levin, we can adjust the Hamiltonicity protocol a bit into a protocol where the bad-challenge function consists of first extracting the values underlying the commitment A and then evaluating a 3CNF formula, rather than an arbitrary polynomial-time computation. And now, since we know how to probabilistically compute both the formula and the extraction using constant-degree polynomials, we can probabilistically compute the bad-challenge function using constant-degree polynomials. This completes our result.

So let's conclude. We introduce a new notion of CI for approximable relations, and we show that we can get NIZK from standard assumptions through this notion; in particular, we get the first NIZK under DDH and LPN. There are many natural questions we can ask. First of all, whether we can use this new notion to get further applications, perhaps in complexity or hardness results. We can also ask questions related to the assumptions themselves: it would be interesting to see whether we can minimize the set of standard assumptions, so maybe we can get NIZK only from DDH or only from LPN. It would also be interesting to see whether we can extend our result to get statistical zero knowledge or statistical soundness, neither of which we get, as opposed to prior work. That's it. Thanks for listening.
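To illustrate the Cook-Levin step above, here is a minimal sketch of the exact degree-3 GF(2) representation of a single 3CNF clause, which is the starting point for the constant-degree representation of the whole formula discussed in the talk; the clause encoding (index triples plus negation flags) is a hypothetical format, not the paper's.

```python
def clause_deg3_gf2(x1: int, x2: int, x3: int, neg=(False, False, False)) -> int:
    """A 3CNF clause (l1 OR l2 OR l3) as an exact degree-3 polynomial over GF(2):
    the clause is false only when all three literals are false, so
    C = 1 + (1 + l1)(1 + l2)(1 + l3)  (mod 2)."""
    l1, l2, l3 = (x ^ 1 if n else x for x, n in zip((x1, x2, x3), neg))
    return 1 ^ ((1 ^ l1) & (1 ^ l2) & (1 ^ l3))

def clause_values(assignment, clauses):
    """Per-clause values of a 3CNF formula.  Each coordinate is constant-degree in the
    assignment bits; if those bits are recovered by an approximately-linear (LPN-style)
    extraction, each coordinate stays constant-degree, and extraction errors corrupt
    only a few coordinates, so the output is Hamming-close to the true vector."""
    return [clause_deg3_gf2(assignment[i], assignment[j], assignment[k], neg)
            for (i, j, k, neg) in clauses]
```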