Hey everyone, I'm excited to talk to you today about our work on non-interactive batch arguments for NP from standard assumptions. This is joint work with my wonderful collaborators Abhishek and Zhengzhong. So before we get into the question of assumptions, what is a non-interactive batch argument for NP? Consider the scenario here between Alice and Bob, where both of them have access to a common reference string, or CRS. Further, Alice has k NP instances and wants to convince Bob that all of them are true, specifically that there is a witness for each of them such that the circuit C, on input the instance x and the corresponding witness w, outputs one. So the batching is essentially over multiple NP instances. The non-interactive nature indicates that Alice just sends over a single message, referred to as a proof, to Bob, and then Bob can verify this proof. We're going to require something stronger: we're going to say that the proof must be publicly verifiable, meaning not just Bob but anyone with access to the CRS should be able to verify the proof. And the word "argument" indicates that any computationally bounded cheating Alice cannot produce an accepting proof if even one of the statements is false. So without further constraints, this is actually fairly easy to achieve: Alice just sends over all the witnesses w_1 to w_k to Bob, who can then take the witnesses and verify each of them one by one. So for any non-trivial protocol, we actually require that the length of the proof that Alice sends over has to be significantly smaller than the length of the combined witnesses of the k statements. And what about Bob's verification time? Bob needs at the very least to read all the k statements, and we say that the overhead after reading the k statements should be only some polynomial in the length of the proof. Okay. So what do we actually know about this problem from prior works?
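To make the trivial solution concrete, here is a toy Python sketch of the "send all the witnesses" protocol. Everything here is purely illustrative and not from the paper: the relation C (checking a non-trivial factorization) and all the values are hypothetical stand-ins for an arbitrary NP relation.

```python
# Toy illustration of the trivial batch "argument": Alice sends all k
# witnesses, and Bob checks C(x_i, w_i) = 1 for each i. The proof length
# grows linearly in k, which is exactly what a non-trivial batch
# argument must beat.

def C(x, w):
    # hypothetical NP relation: w = (a, b) is a non-trivial factorization of x
    a, b = w
    return a > 1 and b > 1 and a * b == x

def trivial_prove(instances, witnesses):
    # the "proof" is just the concatenation of all k witnesses
    return list(witnesses)

def trivial_verify(instances, proof):
    # Bob verifies each statement one by one
    return len(proof) == len(instances) and all(
        C(x, w) for x, w in zip(instances, proof))

xs = [15, 21, 35]
ws = [(3, 5), (3, 7), (5, 7)]
assert trivial_verify(xs, trivial_prove(xs, ws))          # all true: accept
assert not trivial_verify(xs, [(3, 5), (3, 7), (5, 8)])   # one false: reject
```

The point of the succinctness requirement is precisely that the real protocol's proof must be much shorter than the `list(witnesses)` sent here.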
So there's this wonderful line of work, started by Reingold, Rothblum, and Rothblum, which considers interactive batch proofs. It's interactive in the sense that Alice and Bob are now able to communicate over multiple rounds. The security property is actually something stronger: security is required to hold even against a computationally unbounded cheating Alice. This line of work essentially constructs interactive batch proofs for the class UP, which is the subclass of NP where each statement has a unique witness. And then we have these non-interactive arguments known as SNARGs for NP. These are non-interactive arguments where the length of the proof is significantly smaller than the size of the NP witness. So if we were to take our batch of statements and write it as a single language — and there's a natural way of writing this language out: we just concatenate all k instances, and the corresponding witness is the concatenation of the k witnesses — then the SNARG property essentially gives us a non-interactive batch argument, because the length of the proof is going to be significantly smaller than the length of the combined witnesses. Unfortunately, we only know of SNARGs for NP based on strong non-falsifiable assumptions, or in the random oracle model. If we relax the requirements on the verifier to the designated-verifier setting, meaning that only a designated Bob can verify the proof, then we actually do have non-interactive batch arguments for NP based on fairly standard assumptions, starting with the work of Brakerski, Holmgren, and Kalai. And if we want the publicly verifiable setting, we have this wonderful recent work of Kalai, Paneth, and Yang, who construct non-interactive batch arguments for NP based on new non-standard assumptions. I should note that these are falsifiable assumptions on groups with bilinear maps.
So given this state of affairs, it's natural to ask whether we can construct non-interactive batch arguments for NP based solely on standard assumptions. And in our work we show that, assuming the quadratic residuosity assumption in addition to either the learning with errors assumption or the sub-exponential hardness of the decisional Diffie-Hellman assumption, there exists a non-interactive batch argument for NP where the size of the proof grows roughly as the square root of the number of instances times the circuit size, where the circuit is the one in the corresponding definition of the batch instance. You can see that for large values of k, this is significantly better than the trivial solution. Now, I won't have time to go over all the details in our paper, so let me start off with some key insights. What we actually want to do is leverage all the exciting recent work surrounding the Fiat-Shamir transformation, and specifically the security of the Fiat-Shamir transformation. For those unaware, the Fiat-Shamir transform lets you start with an interactive protocol between a prover and a verifier where the verifier's messages are just random strings. It then allows you to make the protocol non-interactive in the CRS model, where the CRS simply contains the description of a hash function: the prover can generate the verifier's messages non-interactively by applying the hash function present in the CRS to the transcript so far. And this is publicly verifiable, since anyone with access to the CRS can verify the proof. The security of this transformation has seen some really exciting recent progress, with connections to correlation-intractable hash functions, and there's some really cool work — as you can see, this is a long line of work. I will touch upon this briefly later in the talk. But for now, let's focus on the communication in both protocols.
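As a concrete toy sketch of the Fiat-Shamir idea just described: the prover derives the verifier's "random" message by hashing the transcript so far with a hash key taken from the CRS. SHA-256 here is only a stand-in for the correlation-intractable hash functions the actual constructions use; the key and field size are hypothetical.

```python
import hashlib

def fs_challenge(crs_hash_key: bytes, transcript: bytes, field_size: int) -> int:
    # Derive the verifier's challenge non-interactively by hashing the
    # transcript so far with the hash key contained in the CRS.
    digest = hashlib.sha256(crs_hash_key + transcript).digest()
    return int.from_bytes(digest, "big") % field_size

# Anyone holding the CRS recomputes the same challenge, which is what
# makes the transformed protocol publicly verifiable.
crs_key = b"hash-key-from-crs"        # hypothetical CRS contents
alpha = b"prover-first-message"
beta = fs_challenge(crs_key, alpha, 2**61 - 1)
assert beta == fs_challenge(crs_key, alpha, 2**61 - 1)   # deterministic
```

Note that the prover-to-verifier messages (alpha, and later the prover's responses) are unchanged; only the verifier's random messages are replaced by hash outputs.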
As you can see, the total communication from the prover to the verifier remains unchanged under the transformation. So that gives us the following idea: let's start with an interactive protocol for batch NP, then hopefully apply the Fiat-Shamir transformation and use one of these cool, exciting recent works to prove security of the transformed protocol. Unfortunately, for all of these works, security is only proven if one starts with a protocol that is statistically sound — these are also called interactive proofs. As we've seen, this means that even an unbounded cheating prover should not be able to convince the verifier of false statements. I should note that there's no inherent reason for this requirement; this is just what we know from the state of the art. So you might say, okay, this is not so bad — but I already indicated, based on the prior works, that we don't actually know of interactive proofs for batch NP. The best that we have is for UP. So instead, we choose a different starting point for applying the Fiat-Shamir transformation, which we call dual-mode batch arguments. Since the resulting protocol is in any case going to be in the CRS model, let's start with an interactive protocol in the CRS model that achieves computational security, meaning that any computationally bounded cheating prover cannot convince the verifier of false statements. So where does the dual mode come in? The dual mode refers to how the CRS is generated. What you have on the left is the normal mode of CRS generation, and this is the mode used in the actual protocol execution. What you see on the right is the trapdoor mode, specified by some index i, and this is going to be used solely in the security proof. For starters, you shouldn't be able to tell which mode the CRS was generated in, at least if you're computationally bounded. So what's so special about the trapdoor mode?
So the trapdoor mode guarantees statistical security at index i, meaning that even a computationally unbounded cheating prover cannot make the verifier accept if the i-th statement is false. So for all intents and purposes, in the trapdoor mode, at least for index i, what we have on the right is an interactive proof. This then provides the following security intuition for applying the Fiat-Shamir transformation to dual-mode batch arguments when the prover is trying to cheat specifically on the i-th instance, meaning that the i-th instance is actually false. We make the computationally indistinguishable switch of the CRS to the trapdoor mode at index i, and then hopefully we can rely on the Fiat-Shamir transformation, because we've just discussed how the protocol with a trapdoor-mode CRS is statistically sound. And then we have a non-interactive protocol. So what does this dual-mode batch argument look like? Recall that the prover is trying to batch-prove k statements, and we have k witnesses w_1 to w_k, one for each of them, and we're going to write them out in the rows that you see here. The CRS in our dual-mode batch argument is just going to be a commitment key K, and we'll see shortly what this commitment scheme is. The next thing we're going to do is have the prover commit to the witnesses in a column-wise fashion, meaning that each c_j is a commitment to a k-length vector. There are m such c_j's, and the prover sends c_1 to c_m across to the verifier. We require that the length of each commitment depends only polylogarithmically on k. Next, the prover and the verifier interact via an information-theoretic component, meaning that it doesn't require any cryptographic assumptions; we leave this as a black box for now, and I'll come back to it later. Then some function F, which is determined by this information-theoretic component, is specified, and the prover is required to open the function F applied to each of the witnesses.
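The column-wise arrangement of the witnesses can be sketched in a few lines of Python. This is only an illustration of the indexing (rows are witnesses, columns are committed): the `commit` function here is a hash stand-in, not the somewhere statistically binding scheme from the paper, and all parameters are toy values.

```python
import hashlib

def commit(key: bytes, column: tuple) -> bytes:
    # stand-in commitment; the real scheme is somewhere statistically binding
    return hashlib.sha256(key + bytes(column)).digest()

def commit_witnesses_columnwise(key: bytes, witnesses):
    # witnesses: k rows of m bits each; c_j commits to the j-th column,
    # which is a k-length vector holding bit j of every witness
    k, m = len(witnesses), len(witnesses[0])
    columns = [tuple(witnesses[i][j] for i in range(k)) for j in range(m)]
    return [commit(key, col) for col in columns]

W = [[0, 1, 1],        # w_1 (k = 2 witnesses, m = 3 bits each)
     [1, 0, 1]]        # w_2
cs = commit_witnesses_columnwise(b"crs-commitment-key", W)
assert len(cs) == 3    # m commitments, one per column, each of fixed size
```

The key point is that the prover sends m commitments whose individual sizes are independent of k up to polylogarithmic factors, rather than the k·m witness bits themselves.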
And then there are some final checks performed by the verifier. So what about this commitment scheme? The commitment scheme that we're going to use is a somewhere statistically binding (SSB) commitment scheme. These commitment schemes already have an in-built trapdoor mode: if you specify an index i, you get a commitment key K_i*, and if you use K_i* to compute the commitments, then they are statistically binding at index i, meaning that with high probability there is a unique opening at the i-th position of the committed vector. So given how the commitments have been structured, what we essentially have is that if you use K_i* for each of these columns, there's a single row that's statistically binding — it's going to be the i-th row. So the commitments are statistically binding at w_i. And now, if you look at the rest of the protocol in the trapdoor mode, everything else is information-theoretic, so for the i-th instance we have an interactive proof, as desired. To actually be able to compress the protocol, we want this information-theoretic component to be Fiat-Shamir friendly, so that we can essentially use prior works and compress it using a hash function based on LWE or sub-exponential DDH; I'm going to talk about this next. But I just want to say that we are also going to construct these somewhere statistically binding commitments, with an appropriate opening for F, based on the quadratic residuosity assumption. I'll come back to what F is, and we'll touch upon this later. Okay. So in whatever remaining time I have, let me give you some technical details regarding the information-theoretic component that we use in our protocol. Recall, this is the template that I just showed you, and we have this information-theoretic component that I said I was going to talk about. So the first thing to note is that we're not going to have just one information-theoretic component.
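Since F will turn out to be a linear function of the witness bits, it may help to see why a linearly homomorphic commitment supports "opening F of the committed values". Here is a minimal toy sketch using plain modular exponentiation (g^v mod p, with no hiding and toy parameters); the paper's actual scheme is an SSB commitment from quadratic residuosity, which this does not attempt to model.

```python
# Toy linearly homomorphic commitment: commit_bit(b) = g^b mod p.
# Multiplying commitments raised to coefficients sigma_j yields a
# commitment to the linear combination F(w) = sum_j sigma_j * w_j,
# so the prover can open F(w) without opening each bit individually.
p = 2**61 - 1          # toy prime modulus, purely illustrative
g = 3

def commit_bit(b: int) -> int:
    return pow(g, b, p)

bits  = [1, 0, 1, 1]               # witness bits w_j
sigma = [2, 5, 7, 1]               # hypothetical coefficients of F
cs = [commit_bit(b) for b in bits]

c_F = 1
for c, s in zip(cs, sigma):
    c_F = (c_F * pow(c, s, p)) % p  # homomorphic evaluation of F

# The homomorphically combined commitment equals a direct commitment to F(w).
assert c_F == pow(g, sum(s * b for s, b in zip(sigma, bits)), p)
```

The design point is that the verifier can compute c_F itself from the c_j's and the public coefficients, so only the single opening of F(w) needs to be sent.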
Instead, we're going to have k copies of the same information-theoretic component, and correspondingly k openings. For the purposes of this talk, it suffices to look at a single copy of this information-theoretic component, specifically the i-th copy, because that's the one I mainly care about when proving security. But before I can go into any further detail, I need to take a small detour and revisit something I mentioned earlier in the talk regarding the Fiat-Shamir transformation. Remember, this is the slide I showed you for the Fiat-Shamir transform. How does one really go about proving the security of this transformation? Let's look at the interactive protocol on the left, and let's define the notion of a set of bad betas. What does that mean? Once alpha is fixed, for statements that are not in the language, we say a beta is bad if there exists a gamma that leads the verifier to accept — that is, the verifier accepts on false statements. In the interactive setting, the prover has no control over the betas: it sends over an alpha, and the verifier sends over a random beta. But that's no longer quite true in the non-interactive setting, because what the prover can do is try different values of alpha until it gets a beta that it likes. So essentially, for security, we want that any computationally bounded cheating prover cannot find an alpha that results in a bad beta. If the hash function satisfies this condition, we say that the hash function is correlation intractable for bad betas. This then gives the following template for proving security via the Fiat-Shamir transformation: we start with a protocol that has a corresponding set of bad betas, we take a hash function that's correlation intractable for this set of bad betas, and that gives us a transformed protocol that's secure. So we actually need to construct these hash functions H — and what do we know about that?
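The grinding attack that correlation intractability rules out can be sketched directly. In this toy, the hash is SHA-256 (which is not correlation intractable in any proven sense) and the bad set is made artificially dense so the search visibly succeeds; with a genuinely correlation-intractable hash and a sparse bad set, this loop should be infeasible for any bounded prover.

```python
import hashlib

def toy_hash(hash_key: bytes, alpha: bytes, field_size: int) -> int:
    # stand-in for the CRS hash mapping a first message alpha to a beta
    d = hashlib.sha256(hash_key + alpha).digest()
    return int.from_bytes(d, "big") % field_size

def grind(hash_key: bytes, bad_set: set, field_size: int, attempts: int):
    # A cheating non-interactive prover retries alphas until the derived
    # beta lands in the bad set (i.e., some gamma would make the verifier
    # accept a false statement). Correlation intractability says exactly
    # that this search must not succeed efficiently.
    for a in range(attempts):
        alpha = str(a).encode()
        beta = toy_hash(hash_key, alpha, field_size)
        if beta in bad_set:
            return alpha, beta
    return None

# Artificially dense bad set (10% of the field), so grinding succeeds fast.
hit = grind(b"crs-hash-key", set(range(10)), 100, 1000)
assert hit is not None
```

In the real constructions the bad set is a vanishing fraction of an exponentially large field, which is why a correlation-intractable hash suffices.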
So it turns out that if this set of bad betas can be computed in polynomial time, then this wonderful work of Peikert and Shiehian shows us that you can construct these hash functions based simply on LWE. And if, further, these bad betas can be computed in low depth, specifically in TC0 — these are circuits that are polynomial-sized but have constant depth, with threshold gates in them — then you can actually construct such hash functions based simply on sub-exponential DDH, and this is wonderful, very recent work by Jain and Jin. So given this understanding of which kinds of bad betas pair with which assumptions: if we want our information-theoretic component to be Fiat-Shamir friendly, all we need to show is essentially that the corresponding bad betas can be computed in TC0. The building block of this information-theoretic component is going to be a wonderful protocol introduced by Setty, called Spartan, and specifically we're going to focus on the information-theoretic component of the Spartan protocol, called the Spartan core. Just for context, Spartan is an interactive protocol used to prove NP statements. In the Spartan protocol, note that the function F is simply a linear combination of the bits of the witness, and the coefficients of the linear combination, which are the sigma_j's, are specified by the Spartan core. For those familiar, the Spartan core primarily consists of the sum-check protocol, but for this talk I'm going to keep it at a very high level and say that what the prover sends across in most messages is some polynomial g(x), and what the verifier sends across is a random beta that it samples from some specified field. So we talked about bad betas — how do we define the set of bad betas here? The set of bad betas is defined to be all the roots of the polynomial g(x) - g*(x). Here g(x), as you can see, is clearly the polynomial that the prover sends.
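To make the "bad betas are roots" definition concrete, here is a toy computation over a tiny prime field where the roots of g(x) - g*(x) can simply be enumerated. The specific polynomials and field are hypothetical; the paper's contribution is showing this set is computable in TC0 for an appropriate field, which this brute-force sketch does not capture.

```python
# Toy: the bad betas for one sum-check-style round are the roots of
# g(x) - g*(x), where g is the prover's claimed polynomial and g* the
# true one. In a tiny field we can just enumerate the roots.
p = 97  # toy prime field size

def poly_eval(coeffs, x, p):
    # Horner evaluation; coeffs are low-to-high degree
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

g      = [5, 2, 1]   # prover's claim:    g(x)  = x^2 + 2x + 5
g_star = [5, 3, 1]   # true polynomial:   g*(x) = x^2 + 3x + 5
diff = [(a - b) % p for a, b in zip(g, g_star)]   # g - g* = -x (mod p)

bad_betas = [b for b in range(p) if poly_eval(diff, b, p) == 0]
assert bad_betas == [0]   # -x has the single root x = 0 in F_97
```

Since a nonzero degree-d polynomial has at most d roots, the bad set is a tiny fraction of a large field, which is exactly why a random (or correlation-intractable) beta almost never lands in it.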
And g*(x) is the true polynomial, not what the prover has sent. I won't go into the details, but this is essentially how g* is defined. In our work, we show that if you start with an appropriate field F, the set of bad betas can actually be computed in TC0 for the Spartan core protocol. I want to re-emphasize that this is a very high-level overview of both Spartan and the ideas we use in our work, and I would refer you to the paper for details. Okay, so putting everything together: we saw that the function F for our protocol is simply a linear combination of witness bits. So in fact, what we require is an SSB commitment with linearly homomorphic opening, which we construct based on quadratic residuosity. It actually requires some additional properties that I'm not going to get into, because that would make things a little more complex; again, I ask you to look at the paper for further details. Then, for the information-theoretic component, we run k copies of the Spartan core and show that the set of bad betas can actually be computed in TC0. This then yields our main theorem, which I have restated here. And I just want to say that in follow-up work, we construct batch arguments for NP with improved parameters, and furthermore, we show that you can apply these batch arguments to construct delegation schemes for all polynomial-time computation. And that's all I have today. Feel free to send me, or any of my co-authors for that matter, an email if you have any questions, and one of us will also be available to answer questions during the live session. So thank you so much for listening.