Okay, we're on the final talk of the session, and it's going to be "SNARGs for P from sub-exponential DDH and QR", and James is going to give the talk.

All right, thanks for the introduction. This is joint work with Ruta, Dakshita, and Akshay. I wanted to start by quickly quoting the headline result of this paper. What we show is that for any language that can be decided in time T, we give a non-interactive proof system that is succinct: the proof size and the verifier time are both n times T^{o(1)}. Importantly, this means it's going to be much more efficient than just trying to solve the original problem directly. And we have security against any adversarial prover that runs in time polynomial in T, where this security is based on the sub-exponential hardness of both Decisional Diffie-Hellman and Quadratic Residuosity, with a security parameter that's T^{o(1)}.

For this talk, I'm actually going to present a slightly weaker intermediate result, where instead of relying only on a time bound T for the computation, we're also going to rely on a space bound S, and we allow both our proof size and our verifier time to grow linearly with the space bound. This is the main technical contribution of the paper, and there are known techniques from a couple of prior papers that allow us to remove this dependence on the space and so get back to the main result. But this intermediate result is what I want to present here, because it's nicer to give it in a single package.

So, putting this in context of some prior works on SNARGs.
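For reference, the headline parameters just described can be written in symbols; this is only a paraphrase of the spoken statement, not a precise theorem statement from the paper:

```latex
% For any language L decidable in deterministic time T = T(n):
%   - proof size and verifier time are succinct,
%   - soundness holds against poly(T)-time provers,
%   - assuming sub-exponential DDH and QR at security parameter T^{o(1)}.
|\pi|,\; t_{\mathsf{Verify}} \;=\; n \cdot T^{o(1)},
\qquad
\text{soundness against provers running in time } \mathrm{poly}(T).
```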
There's been a long line of work starting in the 90s trying to construct SNARKs, either in the random oracle model or from various strong cryptographic primitives such as obfuscation, optimal fully homomorphic encryption, or other such primitives. Only more recently have there been works that started looking at building SNARGs from falsifiable assumptions. The first of these was a paper of Kalai, Paneth, and Yang, which was able to build from a falsifiable assumption, although it was actually a new falsifiable assumption, not one that had been studied in any prior works. Following up on this, there's a paper of Jawale et al., which shows that from sub-exponential LWE, something that's been a little better studied, we can actually get SNARGs for deterministic computation, provided that we're willing to limit ourselves to deterministic computations that can be computed with a bounded-depth circuit. The next two relevant works that I wanted to look at are a pair of works by Choudhuri, Jain, and Jin that, instead of considering deterministic computations and building SNARGs for those, considered languages that can be understood as batch NP statements, and show that either under sub-exponential DDH and QR, or under polynomial learning with errors, we can get SNARGs for these sorts of languages. The latter paper, along with a separate work of Kalai, Vaikuntanathan, and Zhang, shows that under polynomial LWE we can take these SNARGs for batch NP and turn them into a SNARG for deterministic computation. What we do in this work is essentially the same sort of thing for the first Choudhuri, Jain, and Jin paper: we show that we still get these SNARGs for deterministic computations, but based on sub-exponential DDH and QR, as opposed to basing them on polynomial learning with errors. So, to give a brief overview
of the techniques that go into this paper: this is going to happen in three steps. The first step is that we take a notion that was previously defined, called Fiat-Shamir compatibility, and we show how to make this notion work for argument systems instead of just for proof systems. We then introduce a new interactive argument that works for any language decidable in time T and space S. And finally, we show that this new interactive argument satisfies the definition that we gave for Fiat-Shamir compatibility for arguments, and that's enough to tell us that we can compress it into a non-interactive argument, which gives us our final result.

Okay, so starting with the first point: an overview of Fiat-Shamir compatibility as it was originally defined for proofs. We'll say that a proof is Fiat-Shamir compatible if it is round-by-round sound, meaning essentially that at every round there is only a small number of possible bad random challenges the verifier can give, and as long as the verifier never makes any of these bad challenges, no prover can succeed in fooling the verifier. On top of this, we're also going to require that this set of bad challenges can be efficiently enumerated. What we try to do now is ask: can we take this notion and generalize it to also apply to arguments? In particular, the notion of round-by-round soundness is very specifically defined for proofs, and it's not immediately clear how to define it for arguments, although there have been previous works that have been successful in starting from an interactive argument and then compressing it, in ways similar to what we can do with Fiat-Shamir compatible proofs. Our approach for making this work for arguments is to consider arguments that have multiple modes, and what we intuitively want to say is that no matter what strategy the prover uses, there
should be some mode where the proof is actually Fiat-Shamir compatible under the original definition. This idea of using an argument with multiple modes, and hoping that one of the modes is correct, has been used in a variety of previous works, both in the current constructions of SNARGs and in other constructions. More formally, our setup algorithm, in addition to whatever other inputs it would normally take, is also going to take a mode index i. Based on the mode index, it's going to construct the CRS in some different way, and it's also going to give us some additional auxiliary information aux that other parts of our definition will use. Now, the interesting part is that we're going to introduce this predicate phi, which intuitively captures: did we actually choose the right mode for this particular prover strategy? So phi is going to look at whatever instance the prover is attempting to prove, at the first prover message alpha_1, and at this additional auxiliary information aux, and if the predicate is satisfied, then what we want to be able to say is that our argument system is round-by-round sound with efficiently enumerable bad challenges. But we have to be a little more careful than this; in particular, based on how we defined it right now,
there's no guarantee that the predicate is ever satisfied, and so somehow we need to capture the fact that the predicate should be satisfied a good amount of the time, because otherwise it won't be particularly useful that it gives us round-by-round soundness. The way we capture this is in what we call a non-trivial predicate. For this, we have a security game: we start by randomly sampling some mode index i, we run setup with respect to that mode, and then we give just the CRS, but not the auxiliary information, to our adversary, who is then tasked with outputting an instance x and a first prover message alpha_1. We'll say that our predicate is non-trivial if, for any efficient adversary A that outputs something not in the language with noticeable probability, we have that, conditioned on the instance not being in the language, the first message that A outputs satisfies the predicate with non-negligible probability. This gives us some sense in which the predicate should sometimes be satisfied, and so we'll say that an argument system is Fiat-Shamir compatible if it satisfies the definition on the previous slide with respect to a non-trivial predicate.

Okay, so now that we have some notion of what a Fiat-Shamir compatible argument is, let's look at the structure of the argument that we create. Actually, that's not my next slide; my next slide says why we actually want to have this definition the way we did.
And so essentially, any argument system that satisfies our definition of Fiat-Shamir compatibility can be compressed into a non-interactive argument system, in particular by using correlation-intractable hash functions. What we do is have the prover generate all the verifier's random challenges themselves, but require them to do so by putting the transcript into this correlation-intractable hash, where what we want to say is that it should be hard for the prover to come up with a bad challenge. That's in particular why we want the bad challenges to be efficiently enumerable: that's what allows us to use these correlation-intractable hashes. For our particular work, we use a correlation-intractable hash from a paper of Jain and Jin, which says that under DDH we have correlation-intractable hash functions where "efficiently enumerable" means enumerable by low-depth threshold circuits. So as long as we can show that our eventual argument system can have its bad challenges enumerated by such circuits, that is sufficient for us to use this correlation-intractability methodology.

With this in mind, our sketch of the proof of security goes as follows. Suppose that there were some adversary able to break the soundness of our non-interactive argument system, the one using these correlation-intractable hash functions. The first thing we notice is that because this adversary breaks soundness with noticeable probability, it has to output an instance not in the language with noticeable probability. And so, by the non-triviality constraint, we know that whatever this adversary outputs must satisfy our predicate phi with non-negligible probability. Now, if phi is
satisfied, we have that our argument system has the standard notion of Fiat-Shamir compatibility, and so we're able to essentially repeat the argument from Jawale et al. to say that any adversary able to break soundness can break the correlation intractability of the underlying hash function.

Okay, so now we're going to look at the structure of our particular interactive argument. The key idea behind this construction is recursive proof building: suppose we have some argument system for smaller computations; let's use it multiple times in order to build up an argument system for larger computations. Very similar ideas have shown up in a prior work of Reingold, Rothblum, and Rothblum, although their particular proof system is somewhat different from ours, because they had a different goal in mind. So let's imagine we have our prover and our verifier, and the prover has some time-T computation that they want to prove to the verifier was done correctly. What the prover is going to do is split up the time-T computation into k smaller blocks, each of size T/k, where k is some parameter that we're going to end up setting carefully. Then the prover is going to take snapshots of the computation at each of the boundaries between the T/k blocks: s_0 is whatever state the computation starts in, s_1 is the state after T/k steps, and so forth. The prover now sends all the intermediate snapshots s_1 through s_{k-1} to the verifier, and this defines k computations, each of size T/k, and the prover just needs to show the verifier that all of those were done correctly. The way the prover does this is by engaging in k parallel invocations of a protocol that works for any time-T/k computation. Now, we have to be a little careful here to make sure that the size of these proofs doesn't blow up when we
recurse. In particular, when the prover sends the first message of these k parallel invocations, they're actually going to send them under the hood of a compressing commitment, and for security purposes it turns out that this compressing commitment will want to be somewhere extractable. Similarly, we don't really want the verifier to send over k separate random strings, because again that would likely cause the size of the proof to blow up, so we just have the verifier sample a single random string and use it as the random challenge for all k of these parallel protocols. So we have these protocols happening in parallel, going for however many rounds are needed, and at the end the verifier just wants to check that all k of these argument systems would have succeeded, that is, that the T/k verifier would have accepted all k of these sub-computations. Unfortunately, the verifier can't check this directly, because all the transcripts are under a commitment. So the verifier is going to engage with the prover in another argument, where the prover attempts to show that whatever they committed to in the first phase actually does correspond to k accepting transcripts. This can be understood as a batch NP language, and this is where we use previously known arguments for batch NP.

All right. We now want to show that this argument system is actually Fiat-Shamir compatible under our expanded definition, and the first step towards doing that is defining the predicate that tells us when it is Fiat-Shamir compatible under the original definition. Intuitively, what phi is going to do is capture whether our somewhere-extractable commitments are extractable at a position where the prover sent invalid snapshots. So we want to check that whatever proof we can extract
actually corresponds to an invalid T/k computation. We have to be a little more careful than this, because we actually need extraction to be working at every level of our recursive protocol, not just at the top. So we're going to define our predicate recursively: we start by defining a predicate at the top level, phi_T, which checks whether these T/k snapshots correspond to an invalid computation, and if they do, it extracts the first message of the corresponding proof from the first message of the time-T proof and then checks whether phi_{T/k} is satisfied given that first message. Given this predicate definition, we have to argue somehow that it's non-trivial, in order to conclude that we actually have Fiat-Shamir compatibility. The key observation that gives us non-triviality is that, as long as the overall time-T computation is not actually correct, there must be one pair of snapshots that is invalid: if the computation does not go from s_0 to s_k in T steps, there must be some i such that the computation doesn't go from s_i to s_{i+1} in T/k steps. So what does this tell us if the prover gets no information about the extraction index?
Since we're choosing the extraction index randomly, we have at least a 1/k chance of just randomly selecting the correct index, the one that allows us to extract an invalid pair of snapshots. And the nice thing we use here is that our somewhere-extractable commitments actually hide their extraction index from any efficient adversary, so in particular no efficient adversary can take this probability and make it smaller than 1/k minus some negligible amount. If we put this all together and do a little arithmetic, we see that the probability that all of the extraction indices are good is at least 1/T minus some negligible amount: the recursion has roughly log_k T levels, each level's extraction index is good with probability at least 1/k, and multiplying these out gives roughly (1/k)^{log_k T} = 1/T. Because T is, in this case, going to be the number of possible modes of our argument system, this satisfies our required definition of non-triviality.

The final thing we have to do is show that, as long as our predicate holds, the argument system actually gives us round-by-round soundness with efficiently enumerable bad challenges. We're going to split this into two phases. The first phase is what I'll call the emulation phase, which is where the prover and the verifier are engaging in these k parallel invocations of the smaller protocol. In this case, we say: well, we know that the predicate holds, so whatever proof we're able to extract from this phase must actually be for a false statement. And so, by induction, we can say that the proof we're extracting itself has round-by-round soundness with efficiently enumerable bad challenges. So we just say a challenge is bad for the overall time-T proof if it would be bad for this particular extracted T/k protocol. Based on the fact that the number of bad challenges is small and they can all be efficiently enumerated, we can say that in the emulation phase
we do actually have the necessary requirements on our bad challenges. The second phase to consider is what I'll call the batch NP phase. This is where the prover and the verifier engage in the batch NP argument, to show that all of the emulated arguments would have actually accepted. In this case we do a very similar thing: we say that as long as there were no bad challenges previously, whatever transcript we're able to extract from the emulation phase would be a rejecting transcript. And so we use a fact about the particular batch NP argument we use, which is that as long as the extracted witness is false, or the extracted transcript is rejecting, that batch NP argument will have round-by-round soundness with these efficiently enumerable bad challenges. So again, we just say that in the batch NP phase, a challenge is bad if it would be bad for the argument that we're using to prove that the emulation phase was done correctly.

Okay, so that concludes the technical material. I have no clue how I am on time, but thank you, and I'll take any questions.

You're fine for time. Okay, which means you've got time for questions. You don't need to answer any questions? Brilliant, okay. Well, let's thank this speaker, and all the speakers again from the entire session, and now let's get coffee or something.