This is a talk on a new simple technique to bootstrap various lattice-based zero-knowledge proofs into QROM-secure NIZKs. My name is Shuichi and I'm from AIST. So this is our result. We provide a simple semi-generic method to construct QROM-secure lattice-based zero-knowledge proofs. The new technical tool that we develop is an extractable linear homomorphic commitment, which I'll just call an extractable LinHC. By semi-generic I mean that it doesn't work for every Sigma protocol or public-coin interactive protocol, but only for a limited class that has a linear response. This class is very natural, though, and it covers many lattice-based public-coin interactive protocols. Starting from such a protocol, we can apply our extractable LinHC and obtain a QROM-secure NIZK, which is moreover an online-extractable proof of knowledge. Let me first provide the background and motivation. In this talk, we'll be considering non-interactive zero-knowledge proofs of knowledge, or NIZKs. In a NIZK, there's a statement x in a language L, and any prover that holds a valid witness w for the statement x should be able to construct a proof π. This proof π convinces the verifier that the prover knew the witness w. There are two security properties that we require. One is zero-knowledge, which states that for a cheating verifier, the proof π leaks no information about the witness w; the only thing the cheating verifier learns is whether the statement x was actually in the language. We also want a security notion against cheating provers, and in this work we'll be talking about proofs of knowledge, which is strictly stronger than soundness. This says that there is an extractor such that, for any cheating prover that outputs a valid proof π, it can extract a valid witness w satisfying the relation.
This implicitly implies soundness: if the proof π can be used to extract a witness w, then the statement x was actually in the language. In this work, we'll be considering NIZKs in the random oracle model (ROM). In the ROM, all the users in the system, the prover and the verifier, have access to a random oracle. NIZKs in the ROM are heuristic to some extent, because they can only be proven secure in the random oracle model, but this is typically fine in practice because it results in the most efficient and practical schemes. There are two types of ROM we can think of. A quantum adversary can evaluate real-world hash functions over qubits, and we want the QROM to model this capability as well. So while in the classical ROM the adversary only makes classical queries and receives classical outputs, in the quantum ROM we allow the adversary to query inputs in superposition and receive outputs in superposition. This creates a lot of difficulties in QROM proofs. There are proof techniques in the classical ROM that we take for granted which seem hard to import to the QROM setting. The first two are observing the adversary's input query and knowing what the corresponding output was. Why is this difficult? Since the queries are in superposition, measuring them collapses them to a single state, and if the input and output registers were entangled with the quantum state of the adversary, then the measurement might disturb the adversary's state. So the adversary might actually notice that it was being measured. This doesn't show up in the classical setting, where we just write proofs saying that when the adversary queries x to the random oracle, we sample a random y.
So classically we know these x and y values for certain, but in the quantum setting this is difficult. Another issue is adaptively programming the random oracle. A quantum adversary may query all inputs in superposition, at which point it seems as though the random oracle is already defined on all inputs, meaning there's no obvious way to adaptively program the random oracle as the game proceeds. These are three seemingly very difficult problems. Over the past two or three years there has been a lot of work on this, and we now know many ways to overcome them. However, they do not come for free as in the classical ROM; they require a lot of complications, and sometimes there are still really difficult obstacles to overcome. That was the issue of classical versus quantum ROM, but there's also difficulty just in handling quantum adversaries, regardless of whether the ROM is classical or quantum. One representative example is rewinding-type arguments. This is a type of proof where we fix the adversary's randomness R and run the adversary while the reduction simulates the random oracle. At some point we rewind the adversary, say, back to some query x′, reprogram the random oracle on x′, and from that point on run the adversary again on the same randomness R. So we rewind the adversary and rerun it. The issue in the quantum setting is that there's no notion of fixed randomness for a quantum state, so this general technique might not work. Now let me explain a bit more about lattice-based QROM NIZKs and how we construct them.
All the NIZKs we'll mention in this work start from a Sigma protocol or, more generally, a public-coin interactive proof (PCIP). For Sigma protocols, it's standard to define two notions: honest-verifier zero-knowledge and special soundness. Special soundness is the interesting one: it states that given two valid transcripts with the same first flow A, there's an efficient way to extract a witness w. Let me explain two famous ways to convert a Sigma protocol into a NIZK. The first is the Fiat-Shamir transform. In the classical setting this is excellent, because it works for any Sigma protocol and the proof overhead is minimal. One relatively small downside is that the proof of knowledge requires rewinding, so it incurs a reduction loss. There's also the Unruh transform, which was originally created for the quantum setting (I'll explain this later) but works in the classical setting as well. It again works for any Sigma protocol, but we have to restrict the challenge set to be small. Because of this we require many parallel repetitions, and we also have to include many extra terms to make the Unruh transform work, where |C| is the size of the challenge space. So there's a large blow-up here. The good thing is that the proof of knowledge is straight-line extractable, so it results in a tighter proof; it doesn't require rewinding. Still, the general rule of thumb in the classical setting is to use the Fiat-Shamir transform, because it's just really good, and the Unruh transform's overhead is quite big when the original Sigma protocol can have an exponentially large challenge set. However, if the protocol is using a small challenge set to begin with, the Unruh transform is sometimes competitive too. The situation in the quantum setting is a bit different.
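As a quick reminder before moving to the quantum setting, the Fiat-Shamir transform replaces the verifier's random challenge with a hash of the statement and the prover's first flow. A minimal sketch, where the function name and the byte encodings are illustrative rather than taken from any particular scheme:

```python
import hashlib

def fiat_shamir_challenge(statement: bytes, first_flow: bytes,
                          chal_space_size: int) -> int:
    """Derive the challenge c = H(x, a) locally instead of receiving it
    from the verifier; the resulting NIZK proof is just (a, z)."""
    digest = hashlib.sha256(statement + b"||" + first_flow).digest()
    return int.from_bytes(digest, "big") % chal_space_size

# The verifier recomputes the same challenge from (x, a) and then runs
# the Sigma protocol's final check on the transcript (a, c, z).
c = fiat_shamir_challenge(b"statement x", b"first flow a", 2**128)
```

Note the challenge space can be exponentially large here (2^128 above), which is exactly the regime where Fiat-Shamir shines over Unruh.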
In the quantum setting, the Fiat-Shamir transform no longer works for any Sigma protocol, only for a class called collapsing Sigma protocols. The proof of knowledge still works through rewinding, but there are a lot of small subtleties and technical issues to take care of, and it loses a bit more in the reduction compared to the classical setting. Compared to this, the Unruh transform is much better here, because it's basically the same argument that we would make in the classical setting; it's very similar and easy. The obvious downside now is that not all existing Sigma protocols are known to be collapsing, so for Fiat-Shamir we have to show collapsingness, and this is sometimes very tricky. The good thing about Unruh is that if we can prove security in the classical setting, then it also works fine in the quantum setting, so it's very easy to analyze. For general (2n+1)-round PCIPs, the situation for the Fiat-Shamir transform is the same, but the Unruh transform is a bit different: it doesn't work for any (2n+1)-round protocol, only for five-round protocols with a specific type of challenge set. This might be extendable to more rounds, but we don't know yet. Let me talk a bit more about this QROM securification of the Fiat-Shamir transform, which was shown by two groups in 2019. They start with a collapsing Sigma protocol and first argue, using a rewinding argument, that it can be transformed into a Sigma protocol with a quantum proof of knowledge, meaning a proof of knowledge that holds even when the adversary is quantum. In the second step, they use a new reprogramming technique to show that the Fiat-Shamir transform applies and yields a QROM-secure NIZK. These are the two steps they took.
The thing is, it's really not clear whether all the existing schemes are collapsing; that's now the difficult part. Also, the reprogramming step usually incurs a reduction loss, a bit more than in the classical setting. So let me review the recent lattice-based PCIPs in the classical ROM setting. I'll only talk about Fiat-Shamir-with-aborts-type Sigma protocols, so I won't cover Stern-type protocols in this slide. The most basic one is Lyubashevsky's original '09/'12 protocol, which provides a relaxed proof for the SIS and LWE relations. Interestingly, two papers showed that this is also a collapsing Sigma protocol, with a slight increase in the parameters. So for these very basic Sigma protocols, we know how to apply the Fiat-Shamir transform securely in the QROM setting. For the more recent ones, like openings of commitments, range proofs, and one-out-of-many proofs, we don't know whether they are collapsing Sigma protocols, so we don't know how to apply the Fiat-Shamir transform; but we do know how to apply the Unruh transform, since they are still Sigma protocols. The caveat is that these schemes can use an exponentially large challenge set, which we have to restrict to polynomial size or even smaller to make the Unruh transform work. So there's a large blow-up here compared to the classical setting. Finally, there are new five-round schemes, or schemes with even more rounds, where we don't know how to apply the Fiat-Shamir or Unruh transform at all. As I noted, the modified Unruh transform for five-round protocols might work, but there are a lot of details to check before we know for sure. So this is the current situation, and it brings us to the main question of this talk: can we get the best of the Fiat-Shamir and Unruh transforms, and even a bit more?
The Fiat-Shamir transform requires no overhead and works for exponentially large challenge sets. Unruh works for any Sigma protocol and has a tight reduction, being a straight-line extractable proof of knowledge. However, there are schemes covered by neither, so it would be interesting to see whether there are other transforms that cover protocols lying outside these two. This brings us to our result: a new transform that provides a partial answer to the previous question. It's a semi-generic approach that sits somewhere between Fiat-Shamir and Unruh, with the following properties. First, it works for many lattice-based PCIPs, or in general any PCIP with a linear response, a notion that will become clear in later slides. The overhead of our transform is larger than Fiat-Shamir's but much smaller than Unruh's when the challenge set is exponentially large, which is the case for many of these lattice-based PCIPs. The reduction loss is smaller than Fiat-Shamir's, since ours is a straight-line extractable proof like Unruh's. Our security proof is very simple and almost entirely classical, so it requires minimal knowledge of quantum computation. Finally, it works for PCIPs where Fiat-Shamir and Unruh are not yet known to work. The new technical tool we develop in this work is called an extractable linear homomorphic commitment. Using this extractable LinHC, we can start from any Sigma protocol with a linear response, which is a very natural property satisfied by many Sigma protocols. Starting from such a protocol, we can combine it with an extractable LinHC and bypass the rewinding argument to directly get a Sigma protocol with a quantum proof of knowledge, and then use prior reprogramming techniques to get a QROM-secure NIZK.
That route uses a simplified extractable LinHC. If we instead start with a more structured extractable LinHC, which is not that different, we can get to a QROM-secure NIZK directly, without even needing the reprogramming technique. This gives us the tightest and simplest transform if we want a NIZK; if we only want a quantum proof of knowledge, we can use the first route. We'll explain our idea in a bottom-up approach, starting from the base example: Lyubashevsky's Sigma protocol for the SIS/LWE relation, which is secure in the classical ROM. Here we have a matrix A and a vector u, and the witness is a short vector e satisfying u = A e. The prover samples a short vector r from some distribution, say a Gaussian distribution, computes the vector w = A r, and sends it to the verifier. The verifier samples a short element c from the ring R_q and sends it to the prover. The prover sets z = c·e + r and performs a rejection sampling step, not really important for this talk, to maintain the shortness of z without revealing the witness e. It then sends z, and the verifier checks that z is short and that A z = c·u + w. Basically this is Schnorr's protocol in lattice language. The main question is that, to eventually show the proof of knowledge of the Fiat-Shamir NIZK, we want to extract a witness from a single transcript. But to invoke the special soundness of this Sigma protocol we need two transcripts, and the question is how to obtain two valid transcripts without rewinding. So here is the first step: we add a linear homomorphic commitment. When the prover computes the first flow w, it also commits to the witness e and to the randomness r, and sends the commitments com_e and com_r to the verifier. The verifier samples the challenge c and sends it to the prover as before.
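To keep the base protocol concrete before we continue augmenting it, here is a toy sketch over the integers. The parameters, the bound, and the scalar challenge are all made up for readability; the real scheme works over polynomial rings, uses Gaussian sampling, and includes the rejection sampling step omitted here.

```python
import numpy as np

# Toy parameters (illustrative only; real schemes use polynomial rings,
# much larger dimensions, and carefully chosen distributions).
q, n, m = 8380417, 4, 8
BOUND = 50  # hypothetical shortness bound on z for these toy ranges

rng = np.random.default_rng(0)
A = rng.integers(0, q, size=(n, m))   # public matrix
e = rng.integers(-2, 3, size=m)       # short secret witness
u = A @ e % q                         # statement: u = A e mod q

# Prover, first flow: sample a short r and send w = A r mod q.
r = rng.integers(-20, 21, size=m)
w = A @ r % q

# Verifier: sample a short challenge (a small scalar here).
c = int(rng.integers(0, 16))

# Prover, response: z = c*e + r (rejection sampling omitted).
z = c * e + r

# Verifier: check that z is short and that A z = c*u + w (mod q).
assert int(np.max(np.abs(z))) <= BOUND
assert np.array_equal(A @ z % q, (c * u + w) % q)
```

The verification equation holds because A z = A(c·e + r) = c·(A e) + A r = c·u + w mod q, which is exactly the Schnorr-style check in lattice language.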
Now the prover sets z which, when you look at it, is a linear homomorphic operation over the committed messages e and r. It does the same for the commitment randomness: it creates the opening randomness δ_z from δ_e and δ_r, and sends δ_z to the verifier. What does the verifier do? It performs the normal checks, and it also checks that the commitments com_e and com_r were constructed correctly, using z and the opening δ_z: it checks that the homomorphic combination c·com_e + com_r opens to z under δ_z. If this equation holds, the verifier accepts, and this implicitly checks that the two commitments were created honestly. This is still a standard Sigma protocol: if the commitment is, say, hiding, then we retain honest-verifier zero-knowledge and special soundness; we can simply ignore the added elements and invoke the properties of the base Sigma protocol. So at this point it doesn't provide us anything new. As a second step, we add extractability. Assume the commitment scheme is public-key based, where in the real world the public key is just a random bit string. In the proof, we swap this for a new public key pk* with a trapdoor τ. This trapdoor allows us to extract from any honestly generated commitment: given a commitment of this form, ExtractCom recovers the committed element x. What we want to show, using this added extractability, is that given only a single valid transcript, we can extract a witness e. A naive and incorrect argument would be: just run the extractor on com_e, which the verifier receives in the first round. But this is obviously wrong, because there is no guarantee that com_e is valid. The only thing the verifier, the simulator, or the reduction algorithm knows is that the combined commitment is correct, since we already know that it opens to z with δ_z.
So it seems like there's no place to use this trapdoor. In the third step, we show how to argue extraction correctly. The simple observation is that com_e and com_r are prepared before the challenge c. For simplicity, assume for now that the challenge set is small, even though eventually we want it to be exponentially large. Further assume we are guaranteed that there is another valid transcript that the verifier would accept. With these two assumptions, the Sigma protocol extractor simply runs through all challenges c_i, computes the commitment c_i·com_e + com_r for each, and tries to extract from it. By assumption, the verifier accepts on some challenge c′, which means the commitment computed for c′ is valid, so the trapdoor extracts the corresponding z′ as promised. So given the first transcript, we run this extractor through the whole challenge set; it finds z′ within polynomially many steps, and we obtain two valid transcripts. The remaining question is how to make this argument work for an exponentially large challenge set. The point above was that the challenge set had to be polynomially large so we could run through the whole space. An easy modification is to make the extractor probabilistic: we fix an upper bound N, sample random challenges, and keep running the extraction until it succeeds. Does this work, and how do we set N? This is a simple statistical argument.
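The probabilistic extractor loop can be sketched as follows. The bound N ≈ λ/ε is taken on faith here and justified by the counting argument on the next slide; the `answerable` predicate is a hypothetical stand-in for "c′ yields a valid commitment that the trapdoor can extract from", modeled with density ε.

```python
import math
import secrets

LAMBDA = 128          # security parameter
CHAL_BITS = 32        # toy stand-in for an exponentially large challenge set
eps = 1 / 1000        # assumed success probability of the cheating prover

def answerable(c: int) -> bool:
    """Hypothetical density-eps model of 'the prover could have answered
    challenge c', i.e. c*com_e + com_r is a valid, extractable commitment."""
    return c % round(1 / eps) == 0

# Sample N ~ lambda/eps random challenges; we hit an answerable one with
# overwhelming probability (failure probability <= (1-eps)^N ~ e^(-lambda)).
N = math.ceil(LAMBDA / eps)
z_prime = None
for _ in range(N):
    c_prime = secrets.randbits(CHAL_BITS)
    if answerable(c_prime):
        # Real extractor: compute com_z' = c'*com_e + com_r homomorphically
        # and run ExtractCom with the trapdoor to recover z'.
        z_prime = c_prime  # placeholder for the extracted response
        break
```

Note that nothing in this loop reruns the adversary, which is why the argument is straight-line and indifferent to the adversary being quantum.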
So let's assume an adversary has non-negligible advantage ε in completing the Sigma protocol. By a very standard counting argument, there must be at least ε·|C| challenges that the adversary would have been able to answer. Therefore we can set N to be roughly λ/ε, where λ is the security parameter; after sampling this many random challenges, the extractor hits another valid challenge with overwhelming probability. This analysis works regardless of whether the adversary is quantum: the moment the adversary produces a valid transcript, we can invoke this argument, whether the adversary is quantum or classical. So this is how we make it work. The summary so far: using this simple extractable LinHC, we get a Sigma protocol with a quantum proof of knowledge. Extending this to a QROM-secure NIZK is very easy, which is covered by this blue line. We don't have time to explain it in detail in this talk, but it follows very naturally from previous results. Some details worth mentioning: in our NIZK transform, we require a slightly stronger flavor of extractable LinHC, because the Sigma protocol is only computationally honest-verifier zero-knowledge. The analysis also extends easily to multi-round protocols. And since the commitment key is a random binary string, in the real protocol we can let the random oracle output the public key rather than relying on a CRS, a common reference string. Also, looking carefully, our NIZK is actually dual-mode: depending on the public key we use, it is statistically sound or statistically zero-knowledge. Finally, let me give a brief overview of how to construct such an extractable LinHC. In fact, dual Regev PKE is already what we want. The commitment key in the real world is a random binary string, which will be parsed as matrices A and B.
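Anticipating the construction, here is a toy single-bit dual-Regev sketch of the extraction mode. The parameters are illustrative and the real extractable LinHC commits to vectors, with noise chosen so that the homomorphic combination c·com_e + com_r still decrypts correctly.

```python
import numpy as np

q, n, m = 2**16, 8, 32
rng = np.random.default_rng(1)

# Real-world commitment key: a uniformly random string parsed as (A, u).
# In extraction mode we instead sample u = A r with a short secret r,
# so (A, u) is an actual dual Regev public key with secret key r.
A = rng.integers(0, q, size=(n, m))
r_sk = rng.integers(-1, 2, size=m)     # short dual Regev secret key
u = A @ r_sk % q

def commit(bit: int):
    """Commit = dual Regev encrypt: c1 = A^T s + x, c2 = <u,s> + x' + bit*q/2."""
    s = rng.integers(0, q, size=n)
    x = rng.integers(-2, 3, size=m)    # small noise
    x2 = int(rng.integers(-2, 3))
    c1 = (A.T @ s + x) % q
    c2 = (int(u @ s) + x2 + bit * (q // 2)) % q
    return c1, c2

def extract(c1, c2):
    """With the trapdoor (the secret key), decrypt: c2 - <r, c1> ~ bit*q/2."""
    v = (c2 - int(r_sk @ c1)) % q
    return 1 if q // 4 <= v < 3 * q // 4 else 0
```

Linear homomorphism is just coordinate-wise addition and scaling of ciphertexts, which is how the verifier-side combination c·com_e + com_r is formed.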
This corresponds to a dual Regev public key that has no secret key. To commit, we simply encrypt under this random public key, and the commitment randomness of the extractable LinHC is just the randomness of the encryption scheme. Linear homomorphism follows naturally because dual Regev PKE is linearly homomorphic, so we can compute the c·e + r term homomorphically. In extraction mode, we set the public key to be an actual dual Regev public key with a secret key, and using that secret key we can always decrypt and recover the c·e + r term. So this is a very basic way of constructing an extractable LinHC; in the paper, we also provide a second method that uses an NTRU-like public-key encryption scheme to get better efficiency. As a concrete application, we benchmark on the BLS19 protocol, which provides an exactly sound five-round PCIP. First, it's not obvious whether the Fiat-Shamir transform applies, because it's a five-round protocol and we don't know whether it's collapsing. The Unruh transform doesn't work because it's not a Sigma protocol, but the modified Unruh transform might apply, so we benchmark against the modified Unruh transform assuming that it works. The normal classical-ROM NIZK has a proof size of 812 kilobytes. Applying the modified Unruh transform results in a 45-megabyte proof, which is over 50 times larger than the classical one. Applying our extractable LinHC, the proof is only at most 2.6 times larger, at 2,071 kilobytes. So compared to the Unruh transform, it provides a much smaller proof size. That was our talk, and here are the summary and open problems. Thank you for listening.