Hi, this is a talk about Compact NIZKs from Standard Assumptions on Bilinear Maps. My name is Shuichi, and this is joint work with Ryo Nishimaki, Shota Yamada, and Takashi Yamakawa. Our result is basically the same as our title: a compact NIZK from standard assumptions. What we mean by compact here is that the proof size is going to be |C| + poly(λ) for all NP relations, where C is the verification circuit that verifies the NP relation. And if the NP relation is computable in NC1, we further get a result where the proof size is |w| + poly(λ), which is very short. The assumption we use is matrix DDH (MDDH), which includes as special cases the DLIN and SXDH assumptions. We will first provide some background on CRS NIZKs. A CRS NIZK runs between a prover and a verifier, where the prover holds a statement x and a witness w. Both parties are given a common reference string (CRS) from a trusted setup, the prover provides a proof π, and the verifier either accepts or rejects. We require three properties from a CRS NIZK: completeness, soundness, and zero knowledge. Completeness dictates that if the statement x is in the language, then the verifier should accept. Soundness is the opposite: a cheating prover holding a statement which is not in the language should not be able to fool the verifier into accepting. Finally, zero knowledge says that any verifier, just by looking at the proof π, cannot learn anything beyond the fact that the statement x is in the language; in particular, the witness remains hidden by the proof. The motivation of this work is similar to that of a lot of prior NIZK-related work: we want to make the proof π as small as possible, and in particular minimize its dependency on the circuit size.
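The CRS NIZK syntax just described can be sketched in a few lines. This toy only illustrates the interface (Setup, Prove, Verify) and the completeness check; revealing the witness as the "proof" is of course not zero-knowledge, and the hash-preimage relation is only an illustrative stand-in.

```python
import hashlib

# Toy illustration of the CRS-NIZK *syntax* for the relation
# R(x, w): x == SHA-256(w). NOT zero-knowledge: the "proof" is w itself.
# All names here are illustrative, not the actual construction.

def setup():
    # A trusted party would sample the common reference string here.
    return b"crs"

def prove(crs, x, w):
    # A real prover outputs a proof hiding w; this toy just outputs w.
    return w

def verify(crs, x, pi):
    # Accept iff the proof convinces us that x is in the language.
    return hashlib.sha256(pi).hexdigest() == x

w = b"witness"
x = hashlib.sha256(w).hexdigest()
crs = setup()
print(verify(crs, x, prove(crs, x, w)))  # True (completeness)
```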
And we want to base this on standard assumptions. This is the state of the art in this regime, and there has been a lot of work on making the proof size smaller or the assumptions more standard. When you look at the first four rows, these are all based on very well-known assumptions; however, the proof sizes have a multiplicative dependency on the circuit size and the security parameter. When we look at the bottom two rows, the proof sizes are very small, either |w| + poly(λ) or |C| + poly(λ), so the dependency is only additive. However, when we look at the assumptions, one requires FHE, which we only know how to construct from lattices, and the other requires a non-static assumption called the CDHER assumption, which is basically a Diffie-Hellman-type assumption. So we want to get the best of both worlds, and that is what we do in this work: the proof size is compact, |C| + poly(λ) or |w| + poly(λ), while the assumption is the standard SXDH or DLIN assumption. We will now explain our approach. As an intermediate goal, we first construct a compact designated-prover NIZK (DP-NIZK). This is not a CRS NIZK, but we will show later how to compile this DP-NIZK, in a non-black-box manner, into our final goal of a CRS NIZK. A DP-NIZK is basically a CRS NIZK where the trusted setup additionally provides the prover with a secret proving key, so the whole system is designated to the sole prover who possesses this key. However, the proof itself is publicly verifiable, so the verifier can be anybody. The high-level approach to construct a DP-NIZK is the following.
The prover encrypts its witness using a secret-key encryption (SKE) key K. At this point we are not going to specify how K is generated: the prover might generate it, or the trusted setup might provide K to the prover as the designated proving key. We only state that the witness is encrypted under this SKE scheme. Here we require the ciphertext to have only additive overhead, meaning its size is |w| + poly(λ), where the witness is the message; this can be instantiated from, let's say, the CDH assumption. The next step is that the actual proof includes some authentication for the fact that CT is an encryption of this w and that this witness satisfies the relation. You can think of this authentication as a signature: a primitive that attests to the fact that CT is a valid encryption of a satisfying witness. And here we want this authentication part to be independent of the circuit size; then everything will be compact. To this end, we consider a primitive called a constrained signature. For those of you who know it, this is very similar to an attribute-based signature. In a constrained signature, there is a public verification key, and a trusted setup provides the signer with a signing key associated with a function F. A signer holding a message M can sign using the signing key for F and create a signature σ. Correctness dictates that if F(M) = 1, that is, the message satisfies the function, then the verifier will always accept: the signature is valid.
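The constrained-signature interface just described might be sketched as follows. This toy illustrates only the syntax and the correctness condition F(M) = 1; HMAC stands in for the real pairing-based scheme, so there is no genuine public verifiability or unforgeability here (the verifier below holds the master secret purely to keep the sketch runnable). All names are illustrative.

```python
import hmac, hashlib

# Toy constrained-signature (CS) sketch: syntax and correctness only.

def cs_setup():
    msk = b"master-secret"
    vk = msk  # TOY ONLY: a real vk reveals nothing about msk
    return vk, msk

def cs_keygen(msk, F):
    # Trusted setup hands the signer a key tied to the constraint F.
    return (msk, F)

def cs_sign(sk_F, m):
    msk, F = sk_F
    if not F(m):          # correctness: a signature exists only if F(m) = 1
        return None
    return hmac.new(msk, m, hashlib.sha256).digest()

def cs_verify(vk, m, sigma):
    expected = hmac.new(vk, m, hashlib.sha256).digest()
    return sigma is not None and hmac.compare_digest(expected, sigma)

vk, msk = cs_setup()
F = lambda m: m.startswith(b"ok")          # the constraint
sk_F = cs_keygen(msk, F)
print(cs_verify(vk, b"ok: hello", cs_sign(sk_F, b"ok: hello")))  # True
print(cs_sign(sk_F, b"bad message"))                             # None
```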
Unforgeability says that a signer holding the signing key for a function F will not be able to create a valid signature on a message which does not satisfy F. And the final property is optional: context hiding. What it essentially means is that the signature does not leak anything about the function F beyond the fact that F is satisfied by the message M; it simply hides the function. Kim and Wu in 2018 showed a way to construct a DP-NIZK from constrained signatures, and we observe that if you start from a compact constrained signature, you get a compact DP-NIZK. How do we do that? It follows easily from the Kim-Wu transform. The trusted setup gives the prover a constrained signing key and the SKE key K. The signing key is associated with the function F_K, which is defined as follows: it takes the public statement x and the ciphertext CT, first decrypts CT using the secret key K hardwired into the function, gets the decrypted value back, and checks the relation on it. So if the prover correctly encrypted the witness w, then F_K outputs 1, because the decryption recovers the witness. What the prover does is simply sign the message (x, CT) using its signing key: since the function outputs 1 on this message, this yields a valid signature. The verifier can verify this signature and thus the entire proof. If the ciphertext has only additive overhead and the signature is independent of the circuit size, then the overall proof is compact. And the security argument is very simple.
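The Kim-Wu-style transform just described can be sketched end to end with toy primitives: an XOR one-time pad for the SKE (additive, in fact zero, ciphertext overhead) and HMAC for the constrained signature (correctness only, no real unforgeability or public verifiability). The factoring relation and all names are illustrative, not the actual instantiation.

```python
import hmac, hashlib

# Toy DP-NIZK via the Kim-Wu transform: proof = (CT, sigma), where sigma
# is a constrained signature under F_K(x, CT) = R(x, Dec(K, CT)).

def ske_enc(K, w):  return bytes(a ^ b for a, b in zip(K, w))
def ske_dec(K, ct): return bytes(a ^ b for a, b in zip(K, ct))

def R(x, w):
    # NP relation: w encodes a nontrivial factor of x.
    m = int.from_bytes(w, "big")
    return 1 < m < x and x % m == 0

MSK = b"cs-master-secret"  # toy CS: signing/verifying = HMAC under MSK

def cs_sign(F, m):
    return hmac.new(MSK, m, hashlib.sha256).digest() if F(m) else None

def cs_verify(m, sigma):
    return sigma is not None and hmac.compare_digest(
        hmac.new(MSK, m, hashlib.sha256).digest(), sigma)

def prove(K, x, w):
    # Designated prover: encrypt w, then sign the message (x, CT).
    ct = ske_enc(K, w)
    def F_K(m):
        x_in, ct_in = int.from_bytes(m[:8], "big"), m[8:]
        return R(x_in, ske_dec(K, ct_in))
    sigma = cs_sign(F_K, x.to_bytes(8, "big") + ct)
    return ct, sigma          # proof size: |w| + poly(lambda)

def verify(x, proof):
    ct, sigma = proof
    return cs_verify(x.to_bytes(8, "big") + ct, sigma)

K = b"\x13\x37\x42\x99"
x, w = 91, (7).to_bytes(4, "big")   # 91 = 7 * 13
print(verify(x, prove(K, x, w)))    # True
```

An honest proof verifies, while signing fails for a non-satisfying witness (the toy signer refuses, mirroring what unforgeability would guarantee against a cheating prover).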
Soundness follows from the unforgeability of the constrained signature: for a cheating prover holding a statement not in the language, there is no corresponding witness that makes the function output 1, so if it produces a valid-looking proof, that proof contains a signature on a message on which the function outputs 0, which directly breaks the unforgeability of the CS. Zero knowledge also holds quite straightforwardly: first, by context hiding, the signature leaks no information about the SKE key K, and at that point we can invoke the IND-CPA security of the SKE to get rid of the witness w, which gives zero knowledge. So as an intermediate goal, we constructed a new compact constrained signature. To do that, we first build an adaptively secure compact-ciphertext key-policy ABE from the MDDH assumption. This is the new part: we build on prior ciphertext-aggregation techniques and the very recent extension of the Kowalczyk-Wee technique for obtaining adaptively secure KP-ABE from similar assumptions. Then we use the folklore conversion of a compact-ciphertext KP-ABE into a compact constrained signature scheme, and that is how we get a compact constrained signature from the MDDH assumption. Finally, plugging in the Kim-Wu conversion, we get a compact DP-NIZK from the MDDH assumption. This is already an improvement over our prior work, where we constructed a compact DP-NIZK from a non-static assumption, which was a different type of assumption. And just for reference, Kim and Wu constructed a compact constrained signature, and hence a DP-NIZK, from LWE. The remaining part of this talk is how we generalize this to a CRS NIZK, because when you think about it, we still have this secret proving key. So the question is: how do we remove it from our DP-NIZK?
This is the second part of the talk: removing the secret proving key. The key notion we will look into is a decomposable online/offline efficient constrained signature. As a first attempt, we try to get rid of the secret information provided to the prover at setup. A natural first approach is to get rid of the SKE secret key: we now let the prover sample the SKE secret key on its own. With that modification, we can no longer hardwire the secret key K into the function F, because the trusted setup does not even know it. So the function F now simply takes the SKE secret key as an additional input and evaluates the same computation, where K is given externally rather than hardwired. The question is: what does the proof look like now? As before, the prover encrypts the witness w under the secret key K to create the ciphertext, and it can still sign, because it provides all three inputs (x, CT, K), on which the function F outputs 1, so it can create a valid signature. However, what is the actual proof π going to be? Obviously, if we included K in the proof, we would no longer have zero knowledge, because the verifier could just decrypt the ciphertext. On the other hand, the problem with not sending K is that the verifier cannot run the verification algorithm anymore: the CS verification algorithm requires the full input message, so it requires the secret key K. So verification no longer works. The second attempt to fix this idea is the following.
We maintain the same procedure up to this point, so the prover still creates the signature. The question is how to let the verifier check this σ. To this end, we are going to use a non-compact NIZK and prove that the signature is valid with respect to the secret key K. We use a non-compact NIZK here because, obviously, we do not yet have a compact one, and if we are happy with non-compactness, we can base it on natural standard assumptions, like Groth-Ostrovsky-Sahai. To be a bit more concrete, the statement used for the underlying NIZK is this x̂, the witness is the secret key K and the signature σ, and we prove that the signature is valid with respect to the three message components (x, CT, K) under the verification key. However, does this make sense? It seems like we are using a NIZK to create a NIZK; is this a step forward? It is, in fact, going to be a NIZK in the sense that it has soundness and zero knowledge without any problem, so we do have a NIZK. The remaining thing is to check whether it has a compact proof or not. Recall that we are using a non-compact NIZK for the statement x̂, and unfortunately, since this statement is going to be very large, we no longer get a short proof. To see this: the statement takes the original statement x as input, so the circuit used by the non-compact NIZK to check the relation is at least as large as the original circuit. Moreover, for our specific constrained signature scheme, the verification key is actually much larger than the circuit size.
So in the end, the proof size scales at least as |C| · poly(λ), and there is really no point in going through this complex route via the constrained signature scheme: if we were happy with |C| · poly(λ), we could have just used any non-compact NIZK to start with. However, the main observation is that by leveraging some additional features of our constrained signature scheme, we can redeem this idea and get something nice. This brings us to our next idea: using an online/offline efficient constrained signature scheme. The idea is to make the online computation light by pushing all the heavy computation to an offline phase. What we mean by this: recall that in a constrained signature scheme there is a verification key and a message. We provide a very heavy offline computation phase which compresses vk and M into a single vk_M; this verification key is now bound to the message M. The good thing is that vk_M itself has only a fixed polynomial size, and you can verify a signature without the message M anymore, using only this verification key, which has the message M encoded into it. So what happens now is that you have a signature for a message M under the verification key, we aggregate in a computationally heavy offline phase, and we get the aggregated vk_{x,CT,K}, which has fixed polynomial size. The verification itself is then compact, because the verification algorithm takes only this aggregated verification key as input. So the verification key is very small and the online verification algorithm is very small: we have a very efficient online phase. The question is, is this enough?
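The online/offline split just described can be illustrated with a toy: a heavy offline phase folds the long message M into a short digest vk_M, after which the online check touches only fixed-size values and never sees M again. HMAC stands in for the signature, and all names are illustrative, not the actual scheme.

```python
import hmac, hashlib

# Toy online/offline verification: offline work scales with |M|,
# online work is constant and independent of M.

VK = b"verification-key"

def sign(M):
    return hmac.new(VK, M, hashlib.sha256).digest()

def offline_aggregate(vk, M):
    # Heavy phase: cost proportional to |M|; output is a fixed 32 bytes.
    return hmac.new(vk, M, hashlib.sha256).digest()

def online_verify(vk_M, sigma):
    # Constant-size inputs, constant work: M is no longer needed.
    return hmac.compare_digest(vk_M, sigma)

M = b"x" * 1_000_000             # a very long message (statement + ciphertext)
sigma = sign(M)
vk_M = offline_aggregate(VK, M)  # heavy, but offline
print(online_verify(vk_M, sigma))  # True
```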
Unfortunately, it is not, because when we look at this aggregated verification key, it has K attached, and the verifier does not know K. So the problem remains: the verifier cannot construct this aggregated verification key on its own, and hence, in the NIZK proof, the verifier will not even possess it. This brings us to our final idea: using a decomposable online/offline efficient constrained signature scheme. What we mean by that: first, the signature is again with respect to the message (x, CT, K), where K is the part we want to hide, and x and CT are the very large part, which is public. What we do is have an offline phase which partially aggregates the verification key with respect to part of the message: we compress only the verification-key components that depend on x and CT, while all the components corresponding to the SKE key remain untouched. This part can be done publicly. Then we use a non-compact NIZK internally: we view this partially aggregated key as the statement and the secret key K as the witness, compress further into the fully aggregated verification key, and do this inside the non-compact NIZK. We further view the signature as witness again, verify that the signature is valid under this fully aggregated verification key, and prove that verification accepts. All of this is done within the non-compact NIZK. The great thing is that all this computation depends only polynomially on the security parameter and not at all on the statement x or the circuit C, because the statement, i.e., the partially aggregated verification key, now has fixed polynomial size thanks to the aggregation phase.
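The decomposition just described can be illustrated with hash chaining as a stand-in for the real aggregation: a public, heavy step folds the long public part (x, CT) into a short digest, and a second, fixed-size step folds in the secret K. That second step is the only one that must be performed inside the non-compact NIZK, which is why the internal statement stays poly(λ)-sized. All names are illustrative.

```python
import hashlib

# Toy decomposable aggregation: public heavy phase, then a secret
# fixed-size phase (the step done inside the non-compact NIZK).

def partial_aggregate(vk, X, CT):
    # Public phase: cost scales with |X| + |CT|, output is fixed size.
    return hashlib.sha256(vk + X + CT).digest()

def complete_aggregate(partial, K):
    # Secret phase: both inputs are fixed size, so the circuit proven
    # inside the non-compact NIZK is poly(lambda), independent of |X|, |CT|.
    return hashlib.sha256(partial + K).digest()

vk, K = b"vk", b"secret-key"
X, CT = b"x" * 500_000, b"c" * 500_000
vk_full = complete_aggregate(partial_aggregate(vk, X, CT), K)
print(len(vk_full))  # 32: fixed size regardless of |X|, |CT|
```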
So everything that gets fed into this NIZK, statement and witness alike, is of fixed polynomial size. Now let me bring this compact constrained signature with decomposable online/offline efficiency back into our compact NIZK story; this is our final construction. As before, the prover first samples the secret key, encrypts the witness with it, and signs. After that, it partially aggregates the verification key with respect to the public part of the message, namely x and CT; the verifier can do this too later on, with knowledge of the statement x and the ciphertext that is sent along. Then the prover runs the non-compact NIZK to prove that K completes this partial aggregation into the fully aggregated verification key, and that this fully aggregated key makes the signature valid. This whole part is very compact. So in the end, the proof π̂ is compact and the ciphertext has only additive overhead, so the whole proof π is compact. We note that our constrained signature scheme has all these very nice properties. One inefficiency: our CS is for NC1 circuits only. When we look at the function F, it contains the decryption circuit and the NP relation C. So if the original NP relation C is in NC1, and furthermore the SKE decryption circuit is in NC1, which is the case for our CDH-based SKE, then what we get is a very compact proof: |w| + poly(λ). However, this still works for circuits beyond NC1, because we can use witness-expansion techniques: we view the internal wire values as witness again and decompose the whole circuit into a very low-depth circuit, so all the internal wires are converted into witness.
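The witness-expansion technique just mentioned can be sketched concretely: treat every internal wire value as part of the witness, so that checking the whole circuit reduces to one constant-depth check per gate. The gate encoding below is illustrative.

```python
# Toy witness expansion: a circuit as a gate list over wire indices.
# Expanding the witness to all wire values makes each check local
# (three wires per gate), i.e., a very low-depth relation.

GATES = [("AND", 0, 1, 3), ("XOR", 2, 3, 4)]   # (op, in1, in2, out)
OPS = {"AND": lambda a, b: a & b, "XOR": lambda a, b: a ^ b}

def expand_witness(inputs):
    # Honest prover: evaluate the circuit, recording every wire value.
    wires = dict(enumerate(inputs))
    for op, a, b, out in GATES:
        wires[out] = OPS[op](wires[a], wires[b])
    return wires

def check_gate(wires, gate):
    # Each check reads only three wires -> constant depth per gate.
    op, a, b, out = gate
    return wires[out] == OPS[op](wires[a], wires[b])

w = expand_witness([1, 1, 0])
print(all(check_gate(w, g) for g in GATES), w[4])  # True 1
```

The expanded witness now has size proportional to the circuit, which is exactly why this route yields |C| + poly(λ) rather than |w| + poly(λ).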
So the witness size becomes |C|, the size of the circuit, and this results in a proof size of |C| + poly(λ). As an additional result, we also get a perfect NIZK, with proof size |w| · poly(λ), from the DLIN assumption. One caveat, however, is that this scheme is only for NC1 relations, so it would be very interesting to generalize it to all NP relations. The trivial witness-expansion approach does not work here anymore, because then |w| would just be replaced by |C| · poly(λ), which would be worse than prior results. So this is the summary, and this is the end of the talk. All questions will be answered by Ryo. Thank you for listening.