Thank you for the introduction. My name is Shu, and this is joint work with Ryo Nishimaki, Shota Yamada, and Takashi Yamakawa from NTT and AIST. Thanks, everybody, for sticking around until the last session. As the title suggests, this is going to be a talk about compact NIZKs from various assumptions. Our results can be broken up into four pieces. They are all about NIZKs, and about getting compact notions of NIZKs. The first three are going to be about short proofs, and the last one is about efficient provers. For this talk, I'll use probably 85 to 90% of my time on the first two results regarding short proofs, and the last 15% to explain what an efficient-prover NIZK is. As background, I'm sure everybody has seen this at some point during these crypto conferences, but let me give a brief overview of what a NIZK is. Here, a prover wants to prove to the verifier that she knows a witness corresponding to the statement x in the language. Completeness tells you that the proof π should verify correctly. Soundness is about a cheating prover: if she has a statement which is not in the language, then the proof should not verify. And zero-knowledge says that the proof π does not leak any information on the witness, other than the fact that the statement really is in the language. When you have these three properties, you get a NIZK. We'll be considering various types of NIZKs, and this is, I think, the most standard one, called a CRS NIZK. The trusted setup here constructs a public common reference string, and in this model any prover and any verifier can participate using this CRS. We can consider a relaxation of this, where the trusted setup now provides a private verification key to one particular verifier, and that's why it's called a designated-verifier NIZK.
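To fix the syntax, here is a minimal sketch of the three algorithms (setup, prove, verify), using a hypothetical hash-preimage relation I made up for illustration. The toy proof simply tags the witness, so it is complete and sound for this relation but deliberately not zero-knowledge; it only shows the API shape of a CRS NIZK.

```python
import hashlib
import secrets

# Toy NP relation (my own example): x is the SHA-256 hash of the witness w.
def relation(x: bytes, w: bytes) -> bool:
    return hashlib.sha256(w).digest() == x

def setup() -> bytes:
    # The trusted setup outputs a public common reference string (CRS).
    return secrets.token_bytes(32)

def prove(crs: bytes, x: bytes, w: bytes) -> bytes:
    assert relation(x, w)
    # A real NIZK would output a proof pi that hides w; this toy proof
    # just tags the witness, so it is NOT zero-knowledge.
    return b"pi:" + w

def verify(crs: bytes, x: bytes, pi: bytes) -> bool:
    return pi.startswith(b"pi:") and relation(x, pi[3:])

crs = setup()
w = b"secret witness"
x = hashlib.sha256(w).digest()
pi = prove(crs, x, w)
assert verify(crs, x, pi)                             # completeness
assert not verify(crs, secrets.token_bytes(32), pi)   # wrong statement rejected
```

The point is only that all three algorithms share the public CRS and nothing else; the model variants below change exactly this key distribution.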
In this world, anybody can be a prover if she has a witness, but the only person who can verify the proof π is the designated verifier holding k_V, the verification key. Obviously, you can consider the opposite flavor of this, the private proving-key setting, which is the designated-prover NIZK. And finally, you can also consider the relaxation where both the prover and the verifier have private information; this is called a preprocessing NIZK. So when you think about it, say you're in the preprocessing setting and you find a way to get rid of this k_P and make it public, then you have upgraded it into a DV-NIZK. And from a DV-NIZK, a designated-verifier NIZK, if you get rid of the verifier's secret information, then you get a CRS NIZK. So you can build up your scheme by getting rid of these components. OK, so that's the very basic background, and I'll be talking about NIZKs with short proofs now. This is part one of part one: CRS NIZKs with short proofs. The motivation for this part is that, so far, every CRS NIZK whose proof size is independent of the size of the circuit C computing the NP relation requires strong tools. If you want the proof size to be very small, then we require iO, FHE, knowledge assumptions, or compact homomorphic trapdoor functions, where the latter require the LWE assumption. Without using these strong tools, what we know is, say, the famous Groth-Ostrovsky-Sahai CRS NIZK, which has proof size λ times the circuit size. And the shortest one we know that doesn't use those kinds of assumptions is the one by Groth at Asiacrypt 2010, whose proof size is polylog(λ) times the circuit size. So when you look at this, everything requires a multiplicative overhead in the circuit size.
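The four models and the upgrade path just described (publish the prover key to go from preprocessing to DV, publish the verifier key to go from DV to CRS) can be summarized in a small sketch. The table and the `publish` helper are just illustrative bookkeeping I wrote for this talk, not part of any real scheme.

```python
# Which party-specific keys are private in each NIZK model.
MODELS = {
    "CRS": {"prover_key": "public",  "verifier_key": "public"},
    "DV":  {"prover_key": "public",  "verifier_key": "private"},
    "DP":  {"prover_key": "private", "verifier_key": "public"},
    "PP":  {"prover_key": "private", "verifier_key": "private"},
}

def publish(model: str, key: str) -> str:
    # "Getting rid of" a private key, i.e. making it public, upgrades the model.
    keys = dict(MODELS[model])
    keys[key] = "public"
    return next(m for m, k in MODELS.items() if k == keys)

# The bootstrapping path mentioned in the talk:
assert publish("PP", "prover_key") == "DV"
assert publish("DV", "verifier_key") == "CRS"
```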
So the question we wanted to ask is: can we bring this down to an additive overhead? That is, can we make the proof size |C| + poly(λ)? And that's what we did in the first part of this work: we construct the first CRS NIZK based on a falsifiable pairing-group assumption, with proof size |C| + poly(λ). The starting point of this work is the DP-NIZK that we proposed at Eurocrypt this year. So it's going to be based on this KNYY19 construction of a DP-NIZK with short proof size from the CDHR assumption. This is a non-static Diffie-Hellman-type assumption which is secure in the generic group model, and in particular it is a falsifiable assumption. And our approach, as I explained, is to convert this DP-NIZK into a CRS NIZK by trying to get rid of the designated prover's private key. So that's the very general, high-level idea of what we want to do here. Let me make a quick review of our work at Eurocrypt. Our approach there used the Kim-Wu conversion from Crypto 2018, which allows you to convert any compact homomorphic signature scheme into a designated-prover NIZK. The main contribution of that work was a new compact HS from the CDHR assumption, and following this path, we were able to get a DP-NIZK from the CDHR assumption. In retrospect, what we observed from this construction was that it seemed to have a lot of nice properties that weren't actually used in the DP-NIZK conversion. So a natural thought was: can we use these extra nice properties of the homomorphic signature to construct a CRS NIZK? I want to tell you that this path is a little difficult: even if you have a homomorphic signature with extra nice properties, it doesn't seem to be applicable to the CRS NIZK setting. To see why, let me explain how the Kim-Wu conversion is inherently limited to the designated-prover setting.
I won't get into detail, but what the Kim-Wu conversion does is this: the trusted setup samples a secret key from the key space of a secret-key encryption (SKE) scheme, runs the homomorphic signature setup to sample a verification key and a signing key, and signs this SKE key to get a signature σ. The CRS is then this verification key, and the prover key is the SKE key together with the signature σ. The details are not that important, but after this the Kim-Wu technique gives you a DP-NIZK using these as the CRS and the proving key. Now I want to explain why this is quite difficult to turn into a CRS setting. To be in the CRS setting, the prover somehow has to sample all of this on her own. But when you look at it, that's going to be pretty hard: if she samples the SKE key on her own, that means she has to run the homomorphic signature setup and signing on her own too, so it seems we can't use the security of the HS scheme anymore, and we don't get soundness. And even if there were a way to get around this issue, it would still be difficult. Since the malicious prover is running this whole thing on her own, she would also have to send the verification key herself, because it can no longer be in the CRS. The actual verification key that we constructed in the KNYY19 work has size λ times the circuit size, and we would have to put it into the proof, so we would lose compactness at the same time. It seems difficult to get a CRS NIZK from this approach. So what we asked in this work is: is there an alternative notion to homomorphic signatures that overcomes this issue? And that's basically what we did.
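As a rough sketch of that setup flow, here is the data flow with placeholder SKE/HS primitives; the bodies of `hs_setup` and `hs_sign` are dummies I made up, and only the order of operations mirrors the Kim-Wu setup described above.

```python
import secrets

# Placeholder primitives standing in for a real SKE scheme and a real
# homomorphic signature (HS) scheme; only the data flow is meaningful.
def ske_keygen() -> bytes:
    return secrets.token_bytes(16)

def hs_setup():
    sk_sign = secrets.token_bytes(16)
    vk = b"vk:" + sk_sign                 # toy stand-in, not a real HS
    return vk, sk_sign

def hs_sign(sk_sign: bytes, msg: bytes) -> bytes:
    return b"sig:" + sk_sign + msg        # toy "signature", not secure

def kim_wu_setup():
    ske_key = ske_keygen()                # 1. sample an SKE secret key
    vk, sk_sign = hs_setup()              # 2. run the HS setup
    sigma = hs_sign(sk_sign, ske_key)     # 3. sign the SKE key
    crs = vk                              # public: the HS verification key
    k_P = (ske_key, sigma)                # private: designated prover's key
    return crs, k_P

crs, (ske_key, sigma) = kim_wu_setup()
assert crs.startswith(b"vk:") and sigma.startswith(b"sig:")
```

The obstruction discussed above is visible here: moving to the CRS setting would mean the prover runs all three numbered steps herself, so the HS unforgeability game no longer applies.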
So in our work, we formalized the notion of a homomorphic equivocal commitment (HEC), which is similar to a homomorphic signature but designed to get around the problem I just mentioned. The main syntax is very easy. You can commit to a key (or bit string) k, where the randomness r is the opening. And you have a homomorphic property: you can homomorphically convert this commitment into a commitment to the evaluated message C(k). That is, taking the commitment to k and plugging in a circuit C, you get a homomorphically evaluated commitment, which is now a commitment to C(k). You can also create a homomorphic opening for this evaluated commitment, taking the original randomness r and the circuit C. The informal guarantees we want from this scheme are these. For soundness of the NIZK, we want the binding and hiding properties, which are very standard requirements for a commitment scheme, and we want the evaluated commitments to be binding too. For the compactness notion, we require the verification complexity of the evaluated commitment to be independent of the circuit size |C|. Informally, that means the evaluated commitment and the evaluated opening are much, much smaller than the circuit size; in particular, in our construction, they consist of only a constant number of group elements. Using this HEC, it's really easy to get a CRS NIZK via the Kim-Wu conversion. It's a little bit different, but the main idea is the same; the red part on the slide is the part that differs from the original. For the CRS, it's just going to be the evaluation key. The prover now samples the SKE secret key on her own and commits to it. Then she encrypts; this part is the same as the original Kim-Wu conversion.
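Here is a toy sketch of the HEC syntax, assuming a hash-based stand-in of my own. The toy opening simply reveals (k, r), so it is neither hiding nor compact (a real HEC opening is a constant number of group elements, independent of |C|); it only pins down the algorithms and the correctness condition that the evaluated commitment opens to C(k).

```python
import hashlib
import secrets

H = lambda *parts: hashlib.sha256(b"|".join(parts)).digest()

def commit(k: bytes):
    r = secrets.token_bytes(16)
    return H(k, r), r                      # (commitment, opening)

def eval_commit(com: bytes, C_desc: bytes):
    # Publicly derive the evaluated commitment for circuit description C_desc.
    return H(com, C_desc)

def open_eval(k: bytes, r: bytes, C_desc: bytes):
    # Toy opening just reveals (k, r); a real HEC opening is constant-size.
    return (k, r)

def verify(com_C: bytes, C_desc: bytes, C, m: bytes, opening):
    k, r = opening
    return com_C == eval_commit(H(k, r), C_desc) and C(k) == m

# Homomorphic correctness: the evaluated commitment opens to C(k).
k = b"\x01"
C = lambda kk: bytes([kk[0] * 7])          # toy circuit
com, r = commit(k)
com_C = eval_commit(com, b"mul7")
assert verify(com_C, b"mul7", C, C(k), open_eval(k, r, b"mul7"))
assert not verify(com_C, b"mul7", C, b"\x00", open_eval(k, r, b"mul7"))
```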
So she's just going to encrypt the witness with her secret key, and she constructs a circuit which, on input this secret key, first decrypts the ciphertext to get the witness back and then checks the relation. If the relation evaluates to 1, then this circuit evaluates to 1, too. What that means is that if you homomorphically evaluate the commitment on this circuit, it becomes a commitment to 1. Finally, what you do is use a non-compact CRS NIZK to prove that this evaluated commitment opens to 1. And here we can afford a non-compact, that is, not short, CRS NIZK proof, because the compactness of the HEC makes this statement independent of the original circuit size. You just output all of this. So this is the main construction. In a nutshell, what happened is that we obtained an HEC from the CDHR-based HS of our previous work. And at a high level, what the HEC provides is this: using a homomorphic equivocal commitment, you can convert any non-compact CRS NIZK into a compact CRS NIZK. So you can view it as a generic conversion using this HEC. I don't have enough time, so I'll go into the next part now, part two, which is about DV-NIZKs. This has a different motivation. Recently, at Eurocrypt this year, Couteau and Hofheinz, and us, and Quach, Rothblum, and Wichs all presented the first DV-NIZKs based on the CDH assumption. However, these relied on the FLS NIZK, which uses the graph Hamiltonicity problem, so the proof size becomes very large when you try to construct a concrete NIZK from them; essentially, it's polynomial in λ times the circuit size. So, the same question again: can we make this |C| + poly(λ)? That's what we did in this work: we base it on the same CDH assumption, but we're able to bring the proof size down to |C| + poly(λ).
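The prover's steps can be sketched as follows, with a one-time pad standing in for the SKE scheme and the HEC and the inner (non-compact) NIZK left as placeholders; everything tagged "toy" or "pi-for" here is a stand-in I invented, not the real construction.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def prove(x: bytes, w: bytes, R) -> dict:
    k = secrets.token_bytes(len(w))        # 1. sample SKE key
    com = ("toy-com", len(k))              # 2. commit to k (placeholder HEC)
    ct = xor(w, k)                         # 3. encrypt witness: ct = w XOR k
    # 4. circuit D[x, ct](key): decrypt, then check the relation.
    D = lambda key: 1 if R(x, xor(ct, key)) else 0
    assert D(k) == 1                       # evaluated commitment opens to 1
    # 5. a non-compact inner NIZK proves "com, evaluated under D, opens to 1";
    #    by HEC compactness this statement is independent of |R|.
    inner_pi = "pi-for-(com_D opens to 1)"
    return {"ct": ct, "com": com, "inner_pi": inner_pi}

R = lambda x, w: x == bytes(b ^ 0xFF for b in w)   # toy relation
w = b"\x01\x02"
x = bytes(b ^ 0xFF for b in w)
pi = prove(x, w, R)
# The proof is ct (size |w|) plus short, circuit-independent extras,
# which is where the additive |C| + poly(lambda) shape comes from.
assert len(pi["ct"]) == len(w)
```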
And the starting point, again, at a high level, is the same kind of idea. We base it on the preprocessing NIZK that we constructed at Eurocrypt this year, and we want to get rid of the prover key somehow. What we did last time was construct a context-hiding homomorphic MAC, and using the Kim-Wu compiler, we compiled this homomorphic MAC into a preprocessing NIZK. So now the natural question is the same: can we bootstrap this preprocessing NIZK into a DV-NIZK by getting rid of the prover's secret information? You might think that we can just use the homomorphic equivocal commitment here, but the thing is, we don't have one from the CDH assumption right now. So we have to go through a different approach: this part of the work bootstraps the PP-NIZK into a DV-NIZK using only CDH. Let me get into a little detail on what we did in the previous work for the PP-NIZK. It's really not important, but the verification key is the homomorphic MAC key, which is a finite field element s. And the proving key is, again, a sampled secret key k together with a signature σ; it's really not important why it looks like this, it's just the way it is. After that, using this k_V and k_P, we just do the Kim-Wu conversion. So a first attempt at getting rid of the proving key is to have the prover sample it on her own: she samples this k on her own and the σ on her own. But you can see that doing this uniquely defines the vector r once s is fixed. So this r really can't be programmed in advance: when the prover samples her key, at that point r is defined, and we need a mechanism to send this r to the verifier, because the verifier needs r in the verification key to verify the whole thing.
But the problem is that the prover doesn't even know what this r is, because she doesn't know s. So r is uniquely defined at this point, but there seems to be no way to transfer this r to the verifier. To solve this problem, we use inner-product functional encryption (IPFE): we use the IPFE to implicitly transmit this vector r without leaking the prover key. What we do is include in the verification key an IPFE secret key sk_s, for a fixed vector depending on s. The prover samples the proving key on her own and encrypts (k, σ) with the IPFE scheme, and then, using this k_P, she just computes the proofs. The verifier first decrypts the ciphertext using sk_s to recover r, and then runs the verification algorithm. And the main observation here is that since this is an IPFE scheme, when you take the inner product between (k, σ), which is encrypted in the ciphertext, and that fixed vector, you get r back; when you work out the equation, this is exactly the r that we wanted to transmit to the verifier. So this idea lets us send r implicitly and get around the problem. And as I remarked, the one-key IPFE we need is implied by public-key encryption in general. I didn't tell you about this part, but we also need an additional layer of a non-compact DV-NIZK to prove well-formedness. So what this tells you is that if you have a non-compact DV-NIZK and you plug in a PKE, you can generically convert it into a compact DV-NIZK. Finally, I don't have enough time, but I'll try to walk you through the last section, about efficient provers. This part is quite different from the previous ones, so it has a completely different motivation again.
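The transmission trick can be illustrated with a toy (completely insecure) IPFE in which the ciphertext is just the vector in the clear. Since the talk does not spell out the exact linear relation defining r, I simply posit r = k + σ·s as a stand-in linear function of (k, σ); the real MAC equation differs, but the mechanism is the same.

```python
# Toy (insecure) IPFE over the integers: the ciphertext is the vector in
# the clear, and a functional key for y reveals <v, y>. Real IPFE hides v.
def ipfe_enc(v):
    return list(v)

def ipfe_keygen(y):
    return list(y)

def ipfe_dec(sk_y, ct):
    return sum(a * b for a, b in zip(ct, sk_y))

s = 11                        # verifier's MAC key, unknown to the prover
sk = ipfe_keygen([1, s])      # IPFE key for a vector depending on s, in vk
k, sigma = 4, 6               # prover samples these on her own
ct = ipfe_enc([k, sigma])     # prover sends Enc((k, sigma))
r = ipfe_dec(sk, ct)          # verifier recovers r = k + sigma * s
assert r == k + sigma * s == 70
```

Note that the prover never learns s, yet the verifier ends up holding exactly the r determined by the prover's self-sampled key, which is the point of the trick.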
So there are many papers regarding the efficiency of verifiers, which are basically SNARGs or SNARKs. What we consider in this work is that, as far as our knowledge goes, there seems to be no paper investigating the efficiency of provers. All NIZKs have prover running time that is circuit size times poly(λ). And when you think about it, the prover can always just send the witness, so it seems a little bit of an overkill to require this much computation from the prover. So the question is, can we do better than this? And in this work, we show that we can: we make the prover running time sublinear in the circuit size, times poly(λ). This part is very easy. The main idea is that we're just going to use laconic function evaluation (LFE), a tool recently proposed by Quach, Wee, and Wichs. For people who don't know what this is, it's a very strong and nice primitive that you can base on the LWE assumption. What it allows you to do is this: when you have a circuit C, you can compress it into a digest, where the digest encodes the information of C but is strictly smaller than the circuit size. And using this digest, anybody can encrypt a message m for it and get a ciphertext. The point is that the running time of this encryption algorithm is strictly smaller than the circuit size. The only requirement is that this encryption hides the message m. And here, anybody can decrypt the ciphertext with the circuit C, so there is no notion of secret keys now; anybody can do this, and you get C(m) back, and only C(m). With this, it's very easy to get a prover-efficient NIZK. So what we do is, first, define C as the circuit computing this NP relation R, and we just compress this circuit C into the digest.
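As an interface-only sketch, here is a toy "LFE" in which the digest is a hash of a circuit description and the ciphertext is the message in the clear, so nothing is actually compressed or hidden; it only fixes the compress/enc/dec contract described above. Real LFE (Quach-Wee-Wichs) achieves both properties from LWE.

```python
import hashlib

def compress(C_desc: bytes) -> bytes:
    # Short digest of the circuit; real LFE digests are sublinear in |C|.
    return hashlib.sha256(C_desc).digest()

def enc(digest: bytes, m):
    # Real LFE encryption runs in time strictly smaller than |C|;
    # this toy version just binds the message to the digest.
    return (digest, m)

def dec(C, C_desc: bytes, ct):
    digest, m = ct
    assert digest == compress(C_desc)      # ciphertext is bound to this C
    return C(m)                            # anybody learns C(m), only C(m)

C = lambda m: m % 2                        # toy circuit: parity
ct = enc(compress(b"parity"), 41)
assert dec(C, b"parity", ct) == 1
```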
And for the prover, what she's going to do is just encrypt her statement x and her witness w under this digest. LFE tells you that this encryption runs in time strictly smaller than computing the circuit C. And this ciphertext decrypts to 1, so anybody can decrypt it and check that it equals 1. But the thing is, we also have to append a well-formedness proof, computed with a prover-inefficient NIZK; this could be any NIZK. And this is fine because the encryption's running time is strictly smaller than the circuit size, so the statement being proved is small: we don't need a compact or prover-efficient NIZK here, and we can just plug in, say, the Groth-Ostrovsky-Sahai NIZK. And this is it. The verifier just checks this proof and decrypts the ciphertext, and if it equals 1, then he knows it was a valid proof. And yet again, this can be viewed as a generic compiler: if you have a prover-inefficient NIZK, you can add an LFE and get a prover-efficient NIZK. OK, so this is our conclusion slide. Thank you for listening. Questions? OK, let's thank the speaker again.
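The whole compiler can be sketched end to end under the same toy assumptions: a plaintext "LFE" ciphertext, an abstract inner NIZK, and a square relation R that I made up for illustration.

```python
import hashlib

def compress(desc: bytes) -> bytes:
    return hashlib.sha256(desc).digest()

def lfe_enc(digest, m):
    return (digest, m)        # toy: the ciphertext is the plaintext itself

def lfe_dec(C, ct):
    return C(ct[1])

R = lambda x, w: 1 if x == w * w else 0    # toy relation: x is a square
C_R = lambda m: R(*m)
digest = compress(b"C_R")                  # done once, placed in the CRS

def prove(x, w):
    ct = lfe_enc(digest, (x, w))           # fast: sublinear in |C_R| for real LFE
    inner_pi = "pi: ct is well-formed"     # any prover-inefficient NIZK works,
    return ct, inner_pi                    # since the statement is small

def verify(x, proof):
    ct, inner_pi = proof
    return inner_pi.startswith("pi") and lfe_dec(C_R, ct) == 1

assert verify(9, prove(9, 3))
assert not verify(8, prove(8, 3))
```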