Hey everyone, thanks for coming. I'm presenting joint work with Willy, Ron, Daniel, and David, and I'm going to be talking about reusable designated-verifier non-interactive zero-knowledge arguments. I'd like to thank David for giving me his slides, so the usual disclaimer: all the good things are his, all the bad things are mine. And on that note, let's get started.

So I'm going to be talking about non-interactive zero-knowledge (NIZK) arguments. Just to remind you, this is a setting where a prover wants to convince a computationally bounded verifier that some statement x is in some NP language L, and he's allowed to send only a single message: the proof is one message. It should satisfy completeness, meaning a correct proof of a true statement convinces the verifier; soundness, meaning even a maliciously generated proof of a false statement does not convince the verifier; and zero knowledge, namely that a correct proof of a true statement can be simulated in polynomial time given only the instance, where the simulator is not given any witness for x being in the language. This is a non-interactive zero-knowledge proof, or an argument if the soundness is only computational.

Just to remind you of the state of the art on this object: we can construct it if the prover and verifier have access to a random oracle, and we can also construct it if they have access to a common reference string, under certain computational assumptions. It's been known for a long time that you can do this under the quadratic residuosity or factoring assumptions, or from bilinear maps. And just this year, it was shown that variants of the learning with errors (LWE) assumption, first with a circular security assumption thrown in and then without, also suffice to construct NIZKs. So this is the state of the art on NIZKs, and I have just a couple of remarks on it.
The constructions are all fairly ad hoc and different from each other, and there are notable computational assumptions that we don't know how to use to construct NIZKs at the moment. The clearest examples are Diffie-Hellman-style assumptions, along with the learning parity with noise (LPN) assumption.

So in contrast to the state of affairs on NIZKs that I just described, what we ask in this work is: is there a general framework for constructing NIZKs? And the setting in which we consider this question is that of a designated-verifier NIZK. This is a relaxation of NIZK in which we make a stronger setup assumption: instead of just generating a common reference string, we now allow the setup algorithm to generate the common reference string along with a secret verification key that only the verifier gets. This key helps the verifier to verify proofs, and soundness is only guaranteed to hold when the prover doesn't know the secret verification key.

I'd also like to point out that there's some subtlety here. Because the verifier has a secret, there's a question of whether or not you allow the prover to interact with the verifier multiple times, since this kind of interaction might give something away about the verifier's secret. In this work, we consider a soundness notion in which the prover gets query access to the verifier, and therefore we have to rule out attacks that learn the secret key by querying the verifier.

Okay, so this is a kind of NIZK; you can define it, but it's not immediately clear why you should care. Two things I want to say. First, it turns out these kinds of NIZKs are still useful for some of the classical applications of NIZKs. The clearest example is the CPA-to-CCA security transformation for public-key encryption, which still works with this kind of NIZK.
Additionally, depending on your setup assumptions and the kind of MPC that you're doing, this also might suffice to boost semi-honest security to malicious security for MPC; I'll touch upon that a little more later. In addition, from a theoretical perspective, this is a relaxation of a NIZK, so it seems like it should be easier to construct. You can ask: is it really easier to construct? Do we know how to do it in any way other than just constructing a NIZK? Until very recently, the answer was no: every construction of a designated-verifier NIZK that we had was effectively just a publicly verifiable NIZK. At the last Eurocrypt, things changed. There were constructions from the CDH and DDH assumptions, which we don't know how to use to get non-interactive zero knowledge in the publicly verifiable setting. But again, these are ad hoc constructions, and in this work we're interested in how generically we can construct this object.

Finally, I want to say that if you relax even further and only ask soundness to hold when the prover doesn't have any oracle access to the verifier, then the problem becomes easy: you can construct that kind of designated-verifier NIZK from any public-key encryption scheme, which is a significantly more general assumption than these other objects. So if you relax far enough the problem becomes easy, but it's not clear how difficult constructing this object should be in the specific setting that turns out to be useful, for example, in these transformations.

So that's where we are, that's the question, and that's why we might care. In this work, we give a generic construction of designated-verifier NIZKs satisfying this reusable soundness from a form of attribute-based encryption (ABE). I'll define exactly what kind of ABE we use later, but I'd like to note that it is a single-key ABE and not a collusion-resistant ABE, so this is not a difficult object to construct.
There are many constructions of the objects that we use. This ABE satisfies a CCA-like security property, and the main takeaway point that I want you all to get is that through this transformation, we're able to apply some recent techniques for constructing CCA-secure encryption in order to get designated-verifier NIZKs as well. On an intuitive level, a CCA-secure encryption is sort of like a standard public-key encryption along with a designated-verifier NIZK proof of some consistency property of the ciphertext. What we're able to do in this work is use the techniques for constructing CCA-secure encryption to get a general-purpose designated-verifier NIZK, as opposed to, morally, a designated-verifier NIZK for a specific ciphertext language. So this is the main takeaway.

In a little more detail, we define an object that we call weak function-hiding single-key ABE, and we show it implies designated-verifier NIZKs. In fact, assuming public-key encryption, we show the two objects are equivalent, so this is actually a complete reformulation of the problem. On top of that, we note that a standard LWE-based attribute-based encryption scheme almost satisfies the security definition that we care about, and a slight tweak to the standard scheme gets us the security property we want. So once we've proved the equivalence, this gives us an easy way to construct designated-verifier NIZKs from LWE.

You could of course ask for a lot more. As I was saying, single-key ABE is not too hard to construct, so you could ask whether this function-hiding single-key ABE can be constructed from any public-key encryption. That would resolve a big open problem, and that would be super nice.
We weren't able to do that, but we were able to show that this kind of ABE can be constructed from public-key encryption along with any secret-key encryption scheme satisfying a weak form of KDM, or circular, security. If this looks familiar, it's because this is the same pair of assumptions that was used yesterday by Kitagawa, Matsuda, and Tanaka to construct CCA-secure public-key encryption. So we show that the same assumptions suffice to get full-fledged designated-verifier NIZKs. There are instantiations of these two building blocks from all of the usual suspects in terms of public-key assumptions, and so this gives a unified approach for getting designated-verifier NIZKs from all the assumptions you would probably think about. Among these, LPN stands out: we didn't know how to construct designated-verifier NIZKs from LPN at all, and this approach gives the first such construction. So these are our results on designated-verifier NIZKs.

I do want to talk about one of our extensions, to a sort of malicious setting. One problem you might have with designated-verifier NIZKs is that they make a very strong setup assumption: not only do you have to set up a common reference string, but you have this strange setup in which you have to give the verifier a secret key and make sure the prover doesn't know anything about it. One way to mitigate this problem is to change the model. At Eurocrypt this past year, Quach, Rothblum, and Wichs defined the malicious designated-verifier NIZK. The model is that the prover and verifier have a common random string; the verifier is allowed to pick his own secret key and send some public key to the prover; and then the prover should be able to prove arbitrary statements given just the common random string and the public key that the verifier picked. Zero knowledge should hold against malicious verifiers here, and soundness should hold as before.
So this is a strictly stronger object than a designated-verifier NIZK. In fact, by another name, this thing is just a two-message zero-knowledge argument in the common random string model where we want to be able to reuse the first message across many executions. Our results for designated-verifier NIZKs translate entirely to this malicious setting: all you have to do is take every instance of encryption from before and replace it by an oblivious transfer (OT). I am, of course, skipping over a lot of details, but that's the main point. So we get a generic construction of this malicious designated-verifier NIZK from a two-message OT along with KDM-secure secret-key encryption, and again, this can be instantiated from the usual concrete assumptions. In this setting, both the construction from CDH and the one from LPN are entirely new; there were no constructions of this object from either assumption before. Finally, I want to mention that this malicious designated-verifier NIZK has connections to reusable non-interactive secure computation, and in particular it allows you to do reusable non-interactive secure computation from any of these assumptions; I'll refer you to the paper for more details.

That's all I want to say on our results. For the rest of the time, I want to tell you a little bit about how we prove them. In particular, I want to focus on the first arrow: I think of the main contribution as being this arrow, together with the formulation of the ABE that turns out to be useful, so I'm going to tell you about both.

To start, I want to recall the designated-verifier NIZK from any public-key encryption scheme that satisfies the weaker form of soundness, one-time soundness. This is due to Pass, shelat, and Vaikuntanathan, and it works like this: we're going to start with a three-message protocol and compress it into a one-message protocol. The three-message protocol should have a one-bit challenge.
Just one note for the purposes of this talk: we can compile more generally than we do in the paper, but for exposition, let's assume a single-bit challenge. So the three-message protocol is some commitment, a one-bit challenge from the verifier, and then some response from the prover given the challenge. We're going to compress this into a one-message protocol, and here's what we do. We give the prover two public keys for a public-key encryption scheme, and we give the verifier a random bit b along with exactly one of the two corresponding secret keys, namely the secret key for bit b. Why would we do this? Well, we're going to use it to simulate the three-message protocol with challenge bit b: the verifier will be able to obtain a transcript of the three-message protocol using challenge bit b. The way we implement this is that we have the prover send over a commitment along with two ciphertexts: ciphertext zero is an encryption of the response for challenge zero, and ciphertext one is an encryption of the response for challenge one, under the two different public keys. Then the verifier is able to decrypt exactly one of the two ciphertexts, the one corresponding to b. This gives him a transcript (a, b, z) of the three-message protocol with challenge bit b, and he accepts if that three-message transcript looks good.

This gives you soundness one half; you can repeat many times in parallel to get negligible soundness error, but I'm going to forget that for the rest of the talk and just think about the soundness-one-half case. So here it is. You can show, one, that it satisfies zero knowledge, and two, that it satisfies soundness if you don't give the prover query access to the verifier. The reason soundness holds is essentially that the prover doesn't know what the bit b is.
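To make the data flow concrete, here is a toy Python sketch of this compiler (my own illustration, not from the talk): a Schnorr-style sigma protocol with a one-bit challenge over a deliberately tiny, insecure group, and an XOR-pad "encryption" standing in for the public-key scheme. All parameters and names here are hypothetical placeholders.

```python
import hashlib
import secrets

# Tiny (insecure, illustration-only) group: G = 4 generates the
# order-Q subgroup mod P, where P = 2Q + 1 is a safe prime.
P, Q, G = 2579, 1289, 4

def toy_enc(key: bytes, m: int) -> int:
    """Placeholder for public-key encryption: XOR with a hash-derived pad."""
    return m ^ int.from_bytes(hashlib.sha256(key).digest(), "big")

toy_dec = toy_enc  # the XOR pad is its own inverse

def setup():
    """CRS = two 'public keys'; verifier's secret = (b, sk_b).
    In this toy scheme the public key and secret key coincide."""
    keys = [secrets.token_bytes(16), secrets.token_bytes(16)]
    b = secrets.randbelow(2)
    return keys, (b, keys[b])

def prove(crs, h, w):
    """Prove knowledge of w with h = G^w: send the sigma-protocol
    commitment plus encryptions of the responses for BOTH challenges."""
    r = secrets.randbelow(Q)
    a = pow(G, r, P)                          # first sigma message
    z = [(r + c * w) % Q for c in (0, 1)]     # response per challenge bit
    return a, [toy_enc(crs[c], z[c]) for c in (0, 1)]

def verify(vk, h, proof):
    """Decrypt only the response for challenge b, then run the sigma check."""
    b, sk = vk
    a, cts = proof
    z = toy_dec(sk, cts[b])
    return pow(G, z, P) == (a * pow(h, b, P)) % P
```

A cheating prover has to commit before seeing which of the two encrypted responses will actually be checked, which is where the soundness-one-half comes from; note, though, that the verifier's accept/reject answers all depend on the same fixed bit b.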
It's just a random bit, and so you can reduce from the soundness of the three-message protocol. However, this protocol does not satisfy the kind of soundness that we want, and the reason is that by querying the verifier, you can actually learn the bit b. Once you learn b, soundness is entirely compromised: if you think about the three-message case, this is clear, because if you know in advance what the challenge is going to be, then you don't have soundness.

Okay, so what's the issue? I want to say that, fundamentally, the problem is that the verifier has this fixed random bit b that he's stuck using for every proof the prover sends over. The way we get around it is that we set things up so that there isn't a single random bit b, but instead an independent random bit b for every possible statement the prover could be trying to prove. And intuitively, at least, it seems reasonable that if you could do something like that, then you might get reusable soundness.

The way we implement this is with a form of attribute-based encryption, which, very briefly, is a way to encrypt a message m under an attribute x so that a decryptor holding a secret key associated to a function f on the attribute space learns the message if f(x) = 1, and learns nothing (in fact you have a form of semantic security) if f(x) = 0. So this is attribute-based encryption, and here's how we're going to use it to compile the sigma protocol. Again, recall that we somehow want a different bit b for every statement. The way we do this is to give the verifier the seed s of a PRF, and in our heads we think that the bit b corresponding to statement x should be the PRF evaluated on x. That's what we want; we want to implement this somehow.
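As a quick illustration (again mine, not from the talk), the statement-specific bit can be one bit of any PRF applied to the statement; here HMAC-SHA256 is an arbitrary stand-in for the PRF:

```python
import hashlib
import hmac
import secrets

def prf_bit(seed: bytes, statement: bytes) -> int:
    """Statement-specific challenge bit: one bit of PRF_seed(statement)."""
    return hmac.new(seed, statement, hashlib.sha256).digest()[0] & 1

seed = secrets.token_bytes(16)   # the verifier's secret PRF seed
# The bit is fixed per statement (so the verifier is consistent across
# many proofs of the same x), but looks random and independent across
# different statements to anyone who doesn't know the seed.
b = prf_bit(seed, b"statement x")
assert b in (0, 1) and b == prf_bit(seed, b"statement x")
```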
So we give the prover the master public key of the ABE scheme, and we again have the prover send a commitment along with two ciphertexts, but the ciphertexts are now encrypted under different attributes: ciphertext zero is encrypted under attribute (x, 0), where x is the statement, and ciphertext one is encrypted under attribute (x, 1). The verifier has an ABE secret key that allows him to decrypt and obtain the message if the attribute pair (x, b) respects the PRF: if b = PRF(x), then the verifier can decrypt, and if not, he can't.

That's the basic idea, and right off the bat you can show, just assuming the ABE scheme is semantically secure, that this gives you zero knowledge. Soundness is trickier. Intuitively, we want to say: hey, the bit for each x is coming from a PRF, so it looks random and independent of the others, and as long as the prover doesn't query the verifier on the statement x, one could hope that PRF security would guarantee the prover knows nothing about the bit b for x, and therefore that soundness should hold on that statement. That's the hope.

This doesn't actually work, and in particular, for specific ABE schemes you can write down attacks that break soundness. Fundamentally, the problem is that query access to the verifier effectively means query access to an ABE decryption oracle, and that decryption oracle is using a secret key that knows the PRF seed, so in principle it could leak information about the seed. This won't happen if you feed the verification oracle well-formed ABE ciphertexts, because then the correctness of ABE decryption tells you that nothing about the seed is leaked.
But if you feed it malformed ABE ciphertexts, it's actually possible to mount attacks that leak the seed of the PRF and thereby compromise soundness. That's the problem, and that's why we don't get a construction from any old attribute-based encryption. What we do instead is define a stronger security notion for the ABE that rules out this particular attack, and then we show that this notion is enough to make the soundness analysis go through: if you have this security, then the protocol I wrote down is sound.

Intuitively, the notion is effectively what I just said. We have a security game in which an adversary has query access to a decryption oracle, and the decryption oracle is using an ABE secret key associated to a function f. You can ask: what can this adversary learn? Well, if you query on a ciphertext with attribute x, you're definitely going to learn whether f(x) equals one or zero, because if you get a valid answer, you know f(x) = 1. The security property we ask for is that, effectively, this is all you learn about the function. We want to assume that this entire interaction can be simulated knowing only the input-output behavior of the function f, along with the master secret key of the ABE scheme. The thing we care about not compromising here is the implementation of the function, and in particular the input-output behavior of the function on attributes that you haven't queried. That's the kind of security we're looking for.
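Schematically, the game might be sketched as below (a toy illustration of the shape of the definition, not the paper's formalism): the real decryption oracle holds the key for f, and weak function hiding asks that its answers be simulatable given only f's input-output behavior. Here the "ABE" ciphertext is literally an (attribute, message) pair with no security at all, and f is the PRF-respecting predicate from the construction.

```python
import hashlib
import hmac
import secrets

def prf_bit(seed: bytes, x: bytes) -> int:
    """One bit of PRF_seed(x); HMAC-SHA256 as an illustrative PRF."""
    return hmac.new(seed, x, hashlib.sha256).digest()[0] & 1

def abe_dec(sk_f, ct):
    """Throwaway 'ABE' decryption: ct is just (attribute, message)."""
    attr, msg = ct
    return msg if sk_f(attr) == 1 else None

def real_oracle(sk_f):
    """What the adversary actually queries: decryption under the key for f."""
    return lambda ct: abe_dec(sk_f, ct)

def simulated_oracle(f_oracle):
    """Weak function hiding: the same answers must be computable from
    oracle access to f alone (plus, in the actual definition, the master
    secret key, which this toy scheme doesn't need)."""
    def oracle(ct):
        attr, msg = ct
        return msg if f_oracle(attr) == 1 else None
    return oracle

# f from the construction: accept attribute (x, b) iff b = PRF_seed(x).
seed = secrets.token_bytes(16)
f = lambda attr: 1 if attr[1] == prf_bit(seed, attr[0]) else 0
```

The point of the simulation requirement is that the oracle's answers reveal f's values only on queried attributes, so PRF security can still be invoked on any statement the prover never queried.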
We then show that if you have an ABE scheme satisfying this form of security, the soundness analysis goes through: the function hiding of the ABE that I just described, together with PRF security and the soundness of the three-message protocol, gives you soundness of the one-message protocol I've written down for you here. So that's a simplified variant of our main construction.

Then, as I said before, we can implement this kind of ABE from LWE using basically a known scheme. In addition, we can construct this kind of ABE quite generically from public-key encryption and KDM-secure secret-key encryption. Very briefly, the way this is done is that we first construct plain single-key ABE from public-key encryption (this is old), and then we amplify the security of the ABE so that it satisfies the kind of function hiding we care about, after throwing in the KDM-secure secret-key encryption. This is very similar to the constructions of CCA-secure encryption given by Koppula and Waters and by Kitagawa, Matsuda, and Tanaka, both in this conference.

So that's all I want to say on our results. Just to conclude: our question was whether there is a generic framework for constructing designated-verifier NIZKs, and we make progress on this question by reformulating the problem in terms of constructing a certain kind of attribute-based encryption. In terms of open questions, the big one is: can we construct this form of ABE from any public-key encryption? That would resolve the big open problem. You might say, well, look, you're using a CCA-like security property, so why would you expect this to help? But I'm happy to talk offline about some thoughts on why this reformulation might still be useful. More modestly, you could ask whether you can construct this kind of ABE from CCA-secure public-key encryption.
That would be like a converse to the Naor-Yung theorem, saying that CCA security and designated-verifier NIZKs are really the same object. And finally, I want to ask whether we can do anything based on one-way functions: maybe not constructing a designated-verifier NIZK, but some weak form of NIZK satisfying reusable soundness from one-way functions. I think the techniques from this paper might also be useful for answering that sort of question. That's it, thank you.

Questions? Okay, yeah, if you have a question you can ask him offline. Let's thank the speaker again.