Hello, my name is Orestis Chardouvelis and I'm really happy to be presenting this joint work with Giulio Malavolta, titled "The Round Complexity of Quantum Zero Knowledge." To begin, I'll give some introduction to what zero knowledge is and our contribution to its round complexity. A zero-knowledge protocol probably needs no introduction: it's an interactive protocol where a prover can prove the truth of a statement to a verifier while revealing nothing beyond that. So say we have a prover who wants to prove that x is in the language, and assume he has a witness w; we want the verifier to learn nothing about the witness. Now, a very well-studied problem concerning zero knowledge is the number of messages exchanged in the protocol, or in other words, the round complexity of the protocol. This is of high interest in the cryptographic community, and it has actually been shown that any NP statement can be proven in zero knowledge with just four rounds of interaction. For completeness, I also want to mention a recent result that achieves zero knowledge in just three rounds, although it is not post-quantum secure; this is going to be relevant later. Now, I've been talking about zero knowledge for NP statements. NP is the complexity class containing all decision problems for which, if the answer is yes, there is a proof verifiable in polynomial time by a deterministic Turing machine. But if we look at this complexity zoo figure, we notice there are more complexity classes, and we can consider the quantum analogue of NP, which is QMA, or Quantum Merlin-Arthur: the class containing all decision problems for which, if the answer is yes, there is a polynomial-size quantum proof that can convince a quantum verifier with very high probability in polynomial time. So an immediate question is: do we have zero knowledge for QMA statements? And the answer is yes, we do.
Quantum zero knowledge is pretty much the same notion, but now the statement is quantum, so x is a QMA statement, the proof can be quantum, and the messages can also be quantum. In the quantum setting, the best prior result on round complexity is a constant-round protocol, shown in 2020 in this paper by Bitansky and Shmueli. What we wondered is whether QMA statements inherently require additional rounds of interaction; that's what we focused on in this paper, and we showed that this is not the case. So here are our results. First, we construct a two-round statistical witness-indistinguishable argument for QMA, which I'm going to define later. Then, using this, we compile it into a fully-fledged zero-knowledge argument in just four rounds, again achieving statistical zero knowledge. Finally, we also moved to the timing model. The timing model just assumes that parties can measure the lapse of time and use this in the protocol, and there we construct two-round zero-knowledge arguments, both computational and statistical depending on the assumptions. In this talk I'm going to focus on the first two results. But before getting into any constructions, I want to define SBSH commitments, because they're a tool we use a lot in this paper. Most of you are familiar with commitment schemes: we have a sender and a receiver, and in the commit phase the sender commits to a specific value that he can later reveal to the receiver. What we want from a commitment scheme is for it to be hiding, which means the message m remains hidden from the receiver during the commit phase; in other words, commitments to two different messages are indistinguishable to the receiver.
We also want the commitment scheme to be binding, which means that after the sender commits to a message m, he can only open the commitment to this message m, and cannot trick the receiver by switching the message. Usually, commitment schemes are either statistically hiding and computationally binding, or computationally hiding and statistically binding. But what we use are sometimes-binding statistically-hiding, or SBSH, commitments, introduced in this paper here, where the commit phase takes three rounds of interaction. As their name suggests, they are statistically hiding, which means commitments to two different messages m0 and m1 are statistically indistinguishable to the receiver. But they are also sometimes binding. What does that mean? It means that with a very small, negligible probability, the commitment is perfectly binding. And if that's the case, that is, with this very small probability the commitment is binding, then we can extract the message m with a straight-line extraction. That's pretty much it, and we're going to see how we use them as we get into our first construction: witness-indistinguishable arguments. Now, what is witness indistinguishability? It's a weaker notion of zero knowledge. We have the same setting: a prover and a verifier, and the prover wants to prove some statement x is in the language. But instead of trying to achieve zero knowledge, what we want is for the verifier to not be able to tell the difference between two valid witnesses used by the prover. And this is exactly what we construct for QMA. But before getting to the construction, I also have to introduce another weaker notion of zero knowledge, or maybe a relaxation of zero knowledge, which is sigma protocols. Sigma protocols are just like zero knowledge, but we have to assume an honest verifier. A sigma protocol usually looks like this.
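To make the "sometimes binding" idea concrete, here is a toy Python sketch of my own (purely illustrative, with no real cryptographic security): in a real SBSH scheme the receiver's first message silently selects a binding mode with negligible probability, and exactly in that mode a straight-line extractor can read off the message. Here the mode is passed explicitly just to show the two behaviours.

```python
import hashlib
import secrets

def commit(message: bytes, binding_mode: bool):
    """Toy model of an SBSH commitment (illustrative only, NOT secure).

    Returns the commitment, the opening, and what a straight-line
    extractor would see: the message in binding mode, nothing otherwise."""
    r = secrets.token_bytes(16)
    if binding_mode:
        # Binding mode: the commitment determines the message, so a
        # straight-line extractor can recover it.
        com = hashlib.sha256(r + message).hexdigest()
        extracted = message
    else:
        # Hiding mode: the commitment is fresh randomness, statistically
        # independent of the message, so nothing can be extracted.
        com = secrets.token_bytes(32).hex()
        extracted = None
    return com, (r, message), extracted
```

The point of the two branches is exactly the trade-off in the talk: the hiding branch gives statistical hiding, while the rare binding branch is what makes straight-line extraction possible.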
We have three messages. First, the prover sends a commitment alpha; then the second message is a challenge beta from the verifier; and then the prover sends his last message gamma, which is a response to the challenge, and that persuades the verifier. The instance we see here is a quantum sigma protocol, which was introduced in this paper by Broadbent and Grilo. A very important property that we're going to use is that the computation of beta and gamma is completely classical; only the first message is quantum. Now, in their paper they prove computational zero knowledge and statistical soundness. But if you remember, when I introduced our results I was talking about statistical WI and statistical zero knowledge. So in order to do that, we have to extend this sigma protocol to statistical zero knowledge as well, and of course computational soundness. Now, this is not a trivial change, and I don't want to get into too much detail, but in their protocol they use statistically binding commitments, and those commitments cannot be used to prove statistical zero knowledge. So what we do is replace their commitments with SBSH commitments. These commitments are statistically hiding, so we're able to get statistical zero knowledge. And they're also sometimes binding; since they're sometimes binding, in the soundness proof we have to set the parameters of the other primitives accordingly to still keep the protocol secure. And in order to get computational soundness with negligible error, we can perform a parallel repetition of the protocol, which keeps it secure while giving us the computational soundness we need. Having that, I also need to introduce some additional tools before getting into the actual construction. First of all, we need a pseudorandom function. A pseudorandom function is a primitive where a generator produces a key,
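As a concrete point of reference for the alpha/beta/gamma flow, here is a minimal classical sigma protocol in Python: a Schnorr-style proof of knowledge of a discrete logarithm. The quantum protocol of Broadbent and Grilo is of course a different construction, but it shares this three-message shape, and here too beta and gamma are computed entirely classically. The parameters below are toy sizes, not secure ones.

```python
import secrets

# Toy group: integers mod a Mersenne prime (NOT secure parameters).
p = 2**61 - 1
g = 3
x = secrets.randbelow(p - 1)   # the prover's witness
h = pow(g, x, p)               # public statement: "I know x with g^x = h"

# Message 1 (alpha): prover commits to fresh randomness k.
k = secrets.randbelow(p - 1)
alpha = pow(g, k, p)

# Message 2 (beta): the verifier's random challenge.
beta = secrets.randbelow(p - 1)

# Message 3 (gamma): the prover's response, a purely classical computation.
gamma = (k + beta * x) % (p - 1)

# Verification: accept iff g^gamma = alpha * h^beta (mod p).
assert pow(g, gamma, p) == (alpha * pow(h, beta, p)) % p
```

Note that soundness here relies on beta being unpredictable when alpha is fixed, which is exactly the structure the round-collapsing trick later has to preserve.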
And then, given the key, we can feed an input into a function that produces a seemingly random string. We also use an FHE, or fully homomorphic encryption, scheme. With this scheme, we're able to perform computations on encrypted data. So let's say we have two parties, Alice and Bob: Alice can send encrypted messages, and then Bob can compute evaluations on the message under encryption without having to first decrypt it. And then finally, we also use a witness-indistinguishable argument for NP which is non-interactive, so we can do it in one message; and for NP, these exist. Now for the construction. Using all of this, together with the sigma protocol with statistical zero knowledge, we finally arrive at our construction, where the main idea is to use the fully homomorphic encryption scheme to round-collapse the sigma protocol. So here, as you can see, instead of three messages we have two. In the first message, the verifier sends an encrypted pseudorandom function key. And then the prover sends both of his messages at the same time. Given this key, he can sample a challenge using the PRF, but of course he has to do that under encryption, homomorphically, because he must not learn the challenge. And then, after he's done that, he can compute his first message alpha, and he can also compute his response gamma using the challenge; but again, in order to do that, he has to do it homomorphically, under encryption. That's why we use the FHE. After he computes both of his messages, he sends them back to the verifier. Now, the verifier has the secret key of the encryption scheme, so he can decrypt and get gamma. But he can also recompute beta, and then he has the whole transcript and can check that the sigma protocol transcript is valid. In this protocol, we repeat these messages twice, and the prover sends a WI proof that shows that at least one of the two instances was computed correctly.
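To illustrate the round-collapsing idea, here is a toy Python sketch of my own: the Schnorr-style sigma protocol stands in for the quantum one, a mock "FHE" just tags plaintexts (so Eval runs in the clear, modeling the message flow but none of the security), and HMAC plays the role of the PRF. Everything here is a hypothetical toy, not the actual construction.

```python
import hashlib
import hmac
import secrets

# Underlying toy sigma protocol: knowledge of x with g^x = h mod p.
p, g = 2**61 - 1, 3
x = secrets.randbelow(p - 1)
h = pow(g, x, p)

def prf(key: bytes, inp: bytes) -> int:
    """PRF stand-in instantiated with HMAC-SHA256."""
    return int.from_bytes(hmac.new(key, inp, hashlib.sha256).digest(), "big") % (p - 1)

class MockFHE:
    """Stand-in for FHE: 'ciphertexts' are tagged plaintexts, so Eval
    just runs the function in the clear. Captures the flow, not security."""
    def enc(self, m): return ("ct", m)
    def dec(self, c): return c[1]
    def eval(self, f, c): return ("ct", f(c[1]))

fhe = MockFHE()

# Round 1: the verifier sends an encrypted PRF key.
prf_key = secrets.token_bytes(16)
ct_key = fhe.enc(prf_key)

# Round 2: the prover computes BOTH remaining sigma messages, under
# encryption, deriving the challenge beta via the PRF without learning it.
def prover_step(key):
    k = secrets.randbelow(p - 1)
    alpha = pow(g, k, p)                       # sigma message 1
    beta = prf(key, alpha.to_bytes(8, "big"))  # challenge, sampled via PRF
    gamma = (k + beta * x) % (p - 1)           # sigma message 3
    return (alpha, gamma)

ct_response = fhe.eval(prover_step, ct_key)

# The verifier decrypts, recomputes beta from alpha, and checks the transcript.
alpha, gamma = fhe.dec(ct_response)
beta = prf(prf_key, alpha.to_bytes(8, "big"))
assert pow(g, gamma, p) == (alpha * pow(h, beta, p)) % p
```

The design point is that the verifier never sends beta at all: it is deterministically recomputable from the PRF key and alpha, which is what lets the two prover messages be merged into one round.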
Now, a WI for NP is sufficient here because beta and gamma, the last two messages of the sigma protocol, are both completely classical, so this is enough. And that's pretty much it. We also include SBSH commitments to the randomness used in the homomorphic evaluation, which is necessary for the soundness proof. I'm not going to get into too much detail, but again, since we're using SBSH commitments, we can still achieve statistical WI, though of course we get computational soundness. And I do want to point out that since we have computational soundness, this is a WI argument, not a proof; and since this is a main building block in all of our following constructions, we get zero-knowledge arguments and not proofs. Also, since we use SBSH commitments — if you remember, I said there's an exponential loss because the commitments are binding only with very small probability — we have to account for it in the other primitives by setting the parameters accordingly. So in order to still have secure protocols, we have to assume quasi-polynomial hardness of LWE. And again, since we use this protocol as a building block for all our next constructions, all of our constructions have to assume quasi-polynomial hardness of LWE. Now I'm going to move on to zero-knowledge protocols. Before going to any constructions, I want to go over existing simulation techniques, both classical and quantum, and see which of them we can use for our protocols. First, I'll go over the classical setting in a bit more detail. This is how a zero-knowledge protocol looks in the classical setting. Usually, it's pretty similar to a sigma protocol: the prover sends a commitment alpha that contains the proof, then the verifier sends a challenge, challenging the prover, and the prover partially opens his commitment, revealing part of the proof and thus persuading the verifier.
Of course, unlike in the sigma protocol, the verifier is not always honest, so we have another message at the beginning where the verifier commits to the challenge, so that he cannot change it after seeing the prover's commitment. Now, since the prover cannot see the challenge beforehand, he really needs to know the proof, and that's how we usually achieve soundness. But how do we prove zero knowledge? To do that, we have to use a simulator. A simulator is someone who doesn't know the proof, and we show that the output of the simulated interaction is indistinguishable from the real interaction. So we show that the simulator can still persuade the verifier without knowing the proof, and thus we have zero knowledge. A very common way of proving zero knowledge is rewinding. Here is how rewinding works. The verifier, in the first message, sends a commitment to the challenge as before, and the simulator, who doesn't know the proof, has to send a commitment to some garbage, say the all-zero string. But the important thing here is that the simulator knows the inner state of the verifier, so he can take a snapshot of that inner state here; then, after the verifier opens his commitment to the challenge, the simulator knows the challenge. Of course, it's a little too late, but since he took a snapshot, he can rewind back to the point where he took it, and then repeat the protocol knowing the verifier's challenge. Now he doesn't have to commit to some garbage string; he can commit to a value that persuades the verifier and tricks him, thus producing a transcript that's indistinguishable from a real one. Now, what happens in the quantum setting? In the quantum setting, the most well-known theorem is the no-cloning theorem, which says that there is no quantum procedure that, given one quantum state, copies it and outputs two identical quantum states.
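The classical rewinding step can be sketched in a few lines of Python (toy sketch, with names of my own choosing). The crucial operation is `copy.deepcopy`, which clones the verifier's classical inner state; it is exactly this copy that the no-cloning theorem forbids when the state is quantum.

```python
import copy
import hashlib
import secrets

class ClassicalVerifier:
    """Toy verifier that commits to its challenge in the first message."""
    def __init__(self):
        self.challenge = secrets.randbelow(2)   # the hidden challenge bit
        self.rand = secrets.token_bytes(16)     # commitment randomness
    def commit_to_challenge(self) -> str:
        return hashlib.sha256(self.rand + bytes([self.challenge])).hexdigest()
    def open_challenge(self):
        return self.challenge, self.rand

verifier = ClassicalVerifier()
snapshot = copy.deepcopy(verifier)   # classical state can be cloned freely
verifier.commit_to_challenge()
beta, _ = verifier.open_challenge()  # the simulator learns beta, too late...
verifier = snapshot                  # ...so it rewinds to the snapshot...
# ...and replays the protocol, now committing to an answer tailored to beta.
assert verifier.open_challenge()[0] == beta
```

For a quantum verifier the `deepcopy` line has no analogue, which is precisely why the quantum setting needs the different extraction technique described next.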
In the classical setting, the most basic thing you can do is copy a string: you can perform computations on the string and still have the original copy intact. In the quantum setting you cannot do that; a quantum procedure changes the quantum state. So why is that important for us? Well, if we try to do the same thing as in the classical setting, we now have a quantum verifier, so the inner state of the verifier is also quantum, and the simulator cannot take a snapshot. So even after the verifier reveals the challenge, the simulator cannot rewind, and it seems like he's stuck. Now, of course, there are simulation techniques that don't use rewinding, but they still directly or indirectly rely on cloning, and we cannot just extend them to the quantum setting. So we need another solution, and one solution was given by these two papers, by Ananth and La Placa and by Bitansky and Shmueli, who construct a non-black-box extraction technique that gets around no-cloning. This extraction technique is the one we're also going to use in our protocol, so I'm going to present it now. But before I can present it, I need to introduce some necessary tools. The first one is a quantum fully homomorphic encryption scheme. I already talked about fully homomorphic encryption, and this is just its quantum analogue: it's pretty much the same notion, but instead of a classical message, we have a quantum state as our message. And of course, both parties can be quantum, and Bob can again perform homomorphic computations for any arbitrary unitary U; unitary matrices are what we use in the quantum setting to perform computations. So we have a quantum FHE scheme, and we also use a compute-and-compare program. A compute-and-compare program, or CC program, is parametrized by a function f, a string s, and a message z. Given an input x, this program checks if f(x) equals the string s, and if it does, it returns the message z; otherwise, we get nothing.
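The compute-and-compare functionality is simple enough to write down directly. Here is a toy Python version; a real obfuscator would additionally hide f, s and z from anyone holding the program, which this sketch does not attempt (the hash-preimage example is my own illustration).

```python
import hashlib

def make_cc_program(f, s, z):
    """Compute-and-compare program CC[f, s, z]: on input x, output z
    iff f(x) == s, and nothing (None) otherwise. A real obfuscator
    would hide f, s and z; this only models the functionality."""
    def cc(x):
        return z if f(x) == s else None
    return cc

# Example: lock the message z under knowledge of a hash preimage of s.
f = lambda x: hashlib.sha256(x).hexdigest()
s = f(b"trapdoor")
cc = make_cc_program(f, s, b"secret message")
assert cc(b"trapdoor") == b"secret message"
assert cc(b"wrong guess") is None
```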
Now, by using an obfuscator, we can obfuscate this program into an obfuscated program whose implementation is hidden from any adversary. Finally, we also use a CDS, or conditional disclosure of secrets, protocol, where in this case Bob can send a message to Alice only if a certain statement is correct. So let's say we have a statement conditioned on a witness w that Alice has: Alice first sends a message that contains her witness, and then Bob sends his message m, but Alice can only recover it if the statement is correct. Now, all of these tools will make more sense once I show you the protocol, which we call the homomorphic trapdoor technique. Here is the main idea. We have a sender and a receiver; in this protocol the sender is going to be the verifier and the receiver the prover, just so there's no confusion. Here's how it works. The sender first sends an encryption of some trapdoor, which is an arbitrary string, and also sends a compute-and-compare obfuscated program that, on input an encryption of some string s, returns the message along with the secret key. So we have the encryption of some trapdoor, and the CC program that, given the right input, returns the message. Then the receiver sends a guess y, guessing the trapdoor that was encrypted here. Finally, the sender sends a conditional disclosure of secrets message such that if the receiver guessed correctly, he gets s; otherwise he gets nothing. And if he gets s, he can encrypt it, feed it into the CC obfuscated program, and get the message. Of course, this protocol is hiding: the receiver is not going to be able to get the trapdoor, because it's encrypted, and by the security of the fully homomorphic encryption scheme, he cannot guess correctly.
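Here is a heavily simplified Python sketch of this message flow, under loudly-stated assumptions: the mock FHE's ciphertexts are just tagged plaintexts, the CDS is modeled as a plain equality check, and the CC program is an ordinary function. All names are hypothetical and nothing here is secure; the sketch only traces who holds what, including the extractor's trick on the last lines.

```python
import secrets

class MockFHE:
    """Stand-in for quantum FHE: ciphertexts are tagged plaintexts, so
    homomorphic evaluation would just run in the clear. This models the
    message flow of the technique, none of its security."""
    def __init__(self):
        self.sk = secrets.token_bytes(16)
    def enc(self, m):
        return ("ct", self.sk, m)
    def dec(self, c):
        assert c[0] == "ct" and c[1] == self.sk
        return c[2]

# Sender (verifier) setup: trapdoor td, unlock string s, hidden message m.
fhe = MockFHE()
td = secrets.token_bytes(16)
s = secrets.token_bytes(16)
m = b"the hidden message"

ct_td = fhe.enc(td)                      # message 1a: encryption of td

def cc_program(ct):                      # message 1b: "obfuscated" CC program
    """On input an encryption of s, return (m, sk); otherwise nothing."""
    return (m, fhe.sk) if fhe.dec(ct) == s else None

def cds(guess):                          # message 3: toy CDS
    """Release s only if the receiver's guess equals the trapdoor."""
    return s if guess == td else None

# An honest receiver only sees ct_td, so any guess of td fails (w.h.p.).
assert cds(secrets.token_bytes(16)) is None

# The extractor runs the interaction homomorphically, so its in-the-clear
# guess of td corresponds to the ciphertext ct_td it already holds; we
# model the CDS succeeding under encryption by calling it with td itself.
ct_s = fhe.enc(cds(td))
assert cc_program(ct_s) == (m, fhe.sk)
```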
But here's what happens if we use an extractor. The extractor has access to the inner state of the sender, and thus he can perform this step homomorphically, which means that instead of guessing the trapdoor, he has to guess the encryption of the trapdoor, because everything is going to be under encryption. And he does have the encryption of the trapdoor, because that's what the sender's first message was. So he's going to guess correctly, and then the sender is going to send the CDS, which is going to be satisfied, so he gets s. Now, as I said, everything is under encryption, so he's really going to get the encryption of s, but that's exactly what he needs for the CC program here: he gives it the encryption of s and gets the message and the secret key. And that's the main idea of the extraction technique. Now, Bitansky and Shmueli, in their paper, use this extraction, and by combining it with a classical zero-knowledge argument and the quantum sigma protocol that we saw before, they achieve constant-round computational zero knowledge for QMA. What we do in our construction is use our statistical WI argument instead, and combining it with the extraction, we get zero knowledge for QMA in just four rounds. And as a bonus, we also achieve statistical zero knowledge. Now we have all the necessary building blocks, and I can get to the actual construction. In our protocol, we have a prover and a verifier, and in the first message, the verifier sends a commitment to the string zero using randomness r, and then the two parties perform the homomorphic trapdoor technique that we saw before, with r as the message m. In that technique, remember, the verifier is the sender and the prover the receiver, and this takes three rounds of interaction. And in the final round, the fourth round, the prover uses our WI argument to prove that either he knows r or x is in the language.
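The soundness logic of that OR statement can be sketched in a few lines of Python (an illustrative toy of my own; the commitment, the elided trapdoor rounds, and the boolean flag standing in for "x is in the language" are all hypothetical simplifications).

```python
import hashlib
import secrets

# Round 1: the verifier commits to the string 0 with randomness r.
r = secrets.token_bytes(16)
com = hashlib.sha256(r + b"\x00").hexdigest()

# Rounds 1-3: homomorphic trapdoor technique with message m = r (elided).

# Round 4: the prover gives a WI argument for the OR statement below.
def or_statement(witness_r, witness_w, x_in_language: bool) -> bool:
    """Accept if the prover knows r (the trapdoor clause, usable only by
    the simulator after extraction) OR holds a witness for x."""
    knows_r = (witness_r is not None and
               hashlib.sha256(witness_r + b"\x00").hexdigest() == com)
    return knows_r or (witness_w is not None and x_in_language)

assert or_statement(None, b"w", True)   # honest prover: uses the real witness
assert or_statement(r, None, False)     # simulator: uses the extracted r
assert not or_statement(None, None, False)  # a cheating prover has neither
```

Witness indistinguishability is what hides which clause was used, so the simulated and real transcripts look the same.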
Of course, the prover cannot know r, because in the homomorphic trapdoor technique r was hidden, so he has to prove that x is in the language, and thus we get soundness. On the other hand, the simulator can use the extractor to extract r here, and then prove the first clause of the WI argument. You can see that we also include an SBSH commitment to the guess y of the prover, to rule out mauling attacks, because it can be extracted with small but noticeable probability. It's a bit more technical, but the main reason I mention it is to point out that, again, we use an SBSH commitment in order to be able to achieve this. Now, this protocol works great for verifiers that are non-aborting and explainable. A non-aborting verifier is a verifier that doesn't just abort during the protocol. An explainable verifier is a verifier whose messages can always be explained: that doesn't mean the verifier is always honest, but it means his messages are in the support of the honest algorithms. What we need to do is extend this protocol to fully malicious verifiers, and I'm going to start with aborting verifiers. Just for a bit of intuition: the reason our simulator fails is that in the homomorphic trapdoor technique, as you remember, everything runs homomorphically. And at the end, when the extractor got the message, he also got the secret key, because at the end he held his own inner state under encryption, and he had to decrypt it in order to produce a transcript indistinguishable from the real interaction. So the main point here is that, since everything runs homomorphically, if the sender aborts, the extractor gets stuck under the encryption. Of course, if he knew from the beginning that the sender would abort, then he wouldn't do any homomorphic evaluations and would still be able to simulate.
So what we do in our protocol is follow the template of BS20 and construct two simulators, one for the aborting case and one for the non-aborting case. A combined simulator then guesses which of the two should be used, using Watrous's quantum rewinding lemma. This is basically a rewinding technique for the quantum setting. Of course, it isn't sufficient in general for achieving negligible simulation error, which is why we couldn't just use it the way we use rewinding in the classical setting, but for this case it is sufficient. And that's how we extend to aborting verifiers. Now, for non-explainable verifiers, the standard way is to add a proof from the verifier to the prover that his messages are computed honestly. And if you notice, all of the verifier's messages so far have been classical, so zero knowledge for NP is enough. But in order to still have four rounds, we need a three-round zero-knowledge protocol, and in order to have statistical zero knowledge, we need a zero-knowledge proof from the verifier to the prover. If we add that proof, then we also cover non-explainable verifiers, and then we're done. Now, with everything I've said so far, there are a couple of obstacles. The first one is pretty obvious: I said that we need a three-round post-quantum zero-knowledge proof, which doesn't exist. I mentioned in the beginning, if you remember, that there does exist a three-round zero-knowledge proof, but it is not post-quantum, meaning it is not secure against quantum adversaries, and of course we cannot use something like that in a quantum protocol. Another problem is that, if you remember, in the homomorphic trapdoor technique we used CDS protocols. But these only provide computational security for the receiver, and since we want statistical zero knowledge, we want statistical security for the receiver.
Now, both of these obstacles can be circumvented by constructing a three-round, sometimes-extractable, statistically-receiver-private oblivious transfer protocol, and that's exactly what we do. I'm not going to get into the details of the construction, but by constructing this oblivious transfer protocol, we get solutions for both of these problems. The only thing I want to say is that this is a sometimes-extractable protocol, which is much the same notion as the sometimes-binding property of SBSH commitments, and of course we use SBSH commitments to build it. And after building it, we can use it to get a two-round post-quantum CDS protocol achieving statistical security for the receiver. But it's still not at all clear how we can get the three-round zero-knowledge proof, and the answer is that we don't. What we do instead is settle for a weaker notion of zero knowledge that suffices for our protocol, which we define as sometimes-simulatable zero knowledge. Now, sometimes-simulatable zero knowledge is, again, the same idea as the sometimes-binding property of SBSH commitments, because simulation is possible only with some very small probability: the simulator is straight-line and runs in polynomial time, but has only an exponentially small success probability. And just like with SBSH commitments, there is an exponential loss that we have to account for, so when using these protocols we have to set the security parameters of the other primitives accordingly. For this we need quasi-polynomial hardness of LWE, but we already assume that, so it's fine for us. Again, if you want more details about the constructions, you can check out the full version of the paper. And that's pretty much it. To conclude, I'm going to sum up our results. Assuming quasi-polynomial hardness of LWE and using SBSH commitments as a tool, we are able to, first of all, construct a two-round statistical witness-indistinguishable argument for QMA.
Then, using that, we construct a four-round statistical zero-knowledge argument for QMA. And finally, even though I didn't go over the constructions in this presentation, we get two-round zero-knowledge arguments in the timing model, both computational and statistical depending on the assumptions, and we do that without any trusted setup. I've also included the link to the full version of the paper if anyone is interested in seeing the constructions in more detail. Thank you very much for your attention. And that's pretty much it. Bye-bye.