Hi everyone, thank you for coming to the talk. I'm Nai-Hui Chia from QuICS at the University of Maryland. Today I'm going to talk about classical verification of quantum computation with an efficient verifier. This is joint work with Kai-Min Chung from Academia Sinica and Takashi Yamakawa from NTT Secure Platform Laboratories. In the near future, we believe that quantum computers are very unlikely to be portable. We won't have a quantum iPhone, laptop, or PC soon, because a quantum computer needs a very large space to house it, and it also needs a very stable environment to keep the qubits coherent. So we cannot carry one with us everywhere. Now the question is: suppose I have a local device with no quantum power; how do I carry out a quantum task using this local device? For example, if I am given a quantum circuit C classically, that is, a classical description of how to arrange the gates, together with an input x, how do I compute C(x) from my local device? The answer is easy: you just ask a quantum computer for help. For instance, IBM has a quantum cloud platform; you design your circuit C and your input x, send C and x to the platform, and the IBM quantum computer answers for you, say with z = C(x). This seems very good, and we don't need any quantum resources in our iPhone or laptop. However, how can you verify that the answer z is indeed equal to C(x)? For example, if the server is not honest but is a cheating server, it may just give arbitrary answers to your questions. How can you verify that the answer you get is correct and not an arbitrary wrong one? This is a big question, and people have tried to address it. So, stating the question again: can the client verify that z = C(x)? In answering it, we also want to satisfy three requirements.
First, we want the verifier to have no quantum power, because, as I said, in the near term our local device will not have quantum power. Second, we want the prover to be efficient, meaning it does not need to spend much more time than computing C(x) itself. Third, we also want the verifier to be efficient. A first approach: suppose I just want to compute some circuit C deciding a problem in BQP. Because BQP ⊆ PSPACE = IP (interactive proofs), there is an IP protocol that lets you verify the answer C(x). This sounds good, but the prover in the IP protocol is an all-powerful prover, so the prover in this protocol may not be efficient. This fails the second requirement, that the prover be efficient. Many researchers have tried to address this. In one line of work, Aharonov et al. showed that if your local device has certain limited quantum resources, not a full-powered quantum computer, but just the ability to generate certain quantum states or apply certain quantum gates, then you can verify the answer with an efficient verifier. Another line of research considers the multi-prover setting, started by Reichardt et al. and continued by a number of follow-up works. They show that if you have two provers who share EPR pairs but cannot communicate with each other, then you can have an interactive proof with these two provers. However, all of these approaches either need some quantum resources on the local device, or need more than one prover together with the guarantee that the provers do not communicate. Then in 2018, in Mahadev's seminal work on CVQC, she showed that there is a four-message CVQC protocol for decision problems, that is, for deciding whether x is in a BQP language, based on the quantum LWE assumption.
The quantum LWE assumption simply means that the learning with errors problem is hard for quantum computers. She showed that the soundness error of her protocol is 1/2 plus negligible, and the completeness error is negligible. The running times of the verifier and the prover are both polynomial in T, where T is the time to evaluate the circuit. This looks very good, because the verifier needs no quantum power and the prover is efficient; it runs in time polynomial in T. However, the verifier here is not that efficient, because it also needs to run in time polynomial in T, whereas in classical delegation of computation we always want the verifier to run in time polylogarithmic in T. That means the verifier itself only needs to do a very small amount of computation to check what it wants. So in this work, we give a CVQC protocol with an efficient verifier. To achieve this, we first prove that the parallel repetition of Mahadev's protocol has negligible soundness error. Then we obtain a two-round CVQC protocol in the quantum random oracle model by applying the Fiat-Shamir transform to the parallel-repeated version of Mahadev's protocol; here the quantum Fiat-Shamir transform is due to Don, Fehr, Majenz, and Schaffner in 2019. Finally, we get a two-round CVQC protocol with a polylog-time verifier in the common reference string model. Let me go through this again. We start from the four-message protocol with constant soundness error; this is Mahadev's protocol. Then we do a parallel repetition, which gives a four-message protocol with negligible soundness error. Then we apply the quantum Fiat-Shamir transform, which gives a two-message protocol with negligible soundness error. Finally, we move to the common reference string model and get a protocol with negligible soundness error and a polylog-time verifier. Because of the time constraint, we cannot go through all the details.
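To give a feel for the Fiat-Shamir step, here is a toy classical sketch (my own illustration, not the construction from the paper): the prover derives the client's third-message challenge bits itself by hashing the transcript so far, which collapses the four-message protocol into two messages.

```python
import hashlib

def fs_challenges(pk: bytes, y: bytes, m: int) -> list[int]:
    """Derive m challenge bits (0 = test round, 1 = Hadamard round)
    from the transcript (pk, y) by hashing, instead of a verifier message."""
    digest = hashlib.sha256(pk + y).digest()  # 32 bytes -> up to 256 bits
    assert m <= 8 * len(digest)
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(m)]
```

The prover then sends the commitments together with the answers for `fs_challenges(pk, y, m)` in one shot, and the verifier recomputes the same hash, so no interaction is needed for the challenges. (In the actual result the hash is modeled as a quantum random oracle.)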
In this talk, I will focus on the first part: the parallel repetition of Mahadev's protocol with negligible soundness error. This has been independently proved by Alagic et al. in 2020 with a different proof, and their work has also been accepted to TCC 2020, so if you are interested you can go check their video. They also have some additional results on a different flavor of parallel repetition, which is outside our scope. Before we go into the technical details, let me briefly review Mahadev's CVQC protocol. In her protocol, the client first generates a secret key and a public key, and sends the public key to the prover. After receiving the public key, the prover generates a commitment y and an internal state ψ, and sends y to the verifier. After receiving y, the client samples a challenge bit c ∈ {0, 1}. For c = 0 it runs a test round and for c = 1 a Hadamard round; I will not specify what the test and Hadamard rounds do, just remember that each value of c corresponds to a different challenge. After receiving the challenge bit, the prover sends back an answer a, and the client decides to accept or reject based on all the messages it received. There is a lemma in Mahadev's paper: for any x ∉ L, if a quantum polynomial-time cheating prover passes the test round with probability 1 − negligible, then it is rejected in the Hadamard round with probability 1 − negligible, assuming the quantum hardness of LWE. This implies that we can divide the prover's space into S0 and S1 for ψ. That is, after the prover sends its second message, we can divide its internal space into two subspaces S0 and S1, such that S0 consists of all states that are accepted with high probability in the test round.
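The four-message flow just reviewed can be sketched as a toy skeleton. This is purely an illustration: `ToyClient`, `ToyProver`, and their trivial checks are placeholders I made up, not Mahadev's actual keygen/commit/check algorithms.

```python
import random

class ToyClient:
    """Placeholder client; keygen/challenge/check stand in for the real ones."""
    def keygen(self):
        return ("toy-sk", "toy-pk")
    def challenge(self):
        return random.randrange(2)      # 0 = test round, 1 = Hadamard round
    def check(self, sk, y, c, a):
        return a == (c, y)              # toy consistency check only

class ToyProver:
    """Placeholder honest prover."""
    def commit(self, pk):
        self.y = "toy-commitment"
        return self.y
    def respond(self, c):
        return (c, self.y)

def run_protocol(client, prover) -> bool:
    sk, pk = client.keygen()            # message 1: client -> prover (public key)
    y = prover.commit(pk)               # message 2: prover -> client (commitment)
    c = client.challenge()              # message 3: challenge bit
    a = prover.respond(c)               # message 4: prover's answer
    return client.check(sk, y, c, a)    # accept or reject
```

An honest run, `run_protocol(ToyClient(), ToyProver())`, accepts; all the cryptographic content of the real protocol lives inside the placeholder algorithms.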
By this lemma, any state in S0 will also be rejected with high probability in the Hadamard round. And S1 consists of all states that are rejected with high probability in the test round. So now, given a state in S_b, it will be rejected with high probability in either the Hadamard round or the test round. This simply follows from the previous statement: if the state is in S0, it is rejected with high probability in the Hadamard round; if it is in S1, it is rejected with high probability in the test round. Given this decomposition, it is immediate that if we choose c from {0, 1} uniformly at random, the soundness error is 1/2 + negl(n). Now we try to do parallel repetition of Mahadev's protocol by dividing the prover's space in the same way. The theorem we want is: if we do an m-fold parallel repetition of Mahadev's CVQC protocol, the soundness error is reduced to 2^(−m) + negl(n). The repeated protocol is just this: the client first generates m different secret-key/public-key pairs and sends all m public keys to the prover in parallel; the prover sends all m commitments in parallel; then the client sends all the challenges in parallel, and so on. We can still divide the space into S0 and S1 for each iteration: for iteration 1, we divide into S_{0,1} and S_{1,1}; for iteration 2, into S_{0,2} and S_{1,2}; and so on. And we can still say that if ψ lies in some S_{b,i}, it will be rejected with high probability in either the test round or the Hadamard round. Now let us first consider an ideal case, which is very simple: the prover's strategy for the i-th iteration depends only on ψ. That is, it depends only on its internal state ψ and not on the challenge bits it receives, c1 through cm.
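As a sanity check on the 2^(−m) target, here is a toy classical estimate (my own sketch, not part of the proof): a cheating prover that, in each coordinate, can only survive one of the two challenge types, fixed before the challenges arrive, passes all m coordinates with probability 2^(−m).

```python
import random

def cheat_pass_prob(m: int, trials: int = 200_000, seed: int = 0) -> float:
    """Estimate how often a cheating prover passes all m parallel coordinates,
    assuming (as in the S0/S1 split) that in each coordinate it can survive
    only one of the two challenge types, committed before the challenges."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        survivable = [rng.randrange(2) for _ in range(m)]   # what it can answer
        challenges = [rng.randrange(2) for _ in range(m)]   # verifier's coins
        wins += survivable == challenges
    return wins / trials
```

For m = 1 this estimate is close to 1/2, matching the single-copy soundness error, and for m = 4 it is close to 1/16. The hard part of the real proof is justifying this independence picture against an entangled quantum prover.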
This ideal case is of course not the real case, but for simplicity let us consider it first. In this case we can decompose ψ into ψ1 through ψm plus ψm⊥. How do we do that? We first divide ψ into ψ1⊥ and ψ1, corresponding to the components in S_{0,1} and in S_{1,1}, based on which challenge is chosen in that coordinate. Then we divide ψ1⊥ further into ψ2⊥ and ψ2, using S_{0,2} and S_{1,2}. We keep doing this until we get ψm and ψm⊥. Here ψi⊥ denotes the component that passes challenges c1 through ci, and ψi denotes the component that passes challenges c1 through c(i−1) but fails at ci. And it is straightforward that the expected squared norm of ψm⊥, the component that passes all the challenges c1 through cm, is at most 2^(−m). Then we are done: we obtain the parallel repetition theorem in this ideal case, just by using S0 and S1 to keep dividing the state into subspaces. However, the ideal case is not the real case, because in the real case the prover's strategy in coordinate i can depend on c_{−i}, that is, on c1 through c(i−1) and c(i+1) through cm, and this breaks the analysis: when the prover computes its answer a_i, that answer can depend on all the challenge bits it received in the third message. Our second idea is: if we cannot get rid of c_{−i}, we just include it in the calculation, by averaging over all c_{−i}. How? We define S_{b,i} with respect to the prover's internal state, averaged over c_{−i}: S_{0,i} consists of the states that pass the test round in coordinate i with probability at least γ, on average over c_{−i}.
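Written out, the ideal-case decomposition and bound read as follows (my paraphrase of the statement in the talk):

```latex
\[
  |\psi\rangle \;=\; \sum_{i=1}^{m} |\psi_i\rangle \;+\; |\psi_m^{\perp}\rangle ,
\]
where $|\psi_i\rangle$ passes $c_1,\dots,c_{i-1}$ but fails at $c_i$, and
$|\psi_m^{\perp}\rangle$ passes all of $c_1,\dots,c_m$.  Since no state can
survive both values of a challenge except with negligible amplitude, each
splitting step halves the surviving norm on average over the random challenge
bit, so
\[
  \mathbb{E}_{c_1,\dots,c_m}\!\left[\, \big\| |\psi_m^{\perp}\rangle \big\|^2 \,\right]
  \;\le\; 2^{-m} + \mathrm{negl}(n).
\]
```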
Here γ is some inverse-polynomial probability, and S_{1,i} consists of all the states that pass the test round with probability smaller than or equal to γ. For states in S_{0,i}, we can amplify the passing probability to 1 − negligible by amplification; this technique is by Marriott and Watrous in 2005 and by Nagaj et al. in 2009. Because we have amplified the probability to 1 − negligible, Mahadev's lemma, that passing the test round with high probability implies being rejected in the Hadamard round with high probability, gives us that a state in S_{0,i} is accepted in the Hadamard round with only negligible probability, for all c. So we get a very good property for the first part. For the second part, S_{1,i}: for any fixed c_{−i}, the probability of acceptance is at most 2^(m−1) · γ, for all c. You may think this is not good enough, because γ is inverse-polynomial and m could be log n, so this bound may only be inverse-polynomial. However, it is good enough for our analysis, because we just want to show that for every inverse-polynomial probability, if we choose γ small enough, the prover cannot pass the test with that inverse-polynomial probability. So it seems we are almost there, but there is one more thing we need to prove: the projectors onto S_{0,i} and S_{1,i} must be efficiently implementable. Why do we need this? Because in the previous analysis we kept using Mahadev's lemma, and that lemma requires the prover to be a quantum polynomial-time prover. It means that all the states we consider in our analysis need to be efficiently generable in order to have the property we keep using, that a state passing the test round with high probability is rejected in the Hadamard round with high probability.
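Incidentally, the 2^(m−1)·γ bound used above is just an averaging argument. Writing p(c_{−i}) for the acceptance probability given the other coordinates' challenges:

```latex
\[
  \frac{1}{2^{m-1}} \sum_{c_{-i}} p(c_{-i}) \;\le\; \gamma
  \quad\Longrightarrow\quad
  p(c_{-i}^{*}) \;\le\; \sum_{c_{-i}} p(c_{-i}) \;\le\; 2^{m-1}\,\gamma
  \quad \text{for every fixed } c_{-i}^{*},
\]
```

since each term in the sum is non-negative, a single term can never exceed the whole sum.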
So we need the states to be efficient, and hence we need the projectors to be efficient. How do we implement S_{0,i} and S_{1,i}? We observe that the passing probability p corresponds to an eigenvalue e^(iθ) of a certain efficiently implementable operator. So the idea is to do phase estimation: we compute p by phase estimation, compare it with γ, and then uncompute p. This way we can implement the projectors, at least approximately. Of course, as I mentioned, this introduces some inverse-polynomial error, which could break our analysis. So our final tweak is to use a random threshold for γ: we pick many candidate thresholds, say T different thresholds, and choose γ among them at random. With that technique we can bypass the error. Now it is time for the summary. We give a CVQC protocol with an efficient verifier. First, the parallel repetition of Mahadev's protocol has negligible soundness error, decreasing exponentially in m. Then we obtain a two-round CVQC protocol in the quantum random oracle model. And finally, we obtain a two-round CVQC protocol with a polylog-time verifier in the common reference string plus quantum random oracle model; we achieve the efficient verifier by combining all of these ingredients. There are some open questions. First, our CVQC protocol is based on a lot of assumptions: in addition to LWE, we also need the quantum random oracle model and the common reference string model. Can we make the assumptions weaker? Second, there is a protocol for the verification of quantum sampling problems, and that protocol, I think, is not efficient; making it efficient is another interesting problem. Also, can we improve the efficiency of our protocol? Thank you for listening.
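As a footnote to the random-threshold trick: here is a small classical sketch (my own illustration; the even spacing and parameter names are assumptions, not the paper's choices). If phase estimation returns p up to additive error ε, and consecutive candidate thresholds are more than 2ε apart, then at most one candidate is ambiguous for any true p, so a uniformly random choice of threshold is "bad" with probability at most 1/T.

```python
import random

def pick_threshold(gamma_max: float, t: int, rng: random.Random) -> float:
    """Pick one of t evenly spaced candidate thresholds in (0, gamma_max]."""
    candidates = [gamma_max * (k + 1) / t for k in range(t)]
    return rng.choice(candidates)

def threshold_is_bad(gamma: float, p_true: float, eps: float) -> bool:
    """A threshold is 'bad' if a +/- eps estimate of p_true may land on
    either side of it, making the comparison unreliable."""
    return abs(p_true - gamma) <= eps
```

With spacing gamma_max/t greater than 2·eps, the intervals of 'bad' values around the candidates are disjoint, so any fixed p_true hits at most one of them.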