Alright, so our next talk is Constant-Round Blind Classical Verification of Quantum Sampling. This is work by Kai-Min Chung, Yi Lee, Han-Hsuan Lin, and Xiaodi Wu, and Yi Lee will give the talk.

Good morning, everyone. Let's get started. Sorry, this is my first time giving an in-person talk. Anyway, let's actually get started. Okay, so, motivations. Let me talk a bit about what this work is about and what problem we are trying to solve. Say you want to run some quantum computations, but you don't have a quantum computer. What people usually do these days is send their program and their input to a quantum cloud server somewhere; the cloud server does the computation for you and sends the output back. As crypto researchers, of course, we see a couple of things wrong with this picture. The first issue is that you are suddenly sending all of your input data to a cloud server, and who knows what the server is going to do with your data. So the first property we want is blindness: the server learns nothing about the input. But there is also a second issue, which we call verifiability: we want to know whether the server is doing the computation correctly, or whether it is just doing some strange stuff and sending us nonsense. And of course, we are not the first ones to ask this question about verifiability.
In fact, more than 15 years ago, it was asked by Gottesman: can a classical computer verify the result of a quantum computation through interaction? There has been a lot of research since then, and many earlier works solved this problem in easier settings. For example, they might allow the client to have some quantum capabilities, or they might use multiple servers, where the client can query each server and check whether the results are consistent. But the big breakthrough came only four years ago, from Mahadev. In that work, she answered this question in the setting we care about: a single fully classical client and a single quantum server. And the answer she gives is yes, for decision problems. So now, of course, we want to ask: is that the end of the story? Let me remind you what decision problems are. In a decision problem, you have a BQP language, you have an instance x, and you want to decide whether it is a yes instance or a no instance. In other words, the problem has a deterministic correct answer. So the natural next question is: what about randomized outputs? A lot of the quantum algorithms people study today do have randomized outputs. For example, quantum mechanical simulations, the quantum supremacy experiments from a couple of years ago (random circuit sampling), and many quantum machine learning and optimization algorithms. All of these have randomized outputs, in which case decision problems are not a good model for these algorithms. So we propose to consider the classical verification of quantum sampling problems. Now let me actually talk a bit about our model. Classical verification of quantum sampling is a fairly natural generalization of the previous model.
So this is a two-party protocol between a classical client, which is always honest, and a quantum server, which could be malicious. The classical client has a quantum circuit C and a corresponding input x. This circuit C can be an arbitrary quantum circuit; in particular, it can have randomized output. The only restriction is that the output of the circuit has to be classical, because otherwise the classical client would not be able to understand quantum outputs. The two parties then run some protocol and exchange classical messages, and at the end the verifier chooses to accept or reject. Furthermore, if the verifier accepts, it also outputs y, which is supposed to be the output of C on x. Let me now talk about the security properties. For completeness, the first part is clear: if the prover is honest, it gets accepted with high probability. But there is a technical subtlety here, because, as I mentioned, the circuit can have randomized outputs. So we treat the computation C(x) as a distribution over possible outputs, and we want the output y of our protocol to follow the same distribution as if C(x) were computed honestly. Soundness is also a bit tricky, but it follows a similar idea. For soundness, we quantify over all inverse polynomial errors ε(n), where n is the size of the verifier's input; we do not achieve negligible error here, for reasons I will explain a bit more later.
So for all inverse polynomial errors ε: conditioned on the prover having a noticeable chance of getting accepted, and conditioned on the verifier accepting, the output is ε-close to the ideal target output distribution. Here, ε-closeness is between distributions, and you can define it either via statistical distance or via computational indistinguishability; both definitions make sense, but in our work we only achieve computational indistinguishability. Okay, so that is our model and our security definition. Next, let me talk a bit about the challenges involved in constructing a protocol in this model. Why is this problem more difficult than decision problems? This question is especially worth asking because in the classical setting there is no difference. The issue is that a typical trick used in the classical setting does not work in the quantum setting; the known classical reduction just does not generalize. Let me show you what it is. Classically, say you have a randomized function F, given by a classical circuit. We can simply write the randomness out explicitly as an extra input, in which case F becomes deterministic. Then you can treat every output bit as a decision problem, run your decision-problem protocol bit by bit, and you are done. The issue is that this does not work in the quantum setting, because quantum programs do not take an explicit random tape: the randomness comes from measuring quantum states.
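The classical trick just described can be sketched in a few lines of Python. This is a toy illustration of the reduction, not anything from the paper; the function `f` is made up for the example.

```python
import hashlib

def f(x: bytes, r: bytes) -> list:
    """A toy randomized classical function: with the random tape r
    written out as an explicit input, f becomes deterministic."""
    digest = hashlib.sha256(x + r).digest()
    return [b & 1 for b in digest[:4]]  # four output bits

def verify_bit_by_bit(x: bytes, r: bytes, claimed: list) -> bool:
    """Once r is fixed, each output bit of f(x; r) defines a decision
    problem ("is bit i equal to 1?"), so a decision-problem
    verification protocol can be run once per output bit."""
    truth = f(x, r)
    return len(claimed) == len(truth) and all(
        c == t for c, t in zip(claimed, truth))
```

It is precisely this bit-by-bit reduction that has no quantum analogue.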
So the randomness is inherent, and there is no way to de-randomize it, or at least it is unclear how to de-randomize a quantum program. That is issue number one. Issue number two is a bit more technical, but it is a recurring problem and it is also the reason we only achieve inverse polynomial error, so I want to bring it up: amplification does not work. Let me show you what I mean by amplification. For decision problems, you have yes and no instances, and the standard definition says a yes instance is accepted with high probability and a no instance, of course, with low probability. It is a standard textbook fact that these two numbers are arbitrary: by repeating the protocol, you can push the acceptance probability arbitrarily close to one for yes instances and arbitrarily close to zero for no instances. But here is the question: what if you have a sampling problem instead of a decision problem? If you have a sampling protocol with, say, constant error one-third, it is unclear how to run the protocol repeatedly, as a black box, and reduce that error to ε. So these are the two main challenges. Now let me tell you our main contribution, our main theorem statement. I am not sure why I am in this information-theoretic session, because our result is clearly not information-theoretic, but anyway: under the QLWE assumption, that is, assuming the learning with errors problem is hard for quantum computers.
We construct a classical verification of quantum sampling protocol that is blind, so the prover learns nothing; it has four messages, so it is constant round; it has negligible completeness error; and it has computational soundness, which, as I mentioned earlier, means the output of our protocol is computationally indistinguishable from the correct distribution. Okay, before I go to the technical overview, let me introduce a bit of related literature to put our result in context. This table is on classical verification of quantum computation, restricted to the setting of one fully classical client and one quantum server. We start with Mahadev's protocol: it is constant round, it has constant soundness error, and it is for decision problems. Then there are two follow-up works for decision problems. The first, by Gheorghiu and Vidick, achieves blindness and negligible error, but unfortunately takes polynomially many rounds, so it is not constant round. After that, there are two concurrent works that achieve constant round. And then there is our work, which I mentioned earlier: we are the first to address sampling problems, that is, classical verification of quantum sampling. Unfortunately, we only achieve one-over-poly error, but we do have constant round and blindness. And after our work, there is a follow-up work that actually builds on top of our constructions.
That work is actually set in the multi-party computation setting, but we can specialize it to the two-party setting with one classical party and one quantum party, in which case we can compare it with ours. If we do that, we get a protocol for pseudo-deterministic computations, which are a bit more general than decision problems but less general than sampling problems. On the other hand, their blindness definition is stronger than ours: it is malicious blindness, which I will not get into right now. Okay, so now let me give a bit of technical overview of how we construct our protocol. Let's first look at Mahadev's protocol, because our work uses constructions from it. Mahadev's work in turn builds on an earlier protocol by Morimae and Fitzsimons. That protocol is also a two-party protocol, similar to the setting we had before, but the difference, as you may see, is that the verifier is quantum: you have a quantum verifier and a quantum prover. It is a single-message protocol: the prover sends a quantum state, a bunch of qubits, to the verifier, and the verifier measures each of them in either the X or the Z basis. This is a protocol for BQP, for decision problems. Mahadev's main technical contribution can be seen as a protocol compiler going from this existing protocol to another protocol where the client is fully classical. As the name "measurement protocol" may imply, her construction allows the prover to keep the n-qubit state.
So instead of the prover sending the state to the verifier, the prover keeps it as an input, and the client just knows in which basis it wants to measure each qubit. The two parties then exchange four messages, all classical, so it is a four-message protocol, and at the end the verifier gets a measurement result of the state under its chosen bases. But there is a caveat: in this measurement protocol, the verifier gets its outcome only half of the time. Let me explain in a bit more detail what I mean by that. In the third message, the verifier chooses between two possible challenges, T or H. T we call the testing round and H the Hadamard round, but I am not going to go into the naming conventions for now. The challenge is chosen at random by flipping a fair coin. When the challenge is Hadamard, everything works as expected: the verifier gets both a flag, either accept or reject, and a measurement outcome m, and the guarantee is that if the flag is accept, then m is close to a measurement result of the prover's state. The issue is the testing round: in the testing round, the verifier still gets a flag, accept or reject, but it does not get a measurement outcome. Of course, this is not an issue for BQP, because in the BQP setting the verifier can accept the prover anyway.
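As a caricature, the verifier's view of one run of the measurement protocol looks like this. This is a toy stand-in with made-up placeholder values, not Mahadev's actual construction; it only illustrates the interface of a round.

```python
import random

def verifier_round(rng: random.Random):
    """One run from the verifier's point of view: a fair coin picks
    the challenge, Test ('T') or Hadamard ('H'). On 'H' the verifier
    gets a flag and a measurement outcome; on 'T' only a flag."""
    challenge = 'H' if rng.random() < 0.5 else 'T'
    flag = 'accept'  # honest prover: always accepted in this toy view
    outcome = rng.randrange(2) if challenge == 'H' else None
    return challenge, flag, outcome
```

In expectation, only half of the runs produce an outcome, which, as the talk explains next, is harmless for decision problems but a real problem for sampling.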
The verifier would just suffer a one-half soundness loss, because half the time the prover might send, say, some garbage state. But what about sampling? While this is okay for BQP, for decision problems, for sampling it is not good, because now we do not even get a measurement outcome half of the time. So these are the challenges, and let me talk about our strategy to overcome them. The first step, of course, is to generalize this protocol to handle sampling problems. This step is done by composing several known techniques, so I defer the details to our paper; I encourage you to look if you find it interesting. The actual fun part is the part on the right. In the Hadamard round, everything works as expected: you get a measurement outcome, feed it back into the protocol, and everything works out. But in the testing round, you do not get a measurement result. The natural solution, or at least one natural solution, is to run this protocol in parallel, many copies of it; then maybe one copy has a Hadamard round, and you get your measurement result that way. And that is basically close to what we did. In fact, we only need a single Hadamard round, and everything else is testing, because we only need one measurement output; having more does not help. So this is a kind of cut-and-choose protocol: we run many copies of the protocol in parallel, choose one random copy to get the measurement outcome from, and do testing on all the other ones.
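The cut-and-choose step can be sketched as follows. This is an illustrative fragment; the parameter `k` and the challenge labels are ours.

```python
import random

def assign_challenges(k: int, rng: random.Random) -> list:
    """Run k parallel copies of the measurement protocol: a single
    uniformly random copy gets the Hadamard challenge 'H' (and will
    supply the one measurement outcome we need); every other copy
    gets the testing challenge 'T'."""
    hadamard = rng.randrange(k)
    return ['H' if i == hadamard else 'T' for i in range(k)]
```

Intuitively, soundness rests on the fact that a cheating prover cannot predict which copy will end up being the Hadamard one.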
So the construction itself is quite natural; the challenging part is to analyze the protocol, because if you run many copies of a protocol at the same time, the prover can choose entangled strategies across those copies. We follow a work I mentioned earlier: we decompose the prover's internal state between the second and third messages based on which testing rounds the prover gets rejected or accepted in. The analysis, as you might imagine, gets quite involved, so again I defer the technical details to our paper. Another remark I want to make: while that work gives us a starting point for decomposing the state, there are two issues. Issue number one is that our construction is a bit different, because we choose only one Hadamard round, so the decomposition has to be changed. Issue number two is that we need to make the analysis more sophisticated, because that work was about decision problems, so it only had to reason about acceptance probabilities, whereas we need to reason about distributions being close to each other. So there is quite a bit we needed to add to that work. Lastly, blindness. To achieve blindness, we construct a generic blindness protocol compiler. What I mean is that we can apply our compiler to the protocol from the previous slide to make it blind. But not only that: the compiler works on any of the earlier works in this line of literature; we can apply it to any of those protocols and make them blind as well. The high-level idea is to just run the protocol under FHE, fully homomorphic encryption.
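The shape of such a blindness compiler can be sketched generically. Here `fhe` is a hypothetical interface (keygen/enc/dec/eval), not a real FHE library, and the toy scheme below provides no security at all; it only shows the message flow of running a protocol under encryption.

```python
def run_blindly(fhe, server_step, client_msgs):
    """Generic blindness-compiler sketch: the client sends only
    ciphertexts, the server evaluates its next-message function
    homomorphically, and the client decrypts each reply locally.
    The server never sees a plaintext message."""
    sk, pk = fhe.keygen()
    replies = []
    for m in client_msgs:
        ct = fhe.enc(pk, m)                       # client encrypts
        reply_ct = fhe.eval(pk, server_step, ct)  # server, under FHE
        replies.append(fhe.dec(sk, reply_ct))     # client decrypts
    return replies

class ToyFHE:
    """Placeholder 'FHE': just the interface, no security."""
    def keygen(self):           return ('sk', 'pk')
    def enc(self, pk, m):       return ('ct', m)
    def dec(self, sk, ct):      return ct[1]
    def eval(self, pk, f, ct):  return ('ct', f(ct[1]))
```

Replacing the toy scheme with an actual quantum-capable FHE scheme, and making it compatible with the verification protocol, is where the technical work discussed next lies.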
And of course, there are some technical issues if you try to do that. To start with, you probably want to use one of two quantum FHE schemes, either the one by Brakerski or the one by Mahadev, from a separate paper. You need the scheme to be compatible with the setting, and even so, there are some technical loose ends to tie up; again, I refer to our paper for the details. So that is basically our technical overview. Lastly, future directions. There are two questions we want to ask. The first question is, of course, about the inverse polynomial error from earlier: can we make the error negligible? We do not know. We have some starting points, but we do not know whether they work. Specifically, we know that negligible error is achieved in related settings, for example in verifiable quantum fully homomorphic encryption, or in multi-party quantum computation; but in those settings, current constructions all require the client, or the parties, to have at least some quantum capabilities, to store qubits and so on. The second question is whether we can construct a more general remote state preparation protocol. What I mean is that the current remote state preparation protocol only lets you prepare states chosen from a finite set of maybe ten different states. Right now our protocol has a classical output received by the client; we wonder whether we could also allow a quantum output received by the server. In that case it would be remote state preparation, and maybe we could prepare arbitrary states instead of choosing from a finite set of ten. And that is all I have. Thank you very much for your attention. Alright, thanks for the talk.
We have time for one or two quick questions. As a reminder, please come to the microphone so everyone online can hear.

Thank you for a nice talk. Just a very quick question: you mentioned quantum LWE. Is there a difference between the quantum version of this assumption and quantum LWE?

I think it's just the same thing. QLWE just says that learning with errors is hard for quantum computers to solve.

Right. So is it more like quantum access to the problem instance itself? For LWE, the instance is just a matrix and a vector, right? So on the input of that matrix and vector, you have a quantum algorithm trying to find the linear relation, like Ax = y plus error.

Yeah, but the input is classical, so there is no oracle or anything like that. I'm not sure what you mean by quantum access.

Okay. Thank you.

Alright, I had a quick question. You mentioned the difference between statistical error and computational indistinguishability. I was curious whether you could say a little more about what the barriers are. For example, if we removed the blindness requirement, can you get statistical, or are there still barriers?

Unfortunately, we don't know, because Mahadev's work, the first one on verifying BQP, is already based on LWE; basically the entire line of literature is based on LWE. And the blindness compiler, which is a more standalone component, uses homomorphic encryption, which is also LWE-based. So it's really unclear.

Alright, thanks. Alright, let's thank the speaker.