Okay, I'll be discussing this work called "Secure Quantum Computation with Classical Communication." This is a work about the notion of secure multi-party computation (MPC), which studies the setting where we have multiple parties, each with a private input x_i, that wish to compute some public circuit C over their private inputs by communicating with each other, all learning the output y. Okay, and so we don't want these private inputs to be leaked to other parties, and so for security, more formally, we say that any adversary that corrupts any subset of these parties (so, for example, party two and party three) won't learn anything about the honest parties' inputs x_1 or x_4 beyond what they could already have learned from just the output of the circuit. Okay, and this is a very well-studied notion; the study of MPC goes back to near the beginning of modern cryptography. Another notion that's been studied, at least now for a couple of decades, is the generalization of MPC to the setting of multi-party quantum computation, where every party can be quantum: maybe they each have quantum inputs, and they want to compute a quantum functionality over their inputs. Okay, and you can define security in the same way. But the starting point for this work is noticing that, so far, MPC has really been studied in one of these two settings: either every party is classical, or every party is quantum. But quantum technologies are seemingly quite difficult to construct, so for the time being it's going to be the case that there are only a few quantum computers out there in the world. Right. So a pretty natural question to ask is: can we achieve MPC, or any notion of it, when we have some mixture of classical and quantum parties? Right.
So can we do MPC for quantum functionalities where not every party has to be quantum? So, for example, you could consider the following notion, where we have one quantum server, party one, that's interacting with a bunch of classical clients, and they all want to come together to compute some quantum functionality over their private inputs. Okay. This is the type of notion that we study in this work. Security is again defined the same way: an adversary that corrupts any subset of parties, including collusions between clients and the server, won't learn anything about the honest parties' inputs. Okay. Right, so this is what this work studies, and the starting point is prior work on protocols that can be interpreted as basically single-client, single-server two-party protocols. So in this setting we have one client with a quantum circuit Q and a classical input x (so this client is classical), and we have a quantum server. They interact back and forth, and at the end of this interaction the client outputs Q(x). Okay, and in particular, I want to mention two protocols that were constructed a couple of years ago for this setting. So Mahadev in 2018 gave a protocol for quantum fully homomorphic encryption (QFHE), which basically achieves the cryptographic notion of blindness in this setting, meaning that a malicious server is not going to be able to learn the client's input x while interacting in this protocol. Okay. Another security notion that you could hope for is more of a correctness notion, which means that a malicious server cannot cause the client to output a false outcome.
And indeed, Mahadev in 2018 constructed a protocol called classical verification of quantum computation (CVQC) that achieves exactly this; I'll call it soundness: a malicious server cannot cause the client to output a wrong output y not equal to Q(x). So prior to this work, there exist these two different protocols in this single-client, single-server setting. Okay. But if we want to achieve the full notion of secure computation, or MPC, it really requires a protocol that achieves both blindness and soundness simultaneously: parties need to hide their inputs from other parties, and they need to be assured of the correctness of the output. Okay, so we need one protocol that combines both of these. So let's try to do that. First, let's look more closely at this blind QFHE protocol. Syntactically, it's only a two-message protocol: the client will choose a public key, encrypt their input x under the public key, and send it to the server; the server will evaluate the functionality under the encryption and then return the evaluated ciphertext back to the client, which the client can then decrypt. Okay, so that's syntactically what QFHE is. Syntactically, CVQC is the sound notion. Here I'll give an example of what a two-message protocol could look like, though in general it could be more than two messages. We have a client that constructs some public parameters based on the circuit and their input, and we have no notion of hiding for these public parameters; they could include in-the-clear descriptions of Q and x. Right. But what the server can do is run some prove algorithm that outputs the evaluated result Q(x), along with some proof of correctness, and so now the client can go and verify that this proof is correct.
If it is, then they will happily output Q(x), and otherwise they are going to abort. So this is syntactically what the CVQC protocol looks like. Recall that, as a first step, we just want to combine both protocols; we want to get a notion of both blindness and soundness at the same time. Okay, so there's a really natural way to try to do this: let's simply run the CVQC protocol under the hood of a quantum FHE. I'll call this a blind CVQC protocol. And what does this look like? The client just chooses some public parameters and encrypts them under the QFHE public key. Now the server can evaluate the proving algorithm under the QFHE and give the result back to the client. The client then just does two steps: first they decrypt the ciphertext to get Q(x) along with the proof, then they verify the proof. So this seems like a pretty reasonable protocol that achieves both blindness and soundness at the same time. Right. Okay. So let's try to use this protocol to construct what I mentioned I wanted to construct at the beginning of this talk, which is a notion of MPC between multiple classical clients and one quantum server. Okay. So we have three classical clients and one quantum server. A pretty natural idea now is to say: let's use classical MPC to emulate a single client interacting in this blind CVQC protocol that I just mentioned. Okay. So they're going to basically use classical MPC to become a single client that has, as input, the concatenation of the three private inputs of these parties. It looks like then they can interact with the server: they can encrypt these inputs in the blind CVQC protocol, the server can run the prove algorithm under QFHE, and then they can use classical MPC to decrypt and verify the output.
And then all these clients can either output the correct output or an abort. Okay. So, are we done? This first attempt basically says: let's just put QFHE and CVQC together and put classical MPC on top of it. Does this actually give us what we want, a protocol for secure MPC? It turns out that it does not; there's actually an attack on this protocol. Okay. To see this, let's say that all but one client are corrupted. So the server is corrupted, and they're colluding with some clients. Okay. Since the server is corrupted, what they can do is produce a false proof here, and they can do this under the QFHE. Under the QFHE, they're going to compute a valid proof (like an honest prover would compute it) or not, depending on the input x3. This is the honest party's input x3, and the server can operate on this input under the QFHE. So let's say they compute a valid proof if x3 is equal to zero, and an invalid proof if x3 is equal to one. What does this mean? It means that when the clients decrypt the output, they'll see the correct output Q(x1, x2, x3) if x3 is equal to zero, and on the other hand they'll see a ⊥, an abort, if x3 is equal to one. And recall that these clients can see this output. So this means that these adversarial clients can completely learn a bit of x3 just based on the output of this protocol. Okay. So this is an attack that completely breaks the security of the protocol, and more generally, any predicate on an honest party's input can be learned this way; this is basically what's called a selective abort attack. Okay. All right.
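To make the selective-abort leak concrete, here is a tiny plaintext stand-in for the attack. Everything here is a toy with hypothetical names: in the real protocol the server's branching happens homomorphically under the QFHE, but the observable accept/abort pattern is the same.

```python
def malicious_server_proof(x3: int) -> str:
    """The corrupted server branches (under QFHE, in reality) on the honest
    client's input bit: a valid proof if x3 == 0, a garbage proof otherwise."""
    return "valid" if x3 == 0 else "garbage"

def clients_verify_and_output(proof: str, result: str) -> str:
    """The clients decrypt and then verify: output the result, or abort."""
    return result if proof == "valid" else "ABORT"

# The colluding client observes only accept/abort, yet learns x3 exactly:
for x3 in (0, 1):
    out = clients_verify_and_output(malicious_server_proof(x3), "Q(x1,x2,x3)")
    leaked = 0 if out != "ABORT" else 1
    assert leaked == x3
```

The point is that the accept/abort signal itself is a one-bit channel keyed to the honest input, which is exactly what standard blindness does not rule out.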
So the takeaway is that if you just try to stitch together standard blindness and soundness, it's not going to be sufficient for achieving the full notion of MPC. Okay. In particular, what it really boils down to is that we need the notion of blindness to hold even if the server can observe whether the client accepted its proof. Okay. And note that the problem with what we just saw is that as soon as we allow the adversary, the prover, to see what the honest clients output, they can use that information to break privacy. Okay, so we need to explicitly prevent this from happening. And this sort of thing can be captured via the notion of what I call composable blind CVQC. This is basically a blind CVQC protocol that satisfies a particular ideal functionality. Okay. So the functionality is going to be parameterized by the public circuit Q; it's going to take a private input x from the client, and just a bit b from the server indicating whether the server is being honest or malicious. Okay. If the server is being honest, it'll just output Q(x) honestly back to the client. If the server is being malicious, it'll just abort. Okay. And so what does it mean to emulate, or satisfy, this ideal functionality? It means that if we have a protocol between a client and a server, a real-world protocol, where the client has some output y and the real server has some arbitrary output quantum state ψ, then this output is going to be indistinguishable from an execution in the idealized world, where all the client does is submit their input x to the ideal functionality.
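As a sketch, the ideal functionality just described fits in a few lines (hypothetical names; `Q` is any classical stand-in for the circuit):

```python
def ideal_blind_cvqc(Q, x, server_honest: bool):
    """Ideal functionality for composable blind CVQC, parameterized by Q.
    The client submits x; the simulator submits a single bit on behalf of
    the server. Crucially, that bit is chosen without ever seeing x, so
    whether the client aborts cannot depend on the client's input."""
    return Q(x) if server_honest else "ABORT"

# The only two behaviors a secure real protocol is allowed to exhibit:
assert ideal_blind_cvqc(lambda x: x + 1, 41, True) == 42
assert ideal_blind_cvqc(lambda x: x + 1, 41, False) == "ABORT"
```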
There's a simulator that just inputs to the ideal functionality whether the server is being honest or malicious, and then the client's output is determined by the ideal functionality. Okay. And so we want to say these are indistinguishable, and this really captures the stronger notion of blindness, because whether the simulated server is being honest or malicious, this bit cannot depend at all on the client's input x. Okay, so the server cannot determine whether or not the client aborts based on the client's input, and that's captured within this real/ideal-world paradigm for implementing this ideal functionality. So the goal of the rest of this talk is basically to discuss how to construct such a composable blind CVQC. Before I do that, I'll note that this notion has actually been studied before. In particular, a work from 2014 showed that a couple of prior delegation protocols are composably blind and sound in the setting where the verifier is quantum but performs only limited quantum operations. Okay, and these are protocols that achieve information-theoretic security. So these are basically protocols between a limited quantum verifier and a fully quantum server, where you want to achieve both blindness and soundness in a composable way. Okay, so this notion has been studied before.
And then, more recently, GV19 actually did construct a composable blind classical verification of quantum computation protocol, which is exactly the notion that I'm interested in, assuming the quantum hardness of learning with errors, except that their security is non-standard in the sense that they don't achieve standard negligible security. In particular, the difference between the real and ideal worlds is some inverse polynomial: you could distinguish them with inverse polynomial probability, where that polynomial grows with the communication complexity of the protocol. Okay, and moreover, their protocol is highly interactive and takes polynomially many rounds. Okay. So that's the prior work, and now I can really say what the main results of this work are. The main technical contribution is our new constructions of this composable blind CVQC primitive. In particular, we obtain a four-message protocol that's fully, negligibly secure from the quantum hardness of learning with errors (QLWE). And we can also compress it to two messages, and even distribute the setup among multiple parties, again from QLWE, but now operating in the quantum random oracle model (QROM). Okay. So again, this is the main technical contribution; the main application is then to show how to use these composable blind CVQC protocols to construct MPC between quantum and classical parties, which I'll mention at the end of the talk. Our protocols achieve the standard notion of malicious security, although we allow execution in the CRS model, where you can have a common random string setup. And the functionalities that our protocols support are what I call pseudo-deterministic quantum functionalities.
Okay, so these are quantum functionalities that have classical inputs and outputs, and they're pseudo-deterministic in the sense that for each input, there's one output that's produced by Q except with negligible probability. Okay. And so there are various settings that we study. One setting is where we have multiple classical clients and one quantum server; they all want to interact, and then all the clients want to output the result of the circuit, and we give a six-round protocol from QLWE. We can also give a three-round protocol, basically one round from clients to server, one round from server to clients, and then one round for a kind of joint decryption, in the QROM. And then we also study the two-party setting with one classical client and one quantum server, and we give basically what's called a NISC, a non-interactive secure computation protocol, for this setting, again from QLWE in the QROM. That just means a two-message protocol, so a round-optimal protocol for this setting. Okay. Right. And so for the remainder of the talk, I'm just going to be focusing on this composable blind CVQC protocol. If you want to see exactly how this protocol is used to build the MPC applications, I'll encourage you to see the paper for that. Okay. And really, the starting point is to notice that the issue with the previous approach, which led to this abort attack, was the fact that in that protocol, when the verifier received the prover's message, what they first did was decrypt, and then they verified. This means that if you want to fully verify the proof and be sure that the prover was being honest, you actually need the decryption key, because you first decrypt and then verify. Right.
And this is what was preventing us from arguing that this protocol satisfies blindness in the stronger sense. A paradigm that's been useful for getting around this problem in the classical setting is to ask: can we switch the order, can we first verify and then decrypt? Right. In this way, you wouldn't need the decryption key for verification, and so even if the prover can see whether the proof verified or not, they might not be able to break blindness, because the decryption key was not necessary for verification. Right. So this is the goal, basically: verify, then decrypt. In our setting, that means: can we actually use the CVQC protocol to first prove a correct evaluation of the QFHE, and only after that decrypt the QFHE? Okay. Unfortunately, it's not actually this straightforward. The reason is that this protocol of Mahadev, the CVQC protocol that we're using, actually only supports pseudo-deterministic circuits Q. Okay, and I explained informally what that was on the last slide, but more formally: Q is pseudo-deterministic if for all classical inputs x, there exists a classical output y such that Q(x) = y except with negligible probability. Okay. So Mahadev showed how to do CVQC for such functionalities. The issue is that the QFHE evaluation, the thing that we actually want to apply this protocol to, is not pseudo-deterministic. Okay. And the reason is that if you take a ciphertext encrypting x and evaluate Q on that ciphertext, what you obtain is a distribution over output ciphertexts encrypting Q(x). So even if Q is deterministic, this ciphertext is actually a distribution over the random coins r. Okay. So this is why it doesn't work to just switch the order and plug in CVQC to prove the correct evaluation of the QFHE.
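A classical toy makes the obstruction concrete. Even a one-bit "encryption" that masks the plaintext with fresh randomness has several ciphertexts per plaintext, so "output the evaluated ciphertext" is a sampling task rather than a deterministic one. This is a toy scheme I'm making up for illustration, not the actual QFHE:

```python
def enc(bit: int, pad: int):
    """A one-bit 'ciphertext'; the pad plays the role of the randomness r."""
    return (pad, bit ^ pad)

def dec(ct) -> int:
    pad, masked = ct
    return masked ^ pad

# Even for a deterministic Q, the evaluated ciphertext depends on the
# randomness: the same plaintext Q(x) = 1 has two distinct encryptions here.
ciphertexts = {enc(1, pad) for pad in (0, 1)}
assert len(ciphertexts) == 2
assert all(dec(ct) == 1 for ct in ciphertexts)
```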
So it doesn't work syntactically, but we will still try to follow this verify-then-decrypt paradigm. Instead of directly using Mahadev's CVQC protocol, we're going to use a more recent protocol, due to Chung et al. (CLLW), that can handle, in some sense, verification for what are called quantum sampling circuits. Okay, so this is the high-level idea. And again, our goal is exactly to construct a blind CVQC protocol following this verify-then-decrypt paradigm. Okay. Before I actually explain the steps for doing that, let's dive a little bit further into Mahadev's CVQC protocol. Okay. What her protocol really is, is a combination of two sub-protocols. One is the FHM delegation protocol, and one is what's called the measurement protocol, which actually makes up the bulk of Mahadev's work. Okay. The underlying FHM delegation protocol considers a quantum prover and a quantum verifier, and it supports pseudo-deterministic circuits; this is actually where the restriction to pseudo-deterministic circuits is coming from. Soundness is actually information-theoretic; there are no cryptographic assumptions. The drawback is that the verifier is quantum. Okay, and so it's just a one-message protocol: the prover sends over a quantum state, and the verifier basically just immediately measures this state in a sequence of bases, either standard or Hadamard. The notation here means that for an n-qubit state ψ and an n-bit string h, we measure each qubit of ψ in either the standard or the Hadamard basis, depending on whether the bit h_i is equal to zero or one.
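For intuition, here is a small simulation of this measurement step for a product state, where each qubit is a pair of amplitudes and h_i selects the basis. This is a sketch under a simplifying assumption (the real FHM verifier measures a general, possibly entangled proof state; function names are mine):

```python
import math
import random

R2 = 1 / math.sqrt(2)  # amplitude of |+> and |->

def measure_qubit(amps, basis_bit, rng):
    """Measure one qubit (a0, a1) in the standard (0) or Hadamard (1) basis."""
    a0, a1 = amps
    if basis_bit == 1:                      # rotate by H, then measure in Z
        a0, a1 = R2 * (a0 + a1), R2 * (a0 - a1)
    return 1 if rng.random() < abs(a1) ** 2 else 0

def measure(state, h, rng):
    """state: list of per-qubit amplitude pairs (a product state, for
    simplicity); h: bit string choosing each qubit's measurement basis."""
    return [measure_qubit(q, b, rng) for q, b in zip(state, h)]

# |0> and |1> measured in the standard basis, |+> and |-> in the Hadamard
# basis, so all four outcomes are determined:
rng = random.Random(0)
print(measure([(1, 0), (R2, R2), (R2, -R2), (0, 1)], [0, 1, 1, 0], rng))
# → [0, 0, 1, 1]
```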
Okay, so basically the verifier just takes the state, immediately measures everything to get a string ŷ, and then applies some classical post-processing to obtain the output y. Okay. And so this is the delegation protocol that we're starting with. What Mahadev showed is how to take such a protocol and basically outsource this measurement part of it to the prover, in the end making the verifier completely classical. Right. So the measurement protocol allows a classical verifier to delegate standard and Hadamard basis measurements to a quantum prover. The verifier has as input its measurement bases h, the prover has as input a quantum state ψ, and at the end of the protocol the verifier outputs the result of measuring ψ in h. Okay. And so let's come back and review: the issue with the final protocol is that this underlying FHM delegation actually only supports pseudo-deterministic circuits. Okay. Now, I did mention there's a newer work, CLLW. One of the things that they do is construct an underlying delegation protocol with a couple of different properties. It actually supports sampling circuits Q, which are quantum circuits that output some arbitrary distribution over classical outputs. Okay. That's good, because in particular it captures QFHE evaluation. But unfortunately, while the soundness is still information-theoretic, it is relaxed to inverse polynomial soundness; it's not standard negligible soundness anymore. Okay. So again, the idea is still to basically take their protocol and combine it with the measurement protocol, but we're going to have to be careful, since our eventual goal is to get the standard, negligible notion of security for delegation and MPC. Okay. So this construction basically proceeds in three steps.
Okay, so we start with the CLLW protocol, and we note that, while it only has inverse polynomial soundness error, if we apply it to the special case of verifying a QFHE evaluation of an underlying pseudo-deterministic circuit, then actually we can obtain a negligibly sound protocol by parallel repeating it. Okay. And of course the resulting protocol still has a quantum verifier. Our second step is then to combine this with Mahadev's measurement protocol to obtain a protocol with a classical verifier. This only gives a one-half-sound protocol, because of the structure of the measurement protocol, so we have to parallel repeat again to finally obtain what we want: a negligibly sound protocol with a classical verifier. Okay. Right, so I'll go through each of these steps in a little bit more detail. So first, we want a protocol with negligible soundness and a quantum verifier. We're going to have the verifier take their input x, first encrypt it under the QFHE, and send it over. And here's where CLLW comes in: we're going to have the prover basically run the CLLW prover, to evaluate the ciphertext to an encryption of Q(x) and send over a quantum state basically proving that this evaluation was done correctly. And so the verifier can run the CLLW verifier to obtain a correctly distributed ciphertext, up to some error. The CLLW soundness error we can set to any parameter, but it has to be inverse polynomial; so let's just say we set it to one fourth. What this guarantees is that the ciphertext output by the verifier is distributed, say, within one-fourth statistical distance of what it should be, which is an encryption of Q(x).
So what that means is that when the verifier then goes and decrypts that ciphertext, we're guaranteed that they decrypt it to the correct value with probability at least three fourths. Okay. So this is where the parallel repetition comes in: we have a three-fourths-sound protocol, and all we do is parallel repeat the CLLW protocol λ times. Now the verifier obtains λ different ciphertexts; they decrypt all λ ciphertexts to get a bunch of values y_i, and a natural thing to do is just to have them take the majority over the y_i they decrypt: whichever is the most frequently occurring y in the set, they say that's the output. So you can show, essentially by Chernoff, that the probability this output y is equal to the correct answer is overwhelming. And I say Chernoff with an asterisk because not everything is independent; for example, the prover could be entangling the states across repetitions. But an argument very similar to standard parallel repetition for QMA verification shows that you can actually get this Chernoff-style argument to work. Okay. And importantly, what we've constructed has a couple of important properties: the verifier first verifies, and only once it's convinced that all the CLLW sub-protocols accept do they decrypt. Okay. So there's no mixture of these steps, and in the end we obtain a negligibly sound protocol. Okay. So, if you recall, the next step is to basically make this verifier classical. So let's look at this verify procedure. Recall that, due to the structure of the CLLW protocol, all the verifier does is first immediately measure all the prover's states in some bases h, and then apply classical post-processing to the results to produce this ciphertext.
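The majority-decoding step itself is simple enough to write down. This is a sketch of just the decision rule (the Chernoff-style analysis over possibly entangled provers is the hard part and lives in the paper):

```python
from collections import Counter

def majority(decrypted):
    """Output the most frequently occurring value among the lambda
    decrypted values y_1, ..., y_lambda."""
    return Counter(decrypted).most_common(1)[0][0]

# If each repetition decrypts correctly with probability >= 3/4, a
# Chernoff-style bound makes the majority wrong only with probability
# exponentially small in the number of repetitions.
assert majority(["y", "y", "z", "y", "w"]) == "y"
```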
So this is the part that we're going to outsource to the prover using Mahadev's measurement protocol. Syntactically, what this looks like is that Mahadev's protocol is basically a four-message, commit-challenge-response type of protocol. Okay. So here, of course, the verifier sends the encryption of x as before, but then they engage with the prover in this four-message protocol, at the end of which the verifier either, if their challenge was zero, just accepts or rejects, or, if their challenge was one, actually goes and obtains an output ciphertext, which they can then pass to the decrypt procedure. Okay. I'm not really going to go into more detail about how this measurement protocol works, but this is syntactically what you get. Okay. And because this challenge is only one bit long, right now we actually only end up with a one-half-sound protocol. Okay. So the final step is to boost this one-half-sound protocol to a negligibly sound protocol, and again we're going to do this with parallel repetition. Okay. Naturally, what we're going to do is just repeat this measurement protocol in parallel λ times. Okay. So what happens when we repeat it λ times; how do we define the verifier? Well, you look at all these challenges; some of them are zero. If any of the sub-protocols reject on a zero challenge, the verifier just rejects. Otherwise, the verifier collects all the outputs of the challenge-one sub-protocols, passes them to the decrypt procedure, and still takes a majority of those. Okay. So again, we maintain this crucial structure: there's first a verification procedure which doesn't use the FHE secret key at all; it just basically uses the underlying measurement protocol. Right.
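Putting the two rules together, the parallel-repeated verifier's classical decision step looks roughly like this (a sketch with hypothetical names; `None` marks a slot the corresponding challenge never produced, and the outputs here stand in for already-decrypted values):

```python
from collections import Counter

def repeated_verifier(challenges, test_ok, outputs):
    """challenges[i] in {0, 1}; test_ok[i] says whether instance i accepted
    (meaningful only when challenges[i] == 0); outputs[i] is the decrypted
    output of instance i (meaningful only when challenges[i] == 1)."""
    # Verification phase: needs no secret key, and abort is allowed.
    if any(c == 0 and not ok for c, ok in zip(challenges, test_ok)):
        return "ABORT"
    # Decryption phase: no abort allowed, just a majority vote.
    ones = [y for c, y in zip(challenges, outputs) if c == 1]
    return Counter(ones).most_common(1)[0][0]

assert repeated_verifier([0, 1, 0, 1], [True, None, True, None],
                         [None, "y", None, "y"]) == "y"
assert repeated_verifier([0, 1], [False, None], [None, "y"]) == "ABORT"
```

The design point is the strict phase separation: everything that can cause an abort happens before the secret key is ever touched.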
It just basically either rejects or accepts and passes along the ciphertexts. And at that point, the verifier no longer has an option to abort; it simply decrypts and then outputs the most frequently occurring y in its set of decryptions. Okay. All right. So it basically remains to show that this parallel repeated protocol is indeed negligibly sound. Okay. And there is some prior work on showing that the parallel repetition of these sorts of protocols is sound. One such work is this work of ACGH, and they gave basically the following parallel repetition theorem. They said: suppose your single-repetition protocol, like the one over here, satisfies the following property: for any malicious but efficient prover P*, if it's the case that they are almost always accepted on a zero round, that is, almost always accepted when the challenge is zero, then the probability that they can cause a false outcome when the challenge is equal to one is negligible. Okay, so it's saying: if they're passing the zero challenge with overwhelming probability, then they only have a negligible chance of cheating. So assuming your underlying protocol satisfies this property, they show that your parallel repeated protocol satisfies the following: again, for any efficient but malicious prover, it holds that the probability that the prover is simultaneously accepted on all zero challenges and can cheat on all of the one challenges is negligible. Okay, so they show that this holds. And this is actually good enough for just parallel repeating plain CVQC, but it turns out it's actually not good enough for our setting.
And why is that? Note that in this parallel repeated protocol, if the prover cheats on just half of the one instances, then it can make the verifier accept a false outcome: all it has to do is change more than half of these y_i to be some other z_i, and then the verifier will output a wrong z. Right. So what we really want to say is the stronger property that, for any efficient P*, the probability that they're simultaneously accepted on all zero rounds and can cheat on even half of the one rounds is still negligible. So this is the stronger parallel repetition theorem that we want, which is what we show in this work. Okay, and this is then enough to show the soundness of this protocol. So let me give a slightly closer look at this parallel repetition theorem. Abstractly, we're looking at the following situation: it's a quantum-prover sigma protocol with classical communication. Okay, so we have a commit phase, a one-bit challenge, and then a response; this is all classical communication, but we're allowing a malicious prover to be quantum. And we can set up some notation: the state of the prover after the commit phase we'll call ψ_P, and we can associate with the verifier two projections that would be applied to ψ_P: Π_0 is the projection onto the space where the verifier accepts when the challenge is zero, and Π_1 is the projection onto the space where the verifier accepts when the challenge is one. So, what ACGH did was basically a rephrasing of the property of the single-repetition protocol that I had on the last slide, and it's what they call the computationally orthogonal projectors property.
It says that for any efficient prover P, the expectation of this expression is negligible: basically, the prover's state cannot essentially satisfy both of these projections, π_0 and π_1. Okay. And this property is stronger than just saying the protocol is one-half sound; it's an extra structural property on the security of the protocol, but in particular it does imply that the protocol is one-half sound. And so what they show is: okay, let's parallel repeat this protocol λ times, sampling the challenge uniformly from the space of all λ-bit strings. Then you can define a different acceptance projector for each challenge. So now there are 2^λ of these acceptance projectors, which are formed by tensoring together the accepting projectors for each of the underlying single-repetition instances, right. And, okay, what they proved is that if the single-repetition protocol has computationally orthogonal projectors, then all 2^λ of the projectors defined here are mutually computationally orthogonal. Okay. And you can use that to prove that the parallel-repeated protocol is negligibly sound. It's not really important to follow the details, but basically what you are proving is that the expected value, over a random choice of challenge, of the expression representing the prover being accepted on challenge ch is negligible. You can write this expression out, squared, and expand all the terms; and what you crucially use is the fact that each of the cross terms is negligible, due to the computational orthogonal projector property of the underlying protocol.
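A small linear-algebra illustration of the tensoring step, using *exactly* orthogonal projectors in place of the computational (negligible-overlap) version from the talk — a toy sketch, not the actual construction:

```python
import numpy as np

# Single-instance accept projectors on one qubit: pi[0] accepts on
# challenge 0, pi[1] accepts on challenge 1, and pi[0] @ pi[1] = 0.
pi = {0: np.array([[1., 0.], [0., 0.]]),
      1: np.array([[0., 0.], [0., 1.]])}

def accept_projector(ch):
    """Tensor the per-instance accept projectors along challenge string ch."""
    P = np.array([[1.]])
    for b in ch:
        P = np.kron(P, pi[b])
    return P

P_00 = accept_projector((0, 0))
P_01 = accept_projector((0, 1))

# Two distinct challenges differ in some coordinate; in that tensor
# slot the product is pi[0] @ pi[1] = 0, so the whole cross term vanishes.
cross = P_00 @ P_01
```

In the real argument the single-instance overlap is only computationally negligible rather than exactly zero, but the cross terms in the expanded expression are killed the same way, slot by slot.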
So this is what ACGH showed. In our setting, we would like to do the same thing. The crucial difference in our setting is that we have to define the acceptance projectors for the parallel-repeated protocol slightly differently. Okay. We can't just tensor the individual projectors together, because we have a different accepting condition: the verifier accepts if all of the challenge-zero projectors accept and at least half of the challenge-one projectors accept. Okay. And what this means is that we no longer have the guarantee that all of these π_ch are mutually computationally orthogonal. In particular, the prover can pass the projectors corresponding to two different challenges if the challenges are close enough in Hamming distance. So, at a high level, what we end up doing is changing a little bit how the challenge is distributed, and we set parameters so that, while not all of these projectors are mutually computationally orthogonal, it still holds that an overwhelming fraction of them are, and that's still good enough to get the final proof to go through and show full soundness. Okay, so we end up in a situation where an overwhelming fraction of these pairs of projectors are indeed computationally orthogonal. And that was all I wanted to say, diving a little bit deeper into the strengthened parallel repetition theorem. I'll just give a quick recap of what I've discussed. So the main goal of this paper, and what I was discussing, is to construct a blind CVQC protocol following this verify-then-decrypt paradigm. Right. Because this is exactly what's useful for achieving what I call composable blind CVQC.
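A back-of-the-envelope check of why "an overwhelming fraction of pairs" is plausible: for two independent uniform λ-bit challenges, their difference is itself uniform, so the probability that they land within Hamming distance t of each other has a closed form, and it is tiny once t is well below λ/2. A quick sketch (the parameter choices are illustrative, not the paper's):

```python
from math import comb

def frac_close_pairs(lam: int, t: int) -> float:
    """Pr[Hamming(ch, ch') <= t] for independent uniform lam-bit
    challenges ch, ch'. Since ch XOR ch' is uniform, this is
    sum_{k=0}^{t} C(lam, k) / 2^lam."""
    return sum(comb(lam, k) for k in range(t + 1)) / 2**lam
```

For example, with λ = 100 and t = 10 the fraction of close (potentially non-orthogonal) pairs is already below 10^-10, so all but a negligible fraction of challenge pairs still give computationally orthogonal projectors.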
And, you know, such a protocol has many applications to MPC in this new setting, where you want to compute quantum functionalities between some quantum parties and some classical parties. Okay. And so the building blocks that we use are this QFHE protocol, the measurement protocol, and then, crucially, the CLLW delegation protocol for quantum sampling circuits. The three steps were: first, parallel repeat CLLW to get a negligibly sound protocol with a quantum verifier. Then we add the measurement protocol on top of that to make the verifier classical, but we end up with only a one-half sound protocol. And then we parallel repeat that, using a strengthened version of the ACGH theorem. Okay, and we end up with what we want, which is actually just a four-round, negligibly sound protocol with a classical verifier. And that's all I wanted to say here, so thank you for listening.