So Bob encodes all these queries and sends the encoded queries to Alice. We want the compiler to be flexible enough to allow Alice to evaluate functions on the encodings. And finally, Bob decodes the evaluated encodings and retrieves Alice's answers. This procedure is what we call a spooky-free compiler. It has an encoding algorithm, and we want, on the one hand, enough flexibility to allow Alice to evaluate these functions. But on the other hand, we want it to be secure enough that whatever a malicious Alice does can always be expressed as local functions. So after decoding, we want the guarantee that each answer still depends only on the respective query. Spooky-free compilers have been implicit in previous works, and specifically the following idea was suggested. For encoding, Bob samples independent keys and encrypts each query under a different key, where Bob uses a fully homomorphic encryption scheme. Since the scheme is fully homomorphic, this allows Alice, in turn, to evaluate a function on each of the ciphertexts. At the end, Bob simply decrypts each of the evaluated ciphertexts, and since this is a fully homomorphic encryption scheme, each answer is the result of the function that Alice computed. And it is tempting to think that, by the security of the encryption scheme, all that Alice can do is apply local functions: the homomorphic encryption scheme supports evaluating arbitrary functions on each ciphertext separately, and we use independent public keys, so we can hope that Alice cannot somehow mix the ciphertexts together. But it turns out that this is not the case. In prior work it was shown that the security of the encryption scheme is insufficient to imply the locality property that we want.
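To make the dataflow of this candidate concrete, here is a minimal sketch in Python. The "FHE" below is a transparent mock with no security whatsoever, and all the names (`keygen`, `enc`, `eval_hom`, `dec`, `compile_round`) are my own illustrative choices, not an API from the talk; the point is only to show how Bob's independent per-query keys, Alice's per-ciphertext evaluation, and Bob's decryption fit together.

```python
# Toy sketch of the candidate spooky-free compiler described above.
# The "FHE" is a transparent mock (no actual encryption) used only to
# illustrate the dataflow; a real instantiation would use a genuine
# fully homomorphic encryption scheme.

def keygen(i):
    # In a real scheme: an independent (pk, sk) pair per query.
    return ("pk%d" % i, "sk%d" % i)

def enc(pk, m):
    return {"pk": pk, "m": m}

def eval_hom(f, ct):
    # Homomorphic evaluation of f on a single ciphertext.
    return {"pk": ct["pk"], "m": f(ct["m"])}

def dec(sk, ct):
    return ct["m"]

def compile_round(queries, alice_fns):
    # Bob encodes each query under an independent key ...
    keys = [keygen(i) for i in range(len(queries))]
    cts = [enc(pk, q) for (pk, _), q in zip(keys, queries)]
    # ... Alice evaluates one function on each ciphertext separately ...
    evaluated = [eval_hom(f, ct) for f, ct in zip(alice_fns, cts)]
    # ... and Bob decrypts to recover the answers.
    return [dec(sk, ct) for (_, sk), ct in zip(keys, evaluated)]

answers = compile_round([3, 5], [lambda x: x * x, lambda x: x + 1])
print(answers)  # [9, 6]
```

Note that in this honest flow each answer depends only on its own query; the whole question of spooky-freeness is whether a malicious evaluator can be forced to respect that structure.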
In more technical terms, the security of the encryption scheme implies a property that is called no-signaling, meaning that the distribution of each answer cannot depend on the other queries. Functions that are no-signaling but not local are called spooky functions, and we want to avoid exactly this kind of spooky function. That is why this is called a spooky-free compiler; here is the source of the name. More recently, there was a concrete counterexample: a fully homomorphic encryption scheme such that, even though you encrypt the queries under different, independent public keys, a malicious Alice can still evaluate functions that are not local. That work complemented the counterexample by constructing a spooky-free compiler, but the construction was based on a knowledge assumption. So here is the formal definition of spooky-freeness. We define it using two experiments. The first, which we call the real experiment, is parameterized by an adversary A, and it goes like this. Bob communicates with the adversary: first he samples the keys and sends the adversary the encodings of the queries; then the adversary receives the encodings and generates some evaluated encodings. The result of this experiment consists of the queries Bob encoded together with the decoded answers. The second experiment, which we call the simulated experiment, is the ideal one. In this experiment, Bob does not talk to just one adversary; he talks to independent simulators, and the communication is the ideal interaction we started with. Bob sends all his queries, then each simulator sends its answer, and the simulators cannot communicate with each other. Similarly, the output of this experiment is again the queries together with the answers Bob got. And we say that the compiler is spooky-free if the following holds.
For every adversary A in the real experiment, and for every distinguisher D that tries to distinguish between the first experiment and the second one, there exist simulators S such that whatever D can learn from the real experiment can always be simulated by the simulators S. This definition captures exactly what we wanted: it shows that whatever an adversary can do can always be expressed by these local simulators S, and since the simulators cannot communicate with each other, the functionality they apply is always local. So this is the definition, and here is the main motivation for constructing such a spooky-free compiler. When we speak about NP delegation, we consider the following setting. It is a communication between a verifier and a prover, and they want to decide whether an instance x is in an NP language. In the honest setting, we assume that the prover also knows the witness. Communication goes in two rounds: first, the verifier sends the queries to the prover; then the prover has to come up with an answer; and then the verifier needs to decide whether x is in the language or not. We want the following properties. We want completeness, meaning that if the instance is in the language, then in the honest setting the verifier always accepts. But we also want soundness, meaning that if x is not in the language, then for every efficient adversarial prover P, the verifier rejects with high probability. We can consider two flavors of soundness. In the first one, which is the more common one, we assume that the instance x is fixed first, and only then does the verifier send the queries; so the instance is independent of the verifier's queries. This setting is called selective soundness. But we can also consider a different flavor of soundness, where first the verifier sends the queries, and only then does the adversary get to choose the instance.
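The gap between the real and simulated experiments in the definition above can be illustrated with a toy example, run entirely in the clear with no encodings; all names here are mine, not from the talk. A "spooky" adversary sees all queries jointly and can correlate answers across them, while the independent simulators of the ideal experiment are structurally forced to be local.

```python
# Minimal illustration of local versus "spooky" behavior.

def run_real(queries, adversary):
    # Real experiment: the adversary sees all queries at once,
    # so nothing prevents answer i from depending on query j.
    return adversary(queries)

def run_simulated(queries, simulators):
    # Ideal experiment: each simulator sees only its own query,
    # so the answers are local by construction.
    return [s(q) for s, q in zip(simulators, queries)]

# A spooky strategy: swap the two answers (answer i depends on query j != i).
spooky = lambda qs: [qs[1], qs[0]]
# Local simulators can only ever be functions of a single query each.
local_sims = [lambda q: q, lambda q: q]

print(run_real([1, 2], spooky))           # [2, 1]
print(run_simulated([1, 2], local_sims))  # [1, 2]
```

No choice of local simulators can reproduce the swap on all query pairs, which is exactly the kind of behavior a spooky-free compiler must rule out.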
So the instance can depend adversarially on the queries. This is a stronger notion of soundness, which we call adaptive soundness. And last but not least, the most important property we want is for the whole protocol to be very succinct. Specifically, we want the length of the queries and the answer to be sublinear in the length of the witness, because otherwise this sort of protocol becomes trivial. There are a few known constructions of such delegation schemes, but all known constructions are either in the random oracle model or based on knowledge assumptions. There are no known constructions from standard assumptions, say factoring or LWE, which are more conservative than knowledge assumptions. But if we slightly change the setting, things get much better. It is known that if the verifier is allowed to talk with several independent provers, where again we assume that each prover knows the witness, and we again have two rounds of communication where first the verifier sends all its queries and then the provers send their answers, then, assuming the provers cannot communicate with each other, there is a known theorem that such protocols exist that are complete, unconditionally sound, and very succinct; specifically, the communication is polylogarithmic. So this is the main motivation, and the idea is to take one of these multi-prover protocols and use the spooky-free compiler to compile the protocol into a single-prover one. Specifically, we do the following. First, the verifier samples and encodes queries for the multi-prover protocol, and sends the encoded queries to the prover. Then the prover evaluates the proofs, where again the proofs come from that multi-prover protocol. Finally, the verifier simply has to decode the evaluated encodings to obtain the answers.
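The two-round compilation just described can be sketched end to end as follows. The encodings are again insecure stand-ins, and the two-prover "protocol" (a parity check on each query) is a toy of my own invention, chosen only so the verifier has something concrete to decide on; none of these function names come from the talk.

```python
# Toy single-prover delegation obtained by running a trivial two-prover
# protocol through mock encodings. Insecure stand-ins, illustration only.

def encode(q):        return {"ct": q}           # stand-in for Enc(pk_i, q)
def evaluate(f, ect): return {"ct": f(ect["ct"])}
def decode(ect):      return ect["ct"]

def compiled_delegation(queries, prover_strategies, verifier_decision):
    encoded = [encode(q) for q in queries]                  # round 1: V -> P
    evaluated = [evaluate(f, e)                             # prover evaluates
                 for f, e in zip(prover_strategies, encoded)]
    answers = [decode(e) for e in evaluated]                # round 2: P -> V
    return verifier_decision(queries, answers)              # accept / reject

# Toy check: each "prover" must return the parity of its own query.
honest = [lambda q: q % 2, lambda q: q % 2]
decide = lambda qs, ans: ans == [q % 2 for q in qs]
print(compiled_delegation([4, 7], honest, decide))  # True
```

The soundness argument in the talk is precisely that any single prover attacking this compiled protocol decomposes, via spooky-freeness, into per-query local strategies of the kind `prover_strategies` models here.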
And one can prove that the resulting delegation protocol is also sound. Why is that? Since we use a spooky-free compiler, whatever the adversarial prover does can always be expressed as local simulators, and local simulators, or any local adversary, are exactly what our multi-prover protocol is sound against. So whatever an adversarial prover does can also be described by local functions, and our multi-prover protocol is sound against that; this gives us soundness. It is an open question whether we actually get adaptive soundness. I will not go into it, but it does not currently seem likely that this protocol is also adaptively sound. So what we showed in our work is the following. We have two results on spooky-free compilers. The first one is a sort of warm-up, and it shows that if you allow the evaluated encodings to be exponentially long in the length of the answers, then we can actually construct spooky-free compilers from any FHE. However, if you plug this into the transformation, you see that the resulting delegation scheme is not succinct anymore; in fact, it becomes trivial. This shows that even though we can define spooky-free compilers without talking about delegation, these two notions are inherently connected, and constructing spooky-free compilers in a very relaxed range of parameters is relatively easy. For the second result, we have some bad news: we show a negative result. If you require the evaluated encodings to be even slightly shorter than that, then we have a black-box separation. Our black-box separation applies to any falsifiable assumption; I will explain what falsifiable assumptions are in just a few slides, but for now you can think of factoring, or the decisional Diffie-Hellman assumption in a group. In fact, we show something a bit stronger: the negative result holds even for spooky-free compilers with restricted functionality.
Even if the evaluation needs to support essentially a single function, and the encoding only needs to encode queries coming from a specific distribution, our negative result still applies. So what are black-box separations? Say that I want to show that if factoring is hard, then my spooky-free compiler is indeed spooky-free. Equivalently, if you give me a spooky adversary, then I can construct an algorithm that solves factoring efficiently. This looks something like this: I have a reduction, and my reduction takes as input an adversary; if you indeed give me such an adversary, then I can construct an algorithm that, given a large integer, factors it. Usually this is a setting where my reduction uses only oracle access to my adversary, and this sort of reduction is called a black-box reduction. We show that you cannot construct such a black-box reduction based on factoring, and you can replace factoring with any assumption that can be described as a game between a challenger and an adversary. These kinds of assumptions are called falsifiable assumptions. So we show that you cannot construct a spooky-free compiler with a black-box reduction based on any falsifiable assumption. Here is the high-level proof. Our starting point was an impossibility result by Gentry and Wichs, who showed a black-box separation for adaptively sound delegation. Their proof goes something like this. We start with an inefficient adversary A; since we allow the adversary to be inefficient, it can be shown that such an adversary always exists. Then, for the second step, we simulate this inefficient adversary A with an efficient adversary A tilde. Now, A tilde does not necessarily break the soundness of the delegation protocol at all; however, from the point of view of the reduction, it cannot tell whether it is communicating with A or with A tilde.
Now, since our inefficient adversary A breaks soundness, we have the guarantee that the reduction, given access to A, breaks the assumption, say factoring. And since the reduction cannot tell whether it is interacting with A or with A tilde, we get that the reduction with access to A tilde also breaks the assumption. But A tilde is efficient, and the reduction is also efficient, so we get an efficient algorithm that breaks the assumption, which is a contradiction. So this is the result of Gentry and Wichs, and we want to apply this sort of idea in our setting. Since spooky-free compilers also imply delegation, we could hope to apply the same ideas. But there are two problems. First, the impossibility result of Gentry and Wichs applies only in the adaptive setting, and as I said, spooky-free compilers only seem to give selective soundness, and it is not known whether the result of Gentry and Wichs extends to the selective setting. But we also have a more inherent problem. The main idea of the second step of Gentry and Wichs is to simulate the inefficient adversary A with an efficient adversary A tilde. And simulation should sound bad when considering a spooky-free compiler, because this is the exact opposite of what we want: if we set the simulator to be A tilde, we get an efficient simulator that simulates our adversary A, and this is exactly the negation of the definition of a spooky-free compiler. So this is not so good; carrying out this step would itself construct an efficient algorithm that breaks spooky-freeness. So here is the solution. We allow the adversary A and the distinguisher D to share some hard-coded string sigma, and we are going to construct sigma so that no simulator can simulate it. The important thing to notice is that sigma appears on both sides: both the adversary and the distinguisher hold it.
So even though we replace the adversary with a simulator S, the simulator still has to convince our distinguisher D, and D has this hard-coded string sigma, which does not allow any simulator to simulate the adversary. So if our adversary cannot be simulated, it means spooky-freeness is broken, and we are back in business: if it is broken, then the reduction, given black-box access to the adversary and the distinguisher, can break the assumption. The second step is also similar to the one in Gentry and Wichs, and it is to simulate our adversary. For the simulation, we replace sigma with another string sigma tilde, and sigma tilde, together with a slight modification of A and D, allows us to simulate A and D. And again, since the reduction cannot tell whether it is talking to the efficient or the inefficient adversary, we get that the reduction, given the efficient adversary, also breaks the assumption. So again we get an algorithm that is efficient and breaks the assumption. I have a few more slides and five minutes left, so can I continue? I will sketch, at a very high level, how our construction looks. We start by defining two languages, L and L-bar. L is simply going to be the image of a pseudorandom generator, and L-bar consists of the strings that are not in the language. Since we are using a pseudorandom generator, instances in L are indistinguishable from instances in L-bar. How is our adversary going to look? We have the reduction, we have the adversary, and on the side the distinguisher, and we promised a common string sigma. Sigma is built by simply sampling independent instances x-bar_1, ..., x-bar_t that are not in the language. So what does the adversary do? The reduction sends it encodings, and the adversary treats these encodings as encodings of the queries of a multi-prover protocol. What does it do then? It samples an index i between 1 and t and sends a fake proof for the instance x-bar_i; such a fake proof always exists.
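The two languages and the shared string sigma can be sketched as follows. The hash-based "PRG" is only a stand-in I chose for illustration (a real construction would use an actual length-doubling pseudorandom generator), and the sample count 8 is arbitrary; the one structural fact the sketch relies on is that a random string of the output length falls outside the sparse image with overwhelming probability.

```python
import hashlib
import os

# Sketch of the languages in the negative result: L = image of a
# length-doubling "PRG" (here a hash stand-in, for illustration only).

def prg(seed: bytes) -> bytes:
    # 16-byte seed -> 32-byte output, so the image covers at most a
    # 2^-128 fraction of all 32-byte strings.
    return hashlib.sha256(seed).digest()

def sample_yes_instance():
    seed = os.urandom(16)
    return prg(seed), seed        # x in L, together with its witness (the seed)

def sample_no_instance():
    return os.urandom(32)         # outside L with overwhelming probability

# The hard-coded string sigma: independent no-instances shared by A and D.
sigma = [sample_no_instance() for _ in range(8)]
print(len(sigma), len(sigma[0]))  # 8 32
```

This also shows why the efficient adversary works later in the talk: replacing `sample_no_instance` with `sample_yes_instance` hands the adversary a witness, yet by pseudorandomness the reduction cannot tell the two versions of sigma apart.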
A fake proof here is one that makes the verifier accept even though the instance is not in the language; it exists because the soundness of the multi-prover protocol only holds against local provers, while our inefficient adversary sees all the queries together. So to sum up, our inefficient adversary sends a fake proof for a sampled index i. Then the distinguisher, given the queries, the decoded answers, and the index i, simply looks up the i-th instance in the string sigma and verifies, using the multi-prover protocol, whether the queries and answers are indeed an accepting proof for x-bar_i, and outputs accordingly. So, very quickly, why does this adversary work? Here we have that A sends a fake proof, and D verifies this proof; since these are fake proofs that nevertheless verify, D outputs one with high probability. However, when we consider any simulator, by definition the simulator is local, and for any index it chooses, the instance is not in the language. So in order for the simulator to make D output one, it must produce an accepting proof for an instance not in the language. And since the simulator is local, it cannot do that, simply by the soundness of the multi-prover protocol. So we get that the simulator cannot simulate this proof. And very briefly, our efficient adversary is obtained by simply replacing the string sigma with instances that are in the language: we sample x in the image of the pseudorandom generator, in which case we also have the witness, namely the seed. Now A can simply generate honest proofs for the instance, and D will verify them. Finally, after tweaking A and D a little, these two experiments are indeed indistinguishable. So this is it, and thank you very much.