Thanks, Stefano, and good morning everyone. I'll be talking about arguments of proximity; it's joint work with Yael Kalai. The setting I want to consider is the following. We have Alice, who, let's say, is a medical researcher or a social scientist, and she has access to some huge database: if she's a medical researcher, some public health data; if she's a social scientist, maybe the Twitter graph. A really huge database, and she wants to run some statistical computation on it. Unfortunately, Alice is a poor researcher. She doesn't have access to expensive computing machinery, and she can't afford to download this database, or even to run in time linear in the database's size; she can't even read the entire database. But she still wants to perform the computation. Thankfully, nowadays we have the notion of cloud computing, so what we'd like is to have Bob, who owns a lot of very powerful servers, perform the computation for Alice. These powerful servers have no problem downloading the entire database and performing the computation, and once the server has the result, it can communicate it to Alice. That's great. The only problem is that Alice doesn't want to entirely trust the server. She's afraid that maybe the server is just lazy and sends some default answer without actually performing the expensive computation, or maybe the server is even totally malicious.
Perhaps it's intentionally trying to mislead Alice into accepting a wrong answer. So Alice wants to be able to verify the result that the server claims. There has been a lot of work on outsourcing computation in a way that can be verified in linear time, which is of course very remarkable, because just performing the computation itself could take much more than linear time. But in our setting that won't do, because as I said before, Alice can't even afford to run in time linear in her input. So we want sublinear-time verification. This notion was first considered in a paper about ten years ago by Ergün, Kumar, and Rubinfeld, and it was recently revisited by Guy Rothblum, Vadhan, and Wigderson two years ago. The question these papers ask is: can we verify without even reading the input? The reason we can't read the input is that if we want to run in sublinear time, then in particular we can't read the entire input. Maybe the initial reaction to this question is an obvious no, right?
We can think of a function that is very sensitive to bit flips. Think, for example, of checking whether the parity of the database is zero. The server can always flip a random bit and supply a proof that corresponds to that tweaked database, and if you run in sublinear time, your chances of catching that are extremely low. So it seems as though no solution to this problem can be had. But thankfully, this is possible if you're willing to allow for an approximate answer. The notion of approximation that these two papers took, and that we follow, comes from the property testing literature. It relaxes the requirement in the following way: we don't require the verifier to reject every false input, but rather only inputs that are blatantly false, that look very, very different from correct inputs. The picture to have in mind is the following. We have our language L, which is the blue area in the picture: inputs that are in the language. The red area is inputs that are far from the language. Now we also look at the area that's close to L: we measure distance in Hamming distance and consider inputs that are very close, say you need to flip at most 0.01 of the bits in order to get an accepting input. This gives us a new purple area. What we require from our verifier is to accept every yes instance, so every instance within the blue area, and to reject every instance in the red area; for the purple area, the inputs that are relatively close to L, we make no guarantee. I want to stress that this is really a new type of proof system. What the verifier is convinced of is not that x is in the language, but rather that x is close to an input that is in the language. So it's a proof of proximity to the language, and in fact you can think of this as an interactive-proof variant of property testing.
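To make the bit-flip obstacle concrete, here is a tiny simulation (my own illustration, not from the talk's protocol) of the parity example: a cheating server flips one random bit, and a tester that reads only q random positions catches it only when its sample happens to hit the flipped position, which occurs with probability exactly q/n.

```python
import random

# Toy simulation (hypothetical, not the talk's protocol): the "language"
# is databases with even parity. A cheating server flips one random bit,
# and a tester that reads only q random positions catches the flip only
# if its sample contains the flipped position.

def detection_probability(n, q, trials=20000):
    hits = 0
    for _ in range(trials):
        flipped = random.randrange(n)        # position the server flips
        sample = random.sample(range(n), q)  # positions the tester reads
        if flipped in sample:
            hits += 1
    return hits / trials

# The true detection probability is exactly q/n; with q much smaller than
# n this vanishes, so exact sublinear verification is hopeless here.
print(detection_probability(10_000, 100))  # empirically close to 0.01
```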
So we're augmenting the property testing framework by adding online interaction with a powerful but untrusted prover. More formally, this is the setting we have. We have our verifier, Alice. She wants to check a statement of the form "x belongs to a language L". She has random access to the input x, and online interaction with a powerful but untrusted server. We have the standard completeness requirement, which says that if x belongs to the language, then Alice accepts with high probability. But now, instead of saying that Alice needs to reject every false statement, we say that Alice only needs to reject statements that are far, again in Hamming distance, from the language. So if x is epsilon-far from the language, then no matter what the cheating prover does (it can be computationally unbounded), Alice rejects with high probability. That's the notion of an interactive proof of proximity, or IPP. Some important parameters to bear in mind within this model are: the query complexity, which is the number of queries that Alice makes to the input x; the amount of communication with the prover, which we'd like to minimize; the number of rounds of interaction, which of course we don't want to be too large; and obviously the running time, since we don't want the verifier to work too hard, and in particular we'd like the verifier to run in sublinear time. Also, for the application that we have in mind, outsourcing computation, we don't want the honest prover to work too hard.
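As a minimal sketch of the distance notion underlying the soundness condition (the function names and the brute-force check are mine, for illustration only): x is epsilon-far from L if every string in L disagrees with x on more than an epsilon fraction of positions.

```python
# Minimal sketch (names are mine, illustration only): x is epsilon-far
# from L if every y in L disagrees with x on more than an epsilon
# fraction of positions, measured in relative Hamming distance.

def hamming_fraction(x, y):
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y)) / len(x)

def is_epsilon_far(x, language, eps):
    # Brute force over an explicitly listed (tiny) language.
    return all(hamming_fraction(x, y) > eps for y in language)

# The language of 4-bit strings with even parity.
L_even = [f"{v:04b}" for v in range(16) if f"{v:04b}".count("1") % 2 == 0]

print(is_epsilon_far("1000", L_even, 0.2))  # True: one flip away, 0.25 > 0.2
print(is_epsilon_far("1010", L_even, 0.2))  # False: already in the language
```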
So in this model cheating provers can be unbounded, but we'd like the honest prover to run in, say, polynomial time. In their paper, RVW gave the following result. They constructed IPP protocols for any language computable in bounded depth: every language in the complexity class NC has an IPP with a polylogarithmic number of rounds and roughly square-root-of-n query complexity, communication complexity, and verifier running time, where the prover runs in polynomial time. What I find remarkable about this result is that it captures a very large class of functions. This is in contrast to the property testing setting, in which typically every different problem has a completely different solution; here they give one solution that handles any problem within this very large class. Still, you could ask: is this result tight? What do I mean by tight? You can think of a few things. What about the number of rounds? Here we have a polylogarithmic number of rounds; can we do it in fewer, maybe just a constant number of rounds, maybe one round? What about the class of languages? Here we had NC, and that's already a lot, but can we handle any polynomial-depth computation rather than just bounded depth? And lastly, maybe most interestingly, in the RVW result the verifier runs in square-root-of-n time. That's already sublinear, and it's a remarkable saving, but we can be greedy: we can hope for maybe polylogarithmic running time, maybe even constant running time. So, can we do these things?
In this work we tried to look at these questions, and we have an upper bound and a lower bound. Our upper bound addresses, or at least partially solves, the first two questions. We give a construction that simultaneously reduces the number of rounds to just a single round of interaction (the verifier sends a question to the server, gets back an answer, and that's it) and extends the class of languages from NC all the way to P, that is, any language that you can decide in polynomial time. But this does come at a cost. In the RVW result, soundness was unconditional: for every x far from the language, no matter what the cheating prover did, the verifier would reject with high probability. Here we restrict to computational soundness, so we only care about cheating servers that run in polynomial time and, in particular, can't break crypto. I'm going to talk more about both results, but let me first say what the second result is. It's a lower bound, and it addresses the third parameter that we wanted to improve: the verifier's running time. We actually show that the square-root-of-n verifier running time in the RVW IPP is in fact inherent, so it can't be improved. There is some language in NC, in fact in NC1, for which square-root-of-n running time is necessary, and in a sense this result also holds in the computational setting, with computational soundness; I'll talk more about this lower bound. OK, so first things first: I want to talk a little bit about the upper bound. Let me formally define the model. Following the classical literature, we call this an argument of proximity. It's no longer going to be a proof; the verifier is not entirely convinced, she only knows that the statement is true as long as the cheating server is computationally bounded.
So completeness remains the same, and the model also remains the same; what we're relaxing is soundness. As before, we only look at x's that are far from the language, but now, instead of considering every cheating prover strategy, we focus only on efficient, polynomial-time cheating provers. For such provers, when they interact with the verifier on an input that is far from the language, the verifier should reject. That's an argument of proximity, and it actually turns out that these are fairly easy to build if you allow a little bit of interaction. There is the classical result of Kilian, followed up by Micali, giving a classical argument with linear-time verification, based on PCPs together with collision-resistant hashing. There is also a notion of a PCP of proximity, which was constructed in a work by Ben-Sasson et al. about ten years ago, and it turns out that if you replace the PCP in the Kilian-Micali construction with a PCP of proximity, that already gives you a sound protocol.
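To give a flavor of how collision-resistant hashing enters Kilian-style arguments, here is a heavily simplified Merkle-tree sketch (my own toy code, not the actual construction; it assumes a power-of-two number of leaves and uses SHA-256 as a stand-in for the collision-resistant hash): the prover hashes the long PCP string down to a short root, and can later open any queried position with a logarithmic-size authentication path.

```python
import hashlib

# Heavily simplified Merkle-tree commitment (toy code; assumes the number
# of leaves is a power of two). The prover commits to a long string via a
# short root; opening position i reveals the leaf plus a logarithmic-size
# authentication path that the verifier rehashes against the root.

def H(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    layer = [H(x) for x in leaves]
    while len(layer) > 1:
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def open_path(leaves, i):
    layer = [H(x) for x in leaves]
    path = []
    while len(layer) > 1:
        path.append(layer[i ^ 1])  # sibling hash at this level
        layer = [H(layer[j] + layer[j + 1]) for j in range(0, len(layer), 2)]
        i //= 2
    return path

def verify_path(root, i, leaf, path):
    h = H(leaf)
    for sibling in path:
        h = H(h + sibling) if i % 2 == 0 else H(sibling + h)
        i //= 2
    return h == root

pcp = [bytes([b]) for b in b"0110"]  # stand-in for a long PCP string
root = merkle_root(pcp)
print(verify_path(root, 2, pcp[2], open_path(pcp, 2)))  # True
print(verify_path(root, 2, b"x", open_path(pcp, 2)))    # False: wrong leaf
```

In Kilian's protocol the verifier then spot-checks a few PCP positions against the committed root; a cheating prover that opens inconsistent values at some position must find a hash collision.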
So what you get is the following theorem, based on collision-resistant hashing: a four-message argument of proximity for every language in NP, where the query complexity, communication complexity, and verifier running time are all polylogarithmic, and the prover is efficient as long as it gets the NP witness. OK, so that's great, but that's four messages, or two rounds, and the question is whether we can reduce this all the way to one round. We show that this is indeed the case: we give a one-round argument system, an argument of proximity, for every language in P. The cryptographic assumption that we make is sub-exponentially hard fully homomorphic encryption, which I guess is a relatively mild assumption nowadays. What we obtain is that every language in P has a one-round argument of proximity where the query complexity, communication complexity, and verifier running time are all sublinear; specifically, they are n to the two-thirds, and the prover runs in polynomial time. You might wonder where this n to the two-thirds comes from, and we wondered this too. We think it can be improved to square root of n, as in RVW, but we haven't fully proved that yet; we believe it's possible. I won't have time to show you the full proof, but let me show you a quick outline. The idea is first to construct an information-theoretic object, an MIP of proximity: a multi-prover interactive proof of proximity, which extends an interactive proof of proximity by allowing multiple non-communicating servers. In fact, we are going to let them communicate, in the no-signaling sense that we heard about in the previous talk. The no-signaling there was with a different spelling, but it's the same
notion. But anyhow, I won't get into this construction; let me just mention that it's a non-trivial combination of a protocol that we had in a result together with Yael Kalai last year with this RVW construction. It takes protocols from both and combines them in a non-trivial way; we had to work a little bit to make it fit together. Once we have this MIP of proximity, we can transform the latter using more or less tools that are already known in the classical setting, using fully homomorphic encryption. This is a heuristic that was suggested in a work by Aiello, Bhatt, Ostrovsky, and Rajagopalan in 2000, and it was shown to be secure in a work from two years ago, again with Kalai, and with Raz, assuming that the MIP has this strong no-signaling soundness property. OK, so that's the first result. The second result is the lower bound that I mentioned; in fact, it's going to be two lower bounds, and here we also have a cryptographic assumption. I see why our proof needs it, but I don't see why the result itself should need it, and I think it's very interesting to try to get rid of this assumption. Regardless, the assumption is the following: we assume an exponentially strong length-doubling cryptographic PRG which is computable in NC1. If you're worried about this exponential strength and are only willing to assume sub-exponential hardness, that only deteriorates our lower bound; the lower bound will still hold, just with slightly worse parameters. Under this assumption, we show a language in NC1, so in a very low complexity class, for which every interactive proof of proximity requires square-root-of-n complexity, say verifier running time. I should note that RVW already had a lower bound, but theirs only worked for constant-round protocols, and in fact its quality deteriorated with the number of rounds. Our result is independent of the
number of rounds; it could be square root of n rounds for all we care. The second lower bound is an extension to the notion of a one-round argument of proximity. It says, again, that there is a language for which every one-round argument of proximity requires square-root-of-n complexity. If we allow two rounds, we have the result using PCPs of proximity, which overcomes the square-root-of-n barrier, but there are a lot of stars there. The stars basically say that this only rules out constructions that are based on standard assumptions and use standard proof techniques. I won't get into exactly what's written there, but this is both based on and similar to a result by Gentry and Wichs from a few years ago. I want to give you a little bit of intuition about this lower bound. The idea is to look at a hard NP language; let's think of the language as being 3SAT for a minute. We don't know how to make it actually work with 3SAT, but let's think of that for a second. Now consider a related language, which is actually quite easy to compute. The inputs of this language consist of pairs of x's and w's. The x's are encoded under an error-correcting code; you can ignore that for a second. Think of the inputs as being composed of instances and witnesses, and what you want to check is that the witness is valid. So think of getting a formula and an assignment, and you want to check that the assignment satisfies the formula. Why is this easy language hard in the IPP setting? The intuition is that the verifier can only read a limited number of bits from the witness, so it has to decide whether x is in the language given only part of the witness, which already seems hard. In addition to reading parts of the witness, it also has online interaction with the prover, which supplies it with some bounded amount of information, potentially about the witness.
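As a toy version of the witness-checking language (my own illustration; the actual hard language in the talk also encodes x under an error-correcting code and is built from the PRG): given a 3SAT formula and a full assignment, checking validity is a simple linear-time scan, which is what makes the language easy, while an IPP verifier only gets to read a few bits of the assignment.

```python
# Toy witness check (my illustration, not the actual hard language):
# a clause is a triple of literals; literal k refers to variable |k|,
# negated when k < 0. Checking a full assignment is one linear pass.

def satisfies(clauses, assignment):
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

phi = [(1, -2, 3), (-1, 2, 3), (1, 2, -3)]

print(satisfies(phi, {1: True, 2: True, 3: False}))   # True
print(satisfies(phi, {1: False, 2: False, 3: True}))  # False
```

An IPP verifier, by contrast, sees only a bounded number of the assignment bits plus bounded communication, which is exactly what the hardness argument exploits.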
So what we're looking for, the property that we need from this hard language L-hard, is that it remains hard even when the algorithm trying to decide L-hard is given both oracle access to, say, square root of n over a hundred bits of an alleged witness (so square root of n bits of the alleged satisfying assignment) and, in addition, square root of n communication with an all-powerful but untrusted prover. Those are the properties that we need. I won't have time to show you the actual construction of the language; as you can guess, it's based on the cryptographic PRG that we assumed. OK, so just to summarize: we showed a protocol that achieves sublinear-time verification for any language in P, and it uses only one round of communication, but it does assume computational soundness, that is, that the cheating provers are computationally bounded. One corollary that I didn't have time to go over: even if you don't like this model of sublinear-time verification, you can use this result to get linear-time verification. That's something you would think we already knew, but actually the results on linear-time verification typically achieve only quasi-linear time; using these results on sublinear-time verification, you can get an exact result, without the proximity relaxation, with exactly linear-time verification, which is just a cute thing to know. That's the upper bound that we have. In terms of lower bounds, we show that square root of n is in fact tight for these interactive proofs of proximity, for a language in NC1, and in a sense for arguments of proximity for P. If you're interested specifically in no-signaling MIPs, we also have an information-theoretic lower bound there, for no-signaling MIPs of proximity. Just two open questions that I'd like to conclude with. First of all, what about constructing better proofs or arguments of proximity for more languages or more
classes? Some progress on that has been made: in a joint work with Goldreich and Gur, we have a construction of interactive proofs of proximity for certain subclasses of NC with, say, polylogarithmic complexity rather than square root of n. But I think that getting improved results even for specific properties that people in the property testing literature are interested in would be very interesting. The other thing that personally really bugs me is that our lower bound for interactive proofs of proximity is based on a cryptographic assumption. I would expect a lower bound that's really information-theoretic and somehow based on the combinatorics of the problem, rather than on computational complexity. So, that's it.