We're about to start our last session of the day before the IACR membership meeting, and before we can all go enjoy a good rump session. So I'd like to introduce our first speaker, Gal Arnon from Weizmann, who's going to present joint work with Alessandro Chiesa and Eylon Yogev on the PCP theorem for interactive proofs and applications. Thank you.

Okay, so I'll be talking about the PCP theorem for interactive proofs and applications, or: how you can be convinced by a conversation while barely listening, even to yourself. So hopefully by the end of this talk, you'll never have to endure a boring conversation again. This is joint work with Alessandro Chiesa and Eylon Yogev.

So in an NP proof system, a verifier wants to check the validity of some statement. The prover sends over some message, and the verifier reads the entire message and decides whether it's convinced or not. The PCP theorem tells us that if the prover encodes its message in a specific way, then the verifier can be convinced while reading just a few bits of the proof. An interactive proof is a generalization of an NP proof system where the verifier and prover have a longer conversation with multiple messages. And in this paper, we ask: how would a PCP theorem look for interactive proofs? Basically, we would encode each one of the messages in some way and then have local access to each one of the messages, both the verifier's and the prover's. This object is called an interactive oracle proof, or IOP, and it's the interactive analog of a PCP.

So our main theorem says that you can take a k-round interactive proof and transform it into an IOP with the same number of rounds. It's public coin, so the verifier's messages are actually not encoded; they're just uniformly random bits. The proof length is polynomial. Following the interaction, the verifier tosses a logarithmic number of random coins to decide where to query, and at the end, it queries each message a constant number of times.
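To fix the model in code, here is a minimal sketch of what oracle access means for an IOP verifier. This interface is entirely my own illustration, not from the paper: each prover message is wrapped as an oracle that can only be queried symbol by symbol, a query counter enforces the budget, and a trivial all-ones predicate stands in for a real local decision rule.

```python
import random

class Oracle:
    """Wraps a proof string so the verifier can only query it symbol by symbol."""
    def __init__(self, msg):
        self._msg = msg
        self.length = len(msg)
        self.queries = 0

    def __getitem__(self, i):
        self.queries += 1
        return self._msg[i]

def toy_verifier(oracles, num_checks=3):
    """Spot-checks each oracle at a few random positions; the decision depends
    only on the queried symbols, never on the full messages."""
    for oracle in oracles:
        for _ in range(num_checks):
            if oracle[random.randrange(oracle.length)] != '1':
                return False
    return True

random.seed(1)
good = [Oracle('1' * 64), Oracle('1' * 64)]
accepted = toy_verifier(good)  # reads only num_checks symbols per message
```

The point of the counter is that, unlike an NP or IP verifier, this verifier's running time and query count do not grow with the length of the messages.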
Previously, this was only known for extreme values of k: it was shown by Drucker for two rounds and by Condon et al. for a polynomial number of rounds. I'll just mention that they didn't talk about IOPs, but the model they were looking at is actually equivalent to public-coin IOPs.

Okay, so let's look at some applications of our theorem. Our first application is IOP-to-IOP transformations: we get new generic transformations for IOPs that were previously only known for IPs. For example, suppose we want to reduce the number of rounds of an IOP, so we have a k-round IOP and we want a (k/2)-round IOP. We can take this IOP, treat it as an IP by allowing the verifier to just read everything, then use a classical transformation taking a k-round IP into a (k/2)-round IP, and finally leverage our main theorem to get back a (k/2)-round IOP. Similarly, we can get private-coin to public-coin transformations for IOPs via the Goldwasser-Sipser transformation, or perfect completeness, in the same manner.

Our second application is to hardness of approximation. In the satisfiability problem you're given a Boolean formula and asked whether there exists a satisfying assignment for the formula. The PCP theorem says that SAT is NP-hard to approximate to within a constant factor. Here we look at a generalization of SAT called k-stochastic SAT, where the variables are either chosen uniformly at random or existentially, and there are k alternations between these two. The value of such a formula is the expected fraction of satisfied clauses when the existential variables are chosen so as to maximize this expected fraction. We show that for every k, distinguishing whether a k-stochastic SAT instance has value 1 or value at most 1 - 1/O(k) is hard for k-round interactive proofs. If you plug in k = 1, you basically get the PCP theorem from this. And we improved this gap in subsequent work.
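To make the value of such a formula concrete, here is a brute-force computation on a toy one-alternation instance. The instance and the code are my own illustration, not from the paper, and I assume the random block is sampled first with the existential block then chosen to maximize the satisfied fraction.

```python
import itertools

def frac_satisfied(clauses, assignment):
    """Fraction of satisfied clauses; a literal (v, b) is satisfied when assignment[v] == b."""
    return sum(any(assignment[v] == b for v, b in cl) for cl in clauses) / len(clauses)

def stochastic_sat_value(clauses, random_vars, exist_vars):
    """Expected (over uniform random_vars) best-achievable satisfied fraction."""
    total = 0.0
    for rbits in itertools.product([0, 1], repeat=len(random_vars)):
        fixed = dict(zip(random_vars, rbits))
        total += max(
            frac_satisfied(clauses, {**fixed, **dict(zip(exist_vars, ebits))})
            for ebits in itertools.product([0, 1], repeat=len(exist_vars))
        )
    return total / 2 ** len(random_vars)

# (r0 or e0) and (not r0 or not e0) and (r0): when r0 = 1 the existential
# player satisfies everything with e0 = 0; when r0 = 0 it can reach only 2/3.
clauses = [[('r0', 1), ('e0', 1)], [('r0', 0), ('e0', 0)], [('r0', 1)]]
value = stochastic_sat_value(clauses, ['r0'], ['e0'])  # (1 + 2/3) / 2 = 5/6
```

With zero alternations of randomness this collapses to plain MAX-SAT, which is the k = 1 connection to the PCP theorem mentioned above.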
And our last application is a commit-and-prove SNARK in the random oracle model. This is not a direct application of our main theorem, but of tools that we develop along the way. So what is a commit-and-prove SNARK? Well, we have a prover and a verifier, and along comes some committer that has a value x1 in mind, and it sends a short commitment to x1; maybe another committer comes along, and then another, each with a different value. A prover who knows the values that the committers put under the commitments sends a short proof that together they belong to some relation, and the verifier reads these commitments and the proof and decides whether it's convinced that the prover is telling the truth about them belonging to the relation. We show that every relation on k-tuples that is decidable in nondeterministic time t has a commit-and-prove SNARK in the random oracle model, with argument size polynomial in the security parameter, in the number of committers, and in log t. The only assumption we use is the random oracle model; no further assumptions are required.

Okay, so let's get back to our main theorem. We want to go from an IP all the way to an IOP, and we're going to do this in two steps. In the first step, we get local access to the verifier messages: we only have local access to the verifier messages, but we're still allowed to read the prover messages in full. In the second step, we also encode the prover messages and thereby get our full IOP.

So let's look at the first step. Assume for now that we have a one-round proof that is also public coin. Let's denote the verifier's randomness by ρ, a string of r bits, and assume that we have a soundness error β that is roughly 1/r; we can get this easily with standard parallel repetition of interactive proofs.
So the proof looks as follows: the verifier sends some random string ρ, the prover answers with some message a, and the verifier looks at ρ and a and decides to accept or reject based on them. Here we want local access to this string ρ, while we're still allowed to read anything that the prover sends. So let's use a simple idea: have the prover send back the same value that the verifier sent it, that is, echo the same randomness. Now it sends back the same randomness, but of course we can't really trust the prover; it might send us some different random string ρ'. So we're going to add a check that they're equal, and this check has to be local: we choose an index and just check that ρ and ρ' agree on this index. A very simple check, and that's it.

So let's analyze this protocol. Completeness is rather straightforward, so I'm not going to get into it; let's look at soundness. Fix some constant δ. If ρ and ρ' are δ-far apart, then we reject with probability at least δ, because as long as we hit an index on which ρ and ρ' disagree, the verifier rejects, and this is good for us: constant soundness is good for us. So what about if they're close? If ρ and ρ' are δ-close, we don't have anything to fix this. So let's just hope that there are not many such ρ's that are close to a ρ' for which the prover has an accepting strategy. How many ρ' are there for which the prover has an accepting strategy?
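A minimal simulation of this echo-and-spot-check idea (my own toy sketch; the parameters r and delta and the cheating strategy are illustrative assumptions, not from the talk):

```python
import random

def spot_check(rho, rho_prime):
    """The verifier's local test: compare rho and the echoed rho_prime at one random index."""
    i = random.randrange(len(rho))
    return rho[i] == rho_prime[i]

def rejection_rate(rho, rho_prime, trials=20000):
    """Empirical probability that a single spot check catches a mismatch."""
    return sum(not spot_check(rho, rho_prime) for _ in range(trials)) / trials

random.seed(0)
r = 1000
rho = [random.randint(0, 1) for _ in range(r)]

# A cheating prover that echoes a string differing from rho on a delta
# fraction of indices is caught with probability about delta per check.
delta = 0.2
rho_far = list(rho)
for i in random.sample(range(r), int(delta * r)):
    rho_far[i] ^= 1

honest_rate = rejection_rate(rho, rho)        # identical strings: never caught
cheating_rate = rejection_rate(rho, rho_far)  # about delta
```

This only certifies the "far" case of the analysis; the "close" case is exactly the gap the talk addresses next.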
Well, there are exactly β · 2^r of them, where β is the soundness error. Now let's look at all the bad ρ's: the bad choices of ρ are the ones that are close to such a ρ', which means they lie within a Hamming ball of radius δ · r around it, and this ball has size 2^(H(δ) · r), where H is the binary entropy function; just think of δ as some small constant. The problem here is that we have to take this Hamming ball around each one of the bad ρ', and together these balls easily cover the entire domain. So unfortunately we can't just use ρ'.

So what do we do? Well, our main observation is that if ρ and ρ' are close, then ρ' has to have high entropy. This is because ρ is a uniformly random string and ρ' agrees with it on most of its locations, so it must be borrowing most of the entropy that ρ had. The idea is to extract randomness from ρ': we didn't really need ρ itself to be the randomness, we just needed some random string to play the role of the verifier's randomness. And when I say extract, I mean using an extractor, which is a function that receives a high-entropy source and outputs an almost uniform string.
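The counting behind the failure of the naive approach can be checked numerically (a sketch with illustrative parameters; the concrete r, β, δ are my own choices): the union of Hamming balls of radius δ·r around the β·2^r bad strings has log-size log₂β + r + H(δ)·r, which exceeds r, the log-size of the whole domain.

```python
from math import log2

def H(d):
    """Binary entropy function."""
    return -d * log2(d) - (1 - d) * log2(1 - d)

r = 1000          # randomness length
beta = 1 / r      # soundness error after parallel repetition
delta = 0.05      # a small constant distance parameter

log_bad = log2(beta) + r        # log2 of the number of bad rho' (beta * 2^r)
log_ball = H(delta) * r         # log2 of a Hamming ball of radius delta * r
log_union = log_bad + log_ball  # union bound on the strings the balls cover

# H(delta)*r dwarfs log2(1/beta), so the union bound exceeds the whole
# domain of size 2^r: the balls can easily cover everything.
excess = log_union - r          # = H(delta)*r - log2(1/beta) > 0
```

The seeded-extractor fix described next works precisely because it shrinks the ball radius from δ·r down to roughly δ·log(1/β), flipping the sign of this excess.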
So let's do that: we extract randomness out of ρ', get some ρ*, and use that instead. And this protocol would work assuming a small extraction error, which I'm going to ignore for this talk. But unfortunately there are no deterministic extractors; extractors of this kind just don't exist. We actually need a random seed as well, and the prover needs to know this seed in order to answer with the correct ρ*. We're going to use a very good extractor that has seed length something like log(1/β), and also a small error, which again I'm going to keep ignoring.

Great, so this protocol works, but we're back to the same problem we had before: we're reading only a few bits of ρ', but now we have to read all of the seed s. The point is that s is much, much shorter than ρ used to be. So let's take the same strategy that we used before and try it again. Now the prover is supposed to return the same seed, call it s', and we choose one index to check whether s and s' agree on that index, just like we did before with ρ.

So let's analyze this. Again fix a constant δ, and assume that ρ' has high entropy; we've already argued that if it doesn't have high entropy, then we reject with constant probability. If s and s' are far apart, then of course we reject with high probability. If they're close, we run the same analysis as before. Now the bad choices of s' are those for which, after applying the extractor, the prover has some accepting strategy, and the number of such bad s' is roughly the same as in the original protocol, up to the extractor error that I keep ignoring. In the new protocol, a bad choice of seed is one that is close to such a bad s', and now the Hamming ball has radius something like δ · O(log(1/β)), which is much smaller. The number of strings in it comes out to something like 1/√β, as long as δ is a small enough constant, just to kill the constant in the O(log(1/β)). So now the ball is very small, and if we take all the bad s' and the balls around each one of them, we get that they cover at most a √β fraction of the domain, which is good for us, because if β is some constant then √β is some other constant.

Okay, great. So we've managed to do this for one round, and I claim that it works for any interactive proof. We extend it to multiple rounds by applying the transformation again and again, separately for each round. It's exactly the same, except that we use round-by-round soundness in the analysis rather than standard soundness.

Okay, so again, we wanted to go from an IP to an IOP, and we went through this intermediate step of getting local access to the verifier messages while still reading the entire message that the prover sends. Now we're going to do the next step: we have this intermediate object, and we're going to turn it into a full-fledged IOP. To do so, we introduce a new object called an index-decodable PCP. An index-decodable PCP has four algorithms: an indexer, a prover, a verifier, and a decoder. It works as follows. We have the prover and the verifier, and along comes some indexer that has a value x1 in mind and sends an encoding of x1; then along comes maybe another one, which sends an encoding of some x2; and let's say another one. A prover who knows the values x1 to x3 under the encodings sends a proof that they all belong to some relation. The verifier might toss some coins, then reads a few symbols from each one of the oracles and decides whether to accept or reject the prover's claim that the values underneath the encodings belong to this relation. So what do we require from this object? We require completeness: if everyone is telling the truth, everyone is acting honestly, then the verifier should always accept. And on the other
hand, if the verifier accepts, then there should be something interesting going on underneath these encodings. Formally, we say that if the verifier accepts with high probability, then we can individually decode each one of the messages sent by the indexers so that they belong to the relation. So if the relation is non-trivial, this means that there has to be something interesting going on underneath the encodings.

We show that any relation R on k-tuples that is decidable in nondeterministic time t has an index-decodable PCP where the overhead of the indexer's encoding is linear, the proof length is poly(t), we have a binary alphabet, the verifier makes a constant number of queries to each one of these oracles, and the decodability bound (the bound above which, if the verifier accepts, decoding these messages out of the encodings is possible) is some constant.

So how do we use this? We have this intermediate object where the verifier reads a few bits from its own messages but still reads the whole prover messages. What we'll do is just encode each one of the prover's messages with the indexer, and finally send a proof that, had the verifier read the messages underneath the encodings, it would have accepted. The verifier checks this, which requires reading only a few bits from each one of these messages. I'm not going to get into the proof itself right now, but at a very high level: if a malicious prover were able to convince the verifier with high probability, then by the decodability of the index-decodable PCP we know that there have to be some messages underlying the prover's proofs that actually convince the verifier, and we can use that to attack the original proof.

So again, we wanted to go from an IP to an IOP. We did this in two steps: first we got local access just to the verifier messages, then we added the ability to have local access to the prover messages. And I'll just leave you with one open problem. In this work we show how to transform a k-round IP into a k-round IOP with O(k) queries, because we had a constant number of queries in each one of the rounds. Subsequently, we've been able to lower the number of queries to k/log n, with a lower bound of a constant, of course. It's still open whether we can get from a k-round IP to a k-round IOP with a constant number of queries overall. And that's it; I'll be happy to answer any questions if you have any.

Any questions from the audience? Do we have any questions? Okay, so I'm curious about the application to commit-and-prove SNARKs. You mentioned the parameters; could you elaborate a little bit on how those parameters compare with other commit-and-prove SNARKs? Well, other commit-and-prove SNARKs are not in the random oracle model; ours are not as good as commit-and-prove SNARKs from specific assumptions, but we use only the random oracle model. Okay, thank you. So let's thank the speaker again.