Hello everyone, my name is Eliad Tsfadia, and I'm going to talk about a tight parallel repetition theorem for partially simulatable arguments. This is joint work with Itay Berman and Iftach Haitner. Maybe it's not clear from the title, but this work is actually about hardness amplification, which is one of the most challenging and exciting topics in computation. In general it asks: can we turn a weakly hard primitive, say a weakly secure algorithm or protocol, into a strongly secure one, while preserving other desired properties of the original primitive, for instance efficiency? There is a vast number of examples from different fields: in cryptography, weak to strong one-way functions or puzzles; in complexity theory, worst-case to average-case reductions for functions and languages; and also in interactive protocols, reducing the soundness error of interactive proofs, single-prover or multi-prover, and also of interactive arguments, which are the focus of this talk, also known as computationally sound proofs, where the soundness guarantee only holds against computationally bounded provers. So, let me briefly remind you what interactive arguments are. In an interactive argument, we have a prover P, a verifier V, and usually a statement x that the prover wants to prove. They interact with each other, and at the end the verifier either accepts or rejects. Such a protocol has soundness error beta if, on an invalid statement x, no matter what efficient strategy the cheating prover P* chooses, it cannot convince the verifier to accept with probability more than beta. Usually we aim for a small beta, say negligible, but sometimes we have a protocol with beta equal to one half, or maybe something only slightly less than one, and we want to reduce it by amplification.
So, apart from being an important type of interactive proof system, interactive arguments are important because they give us a paradigm to capture the security of many cryptographic primitives, because the security of these primitives can be cast as the soundness of related arguments. Let's see an example of that for a statistically hiding commitment scheme. In such a scheme, we have a sender S, a receiver R, and the sender has a message m. In the commit stage, the sender commits to the message, and in the reveal stage, it decommits. Usually, in such a scheme, we have a binding guarantee: an efficient S* cannot decommit to two different values, except with some small probability. So how can we see binding as soundness? We can think of S* as the prover and R as the verifier, where the verifier accepts on seeing two different decommitments, and the soundness error is exactly the probability that S* can decommit to two different values. So, in this work, we are interested in amplification of interactive arguments, where the main goal is to find a generic transformation that reduces the soundness error of any argument. Let's say we started with an argument (P, V) that has soundness error 1 − ε, i.e., almost no soundness guarantee. We want to turn it into a similar argument with negligible soundness error, and we want to do it using a generic transformation, since otherwise, each time we have a new argument, we need to come up with new methods to amplify its soundness. If we have a generic method, then automatically we can improve the security of many cryptographic primitives, for instance the binding guarantee of a statistically hiding commitment. The most natural way of doing such a generic amplification is by repetition: hopefully, if we repeat the execution many times, then for the prover, succeeding in all the executions will be harder. So, the first option is to do a sequential repetition.
We repeat the execution n times sequentially, one after the other with independent coins, and the verifier accepts if all the sub-verifiers in each of the executions do. This automatically reduces the soundness error to beta to the power of n. But the problem is that it blows up the round complexity: now the number of interaction rounds is n times larger. The alternative way of doing repetition is parallel repetition, where in each interaction round, each party sends n messages that correspond to the n executions. Again, the verifier accepts if all the sub-verifiers do, and in this way we preserve the round complexity, so this looks great, very promising. But does it improve the security? Very surprisingly, it turns out that not in general. There are concrete examples of arguments that have soundness error, say, one half; you repeat them a million times in parallel, and the soundness error remains one half. Why is this important? Let's go back to the statistically hiding commitment example, and now suppose we have a very weak binding guarantee: S* can decommit to two different values with probability 1 − ε. The amplification will be that S commits to the same value many times in parallel; hopefully, decommitting all of these commitments to two different values will be much harder. But by how much, if at all, is the binding improved by this idea? It turns out that if we don't know exactly what happens inside the scheme, we cannot answer this question, because, as we said, parallel repetition might not reduce the soundness of an arbitrary argument. On the positive side, parallel repetition does improve the soundness in some special cases, for instance three-message arguments and public-coin arguments, both of them at an optimal rate.
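To make the ideal rate concrete, here is a tiny Monte Carlo sketch. It is a toy model of my own, not a real argument system: it just assumes a cheating prover wins each of n independent executions with probability beta, in which case the chance of winning all of them is beta to the n. As noted above, parallel repetition of general arguments need not behave this way.

```python
import random

def win_all(n, beta, rng):
    """One trial: does the prover win all n independent executions?"""
    return all(rng.random() < beta for _ in range(n))

def estimate_win_all(n, beta, trials=200_000, seed=0):
    """Monte Carlo estimate of the probability of winning all n runs."""
    rng = random.Random(seed)
    return sum(win_all(n, beta, rng) for _ in range(trials)) / trials

# With beta = 1/2 and n = 3, the ideal rate is (1/2)**3 = 0.125.
print(estimate_win_all(3, 0.5))
```

The counterexamples mentioned above are exactly cases where the n executions fail to behave independently, so this beta-to-the-n intuition breaks down.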
Also, the random-terminating variant of any argument, which we will talk about in the next slide: combined with parallel repetition, this is the only unconditional method to amplify the soundness of arbitrary arguments in a black-box manner, and in particular the binding of a statistically hiding commitment. Also, partially simulatable arguments, which are an extension of random-terminating arguments, capture a property shared by all known arguments for which parallel repetition does improve the soundness. There is another method, using fully homomorphic encryption, that actually makes parallel repetition reduce the soundness at an optimal rate. But the downside is that it is conditional, it only works assuming, say, that LWE is hard, and it is non-black-box in the protocol: we need to compile it gate by gate, and therefore it is inherently inefficient. So what is the random-terminating variant of an argument? Suppose we have an argument (P, V) with m rounds, and now we slightly change the verifier such that in each interaction round, it flips a coin which is one with probability 1/m. If the coin is one, it says to the prover: okay, I accept, we don't need to continue the interaction. Otherwise, it continues to the next round, flips the coin again, and so on until the end. If none of the coins were one, then it just acts like the original V and accepts if V does. Okay, so this looks like a very stupid thing to do, because now the prover has a better chance to win. Basically, if we start with an argument with a very weak soundness guarantee, say soundness error 1 − ε, then the soundness error of this random-terminating variant, roughly 1 − ε/4, is essentially the same. But what's interesting about this random-terminating variant is that no matter what argument you started from, if you look at its random-terminating variant and repeat it in parallel, then Haitner showed that the soundness error actually decreases at an exponential rate.
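The random-terminating verifier described above can be sketched as a simple wrapper. This is my own illustrative sketch, not the paper's formalism: `original_round` is a stand-in for whatever per-round check the original verifier performs.

```python
import random

def random_terminating_verifier(m, original_round, rng=None):
    """Run an m-round verifier, but before each round flip a coin that
    is 1 with probability 1/m; if it fires, abort and accept.
    `original_round(i)` returns True iff the original verifier is
    still satisfied after round i (illustrative simplification)."""
    rng = rng or random.Random()
    for i in range(m):
        if rng.random() < 1.0 / m:
            return True  # early termination: accept unconditionally
        if not original_round(i):
            return False  # the original verifier rejects
    return True  # no coin fired: behave exactly like the original V
```

Note why the soundness gets weaker: even a prover that always fails the original checks is accepted whenever some coin fires, which happens with probability 1 − (1 − 1/m)^m, roughly a constant.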
Which is great, but the only problem is that it is still a factor of m⁴/ε in the exponent away from the true decrease rate of (1 − ε)ⁿ. So after that work, it was still left open whether there is a better generic method to amplify soundness. Maybe this method achieves a much stronger decrease rate, and Haitner's analysis is just not tight. Our result is that this is indeed the case: Haitner's analysis is not tight, and the soundness error, when you take an argument, look at its random-terminating variant and repeat it in parallel, actually decreases at a much stronger exponential rate of (1 − ε)^(n/m), only a factor of m away from the ideal rate. This is a significant improvement over Haitner's analysis, and it immediately yields a tight bound for many specific amplifications, for instance statistically hiding commitments. It also generalizes to partially simulatable arguments, which we will talk about next. Our second result is a matching lower bound. We show an m-round argument (P, V) with soundness error 1 − ε such that, if you look at its random-terminating variant and repeat it in parallel n times, then there exists an attacker Pⁿ* that can convince all n random-terminating verifiers to accept with probability roughly (1 − ε)^(n/m), which means that we cannot hope that this random-termination approach will achieve anything better than n/m in the exponent; this m factor is really essential. So how do we prove a parallel repetition theorem for arguments? Let's assume we have some argument (P, V) with soundness error 1 − ε, and now assume that there exists an n-instance cheating prover Pⁿ* that can convince the n-instance verifier Vⁿ to accept with probability larger than ε′, where you can think of ε′ as (1 − ε) to whatever power you want to prove.
The task is somehow to use this Pⁿ* to build a single-instance cheating prover P* that convinces the single-instance verifier to accept with probability larger than 1 − ε, in contradiction to the soundness guarantee. This is hard, because somehow we need to take this very weak adversary Pⁿ*, which only wins with this very tiny probability ε′, which is just (1 − ε) to the power of, let's say, a million, and somehow transform it into a single-instance prover that wins with probability almost one. Okay, so how are we going to do it? We have this Pⁿ* in our hands; this is the usual setup, and previous works do the same. We somehow need to emulate an execution with this Pⁿ*, but we are playing against a single verifier. So we are going to choose a random position j for the real verifier we are actually playing against. Now the picture looks like this: we are P*, we have this Pⁿ* in our hands, we are going to emulate the other verifiers V^(−j), and the real verifier V is going to take the role of V_j. When the real verifier V sends a message, we treat it as if V_j sent a message to the prover. We somehow choose messages for the emulated verifiers, the other verifiers, we feed all these messages into this box Pⁿ*, we get the answers, and we forward the j-th answer back to V. The only thing I didn't describe here, and this is the main issue, is how to choose the messages for the emulated verifiers. Again, our goal is to choose the messages for the emulated verifiers such that at the end of this simulation, Pⁿ* wins against all verifiers with high probability, and in particular against the real verifier, which is V_j. So in the first round, the best thing we can hope for, given the message of the real verifier, is to sample a random winning execution that is consistent with this message a₁ʲ.
Meaning that we sample a random execution conditioned on Pⁿ* winning against all verifiers. Once we have such a random winning transcript, we set the first messages of the emulated verifiers accordingly. Basically, we hope to do the same thing in each round: given the previous messages and the i-th message of the real verifier, we want to sample a random winning execution consistent with the previous messages and this message, and given this random winning execution, we set the messages of the emulated verifiers accordingly. So first of all, is this a good attack? It turns out that yes, this is a good attack: if you can implement it, then in fact parallel repetition reduces the soundness at the optimal rate of (1 − ε)ⁿ. But can it be carried out efficiently? This is the main issue: not always, and now we are going to see an example. Let's first see a case where we can implement this attack: when the protocol is public coin. In a public-coin protocol, when the verifier sends a message, it is just random coins, so we know its entire state. So what we can do is emulate a random continuation of the entire execution until the end, where we also emulate the real verifier V. At the end, we check whether, in this random continuation, Pⁿ* wins against all verifiers or not. If it wins, then we take the first messages of the other verifiers accordingly. Otherwise, we repeat the process again and again, doing random continuations until Pⁿ* wins against all verifiers, which eventually will happen, assuming that Pⁿ* wins with not-too-small probability. This is great, but in general this process is inefficient. For instance, consider an argument in which the verifier in the first message commits to some bit b, and in the second message it decommits.
So now, when we want to do a random continuation, we also need to emulate the second message of the verifier, and in the second message we need to emulate the decommitment, but we cannot do that because we don't know the internal state of the verifier. This is the main problem. But for random-terminating arguments, we can do something very interesting. Given the first message of the verifier, which maybe committed to something, what we can do is a random continuation conditioned on the 1/m-probability event that the verifier aborts in the next round. Given that the verifier aborts, we don't need to decommit, we don't need to do things that we cannot do; the verifier is now out of the game, and we only need to emulate the other verifiers until the end, and then we repeat the same process. But this means that we are actually going to sample a random continuation conditioned not just on winning, but on winning and on the verifier aborting in the next round. This is not exactly the distribution that we want; we really skewed the distribution, but this is the only thing we can do, and this is why parallel repetition in that case does not work exactly as we expect. In general, we can generalize this property to delta-simulatable protocols, in which, given the message of the verifier, there is some event over the random coins of the verifier, with probability at least delta, such that if we condition on this event, then we can do the emulation of the verifier until the end. The event, for example, is aborting in the next round. There is a special case of such protocols, which we call prefix-simulatable, in which the event is determined by the coins of the next round, like random termination, and we will talk about them in the slide about the main result. So how do we analyze the success probability of this attack?
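The "random continuation until Pⁿ* wins" loop from the public-coin case, and its conditioned variant for random-terminating verifiers, both come down to a rejection-sampling step, sketched here with illustrative names of my own (`try_continuation` is a stand-in for the actual emulation procedure):

```python
def sample_winning_continuation(try_continuation, max_tries=100_000):
    """Rejection sampling: repeatedly draw a fresh random continuation
    until one is found in which the emulated n-instance prover wins
    against all verifiers; return that continuation's transcript.
    `try_continuation()` -> (wins_all: bool, transcript)."""
    for _ in range(max_tries):
        wins_all, transcript = try_continuation()
        if wins_all:
            return transcript
    # If the prover wins with not-too-small probability, we are very
    # unlikely to exhaust the budget.
    raise RuntimeError("no winning continuation found")
```

In the random-terminating case, `try_continuation` would additionally condition on the real verifier aborting in the next round, which is exactly the skewing of the sampled distribution described above.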
This is basically done by considering two distributions, attack and win. Attack is the distribution of all the messages of all the verifiers induced by the emulated attack that we just saw, and win is the distribution of a random winning transcript, one in which all the verifiers accept. For our goal, it suffices to show that small events in win map to small events in attack, because if we can show that, then we can consider the event that all verifiers accept: it happens with probability one in win by definition, so it will also happen with probability very close to one in attack, and then we are done. So how do we show that small events map to small events? We can show it, for example, by bounding the statistical distance, and indeed this is what was done by Haitner and by Håstad et al. The problem is that statistical distance is too rough a measure: it guarantees similar probability for every event, not just for small ones, and this too-strong guarantee is inherently non-tight for small events. It also has no chain rule, so for iterative processes we need to use a wasteful hybrid argument. A different option is to bound the KL divergence of win from attack. This is actually what was done by Chung and Pass, at least in the public-coin case, where the random continuation is not skewed by some strange conditioning. They showed that the KL divergence is indeed small, and this yields the tight bound, at least in that case. So what is KL divergence, and why is it so useful in our case? For the formal definition: the KL divergence of two distributions P and Q is just the expected log ratio, where we take the expectation over the left-hand-side distribution P. So it is an asymmetric distance measure, and it is actually more subtle than statistical distance: it is really tailored for small events, it gives a weaker guarantee for higher-probability events, and therefore it has the potential to be tighter on small events.
It also has a chain rule, which is useful when dealing with iterative processes, as in our case. As we mentioned, it is asymmetric, which is great when the distribution P is much simpler than the distribution Q; in our case the win distribution is much simpler than attack. The problem is that KL divergence is a bit fragile, and really small events might have a huge effect on the resulting divergence. And indeed, in our case, the KL divergence of win from attack might be infinite. Let's see an example of that. Suppose we have a two-round argument, and we have an n-instance attacker Pⁿ* that wins if either (A) no verifier aborts in the second round, or (B) the very tiny event that the first-round messages of all verifiers are equal. Since the probability of A is small but much larger than the probability of B, effectively the win distribution is just a transcript conditioned on the event A, and therefore the first-round messages under win are almost independent; while under attack, when we condition on some verifier aborting in the second round, we effectively condition on the event B, and therefore the first-round messages in attack are all equal. Therefore the KL divergence between the first-round messages of win and attack is infinite in this case. So it seems that KL divergence is not the right choice of distance measure. Maybe we are looking for a different, more robust measure that has the properties we need: we can bound the distance between win and attack under this measure, it is possible to bound it using some machinery for iterative processes, and it has the property we want, that if it is small then small events in P map to small events in Q. KL divergence seems to have many of these useful properties, so maybe we can think of a relaxation of KL divergence that gives us exactly what we need.
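The blow-up from tiny events is easy to see numerically. The following toy computation is my own example, not the talk's win/attack distributions: P puts mass on an outcome that Q never produces, so D(P‖Q) is infinite even though the reverse direction is finite.

```python
import math

def kl(p, q):
    """KL divergence D(P || Q) = sum_x P(x) * log(P(x) / Q(x)).
    Blows up to infinity if P puts mass where Q has none."""
    total = 0.0
    for px, qx in zip(p, q):
        if px == 0:
            continue  # 0 * log(0 / qx) contributes nothing
        if qx == 0:
            return math.inf  # the "really small event" blow-up
        total += px * math.log(px / qx)
    return total

P = [0.5, 0.5]
Q = [1.0, 0.0]
print(kl(P, Q))  # inf: P hits an outcome with zero mass under Q
print(kl(Q, P))  # log 2, about 0.693: the reverse direction is finite
```

This also illustrates the asymmetry mentioned above: the expectation is taken over the left-hand distribution, so only mass of P landing outside the support of Q is catastrophic.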
A naive attempt at such a relaxation is to take, let's say, the infimum of the KL divergence of P′ from Q′, where P′ is statistically close to P and Q′ is statistically close to Q. In particular, you can think of Q′ as Q conditioned on some event E, where the probability of E is high under Q. The problem is that finding a suitable Q′ requires a very good understanding of Q: for example, it requires determining such a good event E with high probability over Q. In our case, Q is the attack distribution, which is very complicated. So even if we could determine such a good event E that makes the KL divergence small, bounding its probability under Q is very hard; it is even harder than directly bounding the probability that all verifiers accept under Q, which is our main goal. So the main problem with this relaxation is that it does not exploit the asymmetric nature of KL divergence. We somehow want to bound only events under the nice distribution P, and not under Q. Is it even meaningful to bound the probability of E under P? Somehow we want a measure that only requires us to bound events under the nice distribution, and then everything will work. Therefore we came up with a different relaxation of KL divergence, which we call smooth KL divergence. The α-smooth KL divergence of two distributions P and Q over some universe U is just the infimum of the KL divergence of F(P) from G(Q), where the infimum is taken over all randomized functions F and G that satisfy the following properties. First, a technical property: F and G are either the identity or output something outside of the universe, say ⊥, at some places. Second, and more importantly, we want F to be the identity under P with high probability. This is an asymmetric requirement, which is very important: we do not require the similar requirement over Q.
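The definition just described can be written out as follows. This is my own reconstruction of the notation from the spoken description; the paper's exact formulation may differ in details.

```latex
% alpha-smooth KL divergence of P and Q over a universe U:
\[
  D^{\alpha}\!\left(P \,\middle\|\, Q\right)
  \;=\;
  \inf_{F,\,G}\; D\!\left(F(P) \,\middle\|\, G(Q)\right),
\]
% where the infimum is over randomized functions F, G such that
% (i)  for every x in U, each of F(x), G(x) is either x itself or a
%      symbol \bot outside the universe, and
% (ii) \Pr_{x \sim P}\left[F(x) = x\right] \ge 1 - \alpha.
% Note the asymmetry: no identity requirement is placed on G under Q.
```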
The key properties of this distance measure: first, it gives us what we need, namely that if we can show the smooth KL divergence of P and Q is small for small α, then automatically small events in P map to small events in Q. And the second key property of this relaxation is that it is really tailored for iterative processes. Now let's see how we use it. Suppose we would like to bound the smooth KL divergence of two distributions P and Q over X₁, ..., X_m for some small α, and assume that we have events E₁, ..., E_m over Q. We can think of each E_i as a good event over the X_i part, such that if we condition Q on it, then we can bound the KL divergence of P_{X_i} from Q_{X_i}. Now let's consider the following, rather strange, experiment. We sample X₁, ..., X_m according to P, and then we associate Bernoulli coins Ẽ₁, ..., Ẽ_m, where each Ẽ_i is a coin that is one with the following probability: the probability that E_i occurs under Q, conditioned on E₁, ..., E_{i−1} and on this fixing of X₁, ..., X_{i−1}. So even though we took X₁, ..., X_m from P, we treat it as if we are in the middle of the Q process, where this is the fixing of the first i − 1 coordinates, and we ask what is the probability of this E_i; with this probability we flip the coin Ẽ_i. If we can show that, under this strange experiment, the probability that all the coins are one is high, then we can actually bound the smooth KL divergence of P and Q by the sum of all the KL divergences of P_{X_i} from Q_{X_i}, where we condition the X_i on the good events E₁, ..., E_i. What is written here is what we call a conditional KL divergence; it just means that we also condition on X₁, ..., X_{i−1} taken from this distribution. So how do we use this nice fact? In our case, there exists a distribution win′ which is very close to win.
And there are events E₁, ..., E_m, very tailored events over attack, such that for every i we really can bound the KL divergence of the X_i part of win′ from attack when we condition attack on these good events. We also show that, in this strange experiment, the probability that all the coins are one under win is high. This is not easy, but at least we can do it, and it immediately yields that the smooth KL divergence of win from attack is small. What is important about everything I described, and what emphasizes the power of smooth KL divergence, is that we never needed to bound the probability of E₁, ..., E_m under attack at any step. We don't know how to do that directly, and somehow, using smooth KL divergence, we overcome this difficulty by only bounding the probability that all the coins are one under win, which is a much simpler distribution. Summarizing our main result: we show that the α-smooth KL divergence of win from attack is smaller than α for this choice of α, where the δ here comes from δ-simulatability and ε′ is the probability with which we assumed Pⁿ* wins against all the verifiers. This immediately yields that ε′ must be small, since otherwise win and attack would be so close under this distance measure that the attack would be too good. In the prefix δ-simulatable case that we mentioned previously, we can do an optimization and save a factor of m, and in that case we get a better decrease rate which is tight, and which also covers random-terminating arguments, since they are prefix 1/m-simulatable. So, for open questions: first, there is still the issue of closing the gap for non-prefix-simulatable arguments, where there is still a factor of 1/m in the exponent.
But more importantly, can we come up with other generic transformations for amplifying the soundness of interactive arguments? We mentioned the fully homomorphic encryption method, and we also saw that random termination cannot achieve anything better than n/m in the exponent. But maybe you can find other methods, perhaps under other, weaker computational assumptions. And that's it. Thank you everyone for listening, and goodbye.