Hi, everyone. I'm Michael. In this video, I'll talk about my work on expected polynomial runtime in cryptography. Our main focus is zero-knowledge proof systems.

So, a quick recap. We have a prover and a verifier and an NP language, where the prover has a statement and a witness, and the verifier wants to be convinced that x lies in the language. They interact, and in the end the verifier outputs its verdict, whether it accepts the claim or not. Zero knowledge says that the verifier cannot learn anything from this interaction, and it's usually formulated as existential zero knowledge, which says that for every verifier there exists a simulator such that the output of the verifier in the real protocol and the output of the simulator are indistinguishable, and the simulator does not get the witness as input. A stronger notion of zero knowledge is universal zero knowledge; here the order of the quantifiers is swapped, so the simulator is universal: for all adversaries there exists one simulator, and it is given the code of the adversary, so it works for any adversary. An even stronger notion of zero knowledge is black-box zero knowledge; here the simulator is universal and it does not even get to see the code of the adversary, it just gets access to the adversary as a black box.

I've glossed over one detail here, namely the running times. I used expected polynomial-time simulation here, but strict polynomial-time adversaries, and this actually induces an asymmetry. This kind of breaks the promise of zero knowledge, which states that everything the verifier learns it could simulate itself, because this verifier cannot run the simulator. You also run into composability issues, because after replacing an adversary by a simulator once, you now have an expected-time adversary which you cannot handle anymore, because we can only handle PPT adversaries here. So there are two ways out: we could require the simulator to
be PPT, or we could allow the adversary to be EPT, and we'll look at the latter.

With this choice, another implicit assumption becomes more apparent. Namely, with PPT adversaries one usually assumes that they are well behaved in any environment: there is this one polynomial bound, and it will hold no matter what machines the verifier interacts with. For expected polynomial-time verifiers, this is not so clear anymore, and, well, let's see why.

What setting do we look at now, exactly? So when do we want to look at expected-time simulation? The main reason is that strict polynomial-time simulation is impossible in the plain model for constant-round black-box zero knowledge with negligible soundness error, which is a result due to Barak and Lindell, and they also show that similar impossibilities hold for proofs of knowledge. The relaxation of adversaries, or rather the strengthening, to designated adversaries, which only need to be efficient in the protocol they are designed to attack, is very natural in the expected polynomial-time setting, but it turns out to be somewhat tricky to deal with. In some sense you do have to deal with this: you have to decide on a notion, and we go with this very natural one. In the rest of this talk we will mainly focus on expected polynomial-time verifiers, and the designated-verifier aspect comes naturally.

As our running example we will use the graph 3-coloring protocol. So, a quick reminder.
This is the classic graph 3-coloring protocol. We have a verifier and a prover, and the prover knows the graph and a 3-coloring. It will first randomize this 3-coloring, then commit to all the colors and send these commitments to the verifier. The verifier will choose a random challenge edge, and the prover will then open the commitments to the colors of these two nodes. The verifier will check that these commitments were correctly opened and that the colors are different, and if this is the case it will accept.

Now, this is a constant-round protocol, but it has a huge soundness error. To drive down the soundness error we would like to use parallel repetition, so that it remains constant-round, but then we lose zero knowledge. So we actually use a different protocol, the protocol by Goldreich and Kahan, which modifies the graph 3-coloring proof as follows. We choose the challenges as the very first step: the verifier commits to its challenge choices, and the rest is basically the same, except the verifier will open its challenge now and the prover will check this opening; otherwise the protocol is unchanged. With this we can now do an n-fold parallel repetition of this almost standard graph 3-coloring protocol, with challenges committed beforehand, and we get a constant-round protocol which has negligible soundness error. It can be shown that this is black-box zero-knowledge, but the proof that this is true is somewhat tricky, and we will see the reason in a minute. But before that, let's look at an alternative security proof which even holds for designated verifiers and expected-time verifiers. The proof is clean, it's simple, but it's wrong, and maybe you can spot the mistake already. So the proof is actually based on the naive simulator.
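Before looking at that simulator, the Goldreich-Kahan flow described above can be sketched in code. This is a toy model: the hash-based commitment and the graph representation are my own illustrative choices, not the construction from the talk.

```python
import hashlib
import os
import random

def commit(msg: bytes):
    """Toy hash-based commitment: com = H(r || msg). Illustrative only."""
    r = os.urandom(16)
    return hashlib.sha256(r + msg).hexdigest(), r

def check_open(com: str, r: bytes, msg: bytes) -> bool:
    return hashlib.sha256(r + msg).hexdigest() == com

def gk_round(edges, coloring):
    """One repetition of the Goldreich-Kahan variant of graph 3-coloring."""
    # Step 1: the verifier picks its challenge edge first and commits to it.
    challenge = random.choice(edges)
    ch_com, ch_r = commit(repr(challenge).encode())

    # Step 2: the prover randomizes the coloring and commits to every color.
    perm = random.sample(range(3), 3)
    colors = {v: perm[c] for v, c in coloring.items()}
    coms = {v: commit(bytes([c])) for v, c in colors.items()}

    # Step 3: the verifier opens its challenge; the prover checks the opening.
    if not check_open(ch_com, ch_r, repr(challenge).encode()):
        return False  # prover aborts on a bad opening
    u, w = challenge

    # Step 4: the prover opens the endpoint colors; the verifier checks them.
    return (check_open(*coms[u], bytes([colors[u]]))
            and check_open(*coms[w], bytes([colors[w]]))
            and colors[u] != colors[w])
```

For a triangle with a valid coloring, e.g. `gk_round([(0, 1), (1, 2), (2, 0)], {0: 0, 1: 1, 2: 2})`, the verifier accepts; with an all-equal coloring it always rejects.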
That's the reason it's simple: it's really the straightforward choice of simulator. This simulator will first run the verifier to receive the commitments to the challenges. It will then send garbage commitments, all to zero, just so that in the next step it receives openings of the verifier's challenges; and if the verifier fails in this step, well, the simulator can just abort like the honest prover would. So now we have openings to the committed challenges. The simulator will now rewind the verifier back to before it sent the garbage commitments, and now it will send a pseudo-coloring, that is, commitments to colors which answer the challenge query correctly but are otherwise just all zeros. It will rewind the verifier until it again answers with a valid opening of its commitments. If this valid opening happens to be different from the opening before, the simulator will abort, because now the binding property was broken and the simulator doesn't know what to do. Otherwise the simulator can just open its challenged colors, and it's fine, because it's a pseudo-coloring. Now the simulator can run the verifier to the end and output whatever the verifier outputs. So this is the naive simulator.

Let's have a look at the security proof. We use game hops; we start with the honest game, the honest execution, and in the very first step we introduce the rewinding. It's not hard to see that introducing the rewinding at most doubles the runtime, because we have one rewind in expectation. And note here that we do not compute garbage commitments, because the commitments and everything else is computed honestly here.
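The naive simulator just described can be sketched as a loop over a rewindable verifier. The interface of `V` and the two helper functions are hypothetical abstractions I introduce for illustration, not the paper's formalization.

```python
def naive_simulate(V, garbage_commitments, pseudo_coloring):
    """Sketch of the naive simulator for the Goldreich-Kahan protocol.

    V is a rewindable black-box verifier; garbage_commitments() builds
    all-zero commitments; pseudo_coloring(opening) builds commitments that
    answer exactly the challenge edges revealed by `opening`.
    """
    V.receive_challenge_commitments()        # step 1: V commits to its challenges
    state = V.save_state()                   # the point we will rewind to
    opening = V.send(garbage_commitments())  # step 2: send garbage commitments
    if opening is None:                      # V refuses to open: abort,
        return V.output()                    #   just like the honest prover would
    while True:                              # step 3: rewind until V opens again
        V.restore_state(state)
        opening2 = V.send(pseudo_coloring(opening))
        if opening2 is not None:
            break
    if opening2 != opening:                  # two different openings would
        raise RuntimeError("binding broken") #   break the binding property
    return V.output()                        # run V to the end, output its output
```

With an honest verifier, the rewinding loop terminates after one iteration in expectation, which is why introducing it at most doubles the runtime.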
We basically run the honest prover with all its inputs, except with fresh randomness. In the next step we abort if the second opening of the challenge is different from the first one. This is not a big change, because the probability that this happens can be reduced to the binding property, and the reduction is actually straightforward here. In the next step, we replace the first commitment with the garbage commitment the simulator uses; again, this is a straightforward reduction to the hiding property of the commitment scheme. And in the last step we do the same for the pseudo-coloring commitments, and here we use that we already know the challenge, but otherwise this is again a straightforward reduction. So nothing really happened here.

But something went wrong. Did you spot the mistake? The problem is, maybe unsurprisingly, that we have a runtime explosion. So let's have a look at a very simple adversary which makes the runtime explode. This is an example due to Feige, and the idea is to just run the honest verifier and, at the very end, with tiny probability, brute-force the prover's commitments. If this verifier sees a pseudo-coloring it will run forever; otherwise it will output some value d according to some distribution D. When running with the honest prover, this verifier is efficient, because the tiny probability of brute-forcing is not a problem for an expected polynomial-time verifier, and it will never see a pseudo-coloring. However, if we run this verifier with the simulator, then it might happen that the verifier breaks the commitment scheme and sees the pseudo-coloring, and in that case the simulator would run the verifier to its end, which is forever, and so the simulator itself would run forever.

Now there's an obvious solution to this: we could try to just truncate the verifier's execution. This is not fully black-box anymore, but it's a clear solution. Unfortunately it doesn't really work, because if this distribution D is not approximable in strict polynomial time, then we cannot truncate the
verifier to strict polynomial time. So at least the most obvious fix fails. But this can be salvaged, because Katz and Lindell show that using superpolynomial truncation and some more techniques, you can actually prove that the simulation works, and you can handle expected polynomial-time verifiers.

There's also another take on this. We could say that these designated adversaries are just not worth it, and we want the adversary to be better behaved; so we could require that the adversary be expected polynomial-time in any interaction. Maybe surprisingly, this also is not good enough, because Katz and Lindell show that, basically due to the rewinding, which happens in the simulation but never in the real protocol, you can have a verifier which makes the simulation runtime explode, but no real interaction does, even with arbitrary environments. Again, this can be salvaged, and this is what Goldreich does: he says that the adversary should be expected polynomial-time with respect to any reset attack. So basically, even if you try to make the verifier run very, very long, you will fail, because it is so well behaved that it will just never run for too long. This approach basically says that we do not want to deal with designated adversaries. So this is not the path we take; our path is, on a very high level, similar to that truncation idea.

Before we explain our take on this, we'll simplify the situation more, so that we can really see the core problems here. Consider this simplified situation where we just have an algorithm A which computes the identity by sampling a random string and, in any case, outputting x. This is clearly efficient.
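This algorithm A, together with the problematic variant B discussed next, can be written out like this; the 128-bit string length is an illustrative choice of mine.

```python
import secrets

N_BYTES = 16  # illustrative: a 128-bit random string

def A(x):
    """Identity: sample a random string, then output x in any case."""
    _ = secrets.token_bytes(N_BYTES)
    return x

def B(x):
    """Like A, but diverges if the sampled string is all-zero.

    The expected runtime is infinite, yet no polynomial number of
    black-box queries can tell B apart from the efficient A, since the
    bad event has probability 2**-128.
    """
    r = secrets.token_bytes(N_BYTES)
    while r == bytes(N_BYTES):
        pass  # loop forever on this negligible-probability event
    return x
```

Both `A(x)` and `B(x)` return `x` on essentially every run, which is exactly the indistinguishability-versus-efficiency tension described here.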
So there's nothing problematic here. But this variation B, which loops forever if this random string happens to be zero, is clearly not efficient, because its expected time is infinite, for example. But does it really make sense to say that this is not efficient? Even if we get access to these algorithms as black boxes, and we're told how long they run whenever we query them, we could not distinguish them, at least not with a polynomial number of tries. And this is the core problem: indistinguishability does not preserve efficiency. We actually saw that not even statistical indistinguishability preserves efficiency. This leads us to the question: if an algorithm is indistinguishable from efficient, isn't it efficient? And this is the idea which inspired our solution.

In a very abstract view, we can look at it as follows. We can say that a runtime class, so a set of runtimes which are by definition the efficient runtimes, is distinguishing-closed if any runtime which cannot be efficiently distinguished from efficient must also be efficient. Or, informally: for any runtime U, if there exists a runtime S which lies in T, and U and S are indistinguishable, so no efficient algorithm can distinguish U and S, then U must also lie in T, so U must also be efficient.

With this, we now turn to our relaxation of expected polynomial time, which is called computationally expected polynomial time (CEPT). First we define what we mean by expected polynomial time more precisely: in our setting, things only depend on the security parameter, and a runtime T is expected polynomial if its expectation is bounded by a polynomial in the security parameter. Computationally expected polynomial time is now defined as follows: a runtime T is CEPT if there exists an expected polynomial-time S such that T and S are indistinguishable. I have to remark two things here.
We actually use PPT indistinguishability instead of CEPT indistinguishability, but it turns out that these indistinguishability notions are equivalent anyway. And what we use here is not one-shot indistinguishability, where you just get one sample; we use repeated samples. So implicitly we mean here that T and S are indistinguishable given repeated samples, because otherwise it doesn't really make sense in the setting of algorithms which you want to run, and which you can run many times because you have access to the algorithm and you just want to make sure that it's efficient. So repeated samples are the natural choice here.

This definition might not look so nice, but we do have a characterization which shows that it's actually a rather neat definition. This characterization says that T is CEPT if, well, it satisfies the original definition, so there is an expected-time S such that T and S are computationally indistinguishable under repeated samples; equivalently, the same holds for statistical indistinguishability, again under repeated samples, which is denoted as PPT-query here. And we have a third characterization which is somewhat different: namely, there exists a set of good events which has overwhelming probability, and conditioned on this set of good events, the expectation of T is bounded by a polynomial. This good-event characterization is very useful for unconditional things, for example the introduction of rewinding, to see that it does not break CEPT, whereas the first point is very useful for indistinguishability hops.

We actually have a lemma which basically just restates the usual direct reduction in this designated-verifier setting with CEPT, and provides efficiency from indistinguishability in some sense. In this standard reduction we consider two oracles and a distinguisher D, and we suppose the distinguisher can distinguish O0 and O1 with advantage at least 1/poly. We also assume that the distinguisher is CEPT when running with O0. So we make no
assumption about O1, and when we say CEPT we only count the steps of the distinguisher; the oracles might not be efficient. Then this lemma says that there exists a strict PPT distinguisher A whose advantage is at least 1/(4 poly). Why does this imply efficiency from indistinguishability? Well, D could basically measure its own runtime and use this as a distinguishing characteristic. So we see that if D is CEPT, then we can replace it with a strict PPT adversary and still have non-negligible advantage. And actually the proof reflects this: the proof idea is to just truncate the distinguisher D, so that when running with O0, A and D have statistically close outputs, at most 1/(4 poly) apart. Now the problem is that the timeout probability could be very different when A is running with O1 versus with O0. But if it's too different, if it's at least 1/(4 poly) apart, then again we get a distinguisher, by just using the timeout event as the distinguishing characteristic. And otherwise it's easy to see that A still has 1/(4 poly) advantage. And now A is strict PPT by construction. So this is the core lemma one can use to replace direct reductions.

We will now apply this to the graph 3-coloring protocol. But first let's clarify what we mean by zero knowledge in this designated-adversary setting. Here, zero knowledge for a proof system (P, V) means we have a universal simulator, and we define these two oracles O0 and O1, where O0 is the honest interaction and O1 is the simulation. We require that O0 is indistinguishable from O1, which just means that, well, the simulator's output and the real output are indistinguishable. And we also require that O1 is efficient relative to O0, which just means that if the verifier is efficient, so if it's an efficient attack, then the simulation must also be efficient, which is the minimum requirement we need for efficiency of simulation. And if the attack is inefficient, we
don't really care whether the simulation is efficient. So this relative efficiency is something which comes very naturally in this designated-adversary setting. And we'll see that we actually do satisfy this, by going through the proof again.

Now we will have a look at the runtime, here in this column. We start with the honest protocol as usual; it is CEPT or EPT by assumption, because we want an efficient adversary. In the first game we introduce the rewinding; it can be seen that this preserves EPT and CEPT, and for CEPT we use the good-event characterization. In the second step we fixate the openings; here really nothing of interest happens to the runtime, so this is a very straightforward reduction. In the third step we use the hiding property to replace commitments by garbage commitments, and we know that EPT is not preserved, but CEPT is preserved, and to see this we can just use the standard reduction. Similarly in the next step, where we replace the colorings by pseudo-colorings, EPT is not preserved but CEPT is preserved. And what we see in this chain of reasoning: we started with an adversary which was efficient in the real protocol, and we ended with a simulation which was also efficient. So we actually showed that this works for designated CEPT adversaries.

Now, with the standard reduction we have provided the first and probably the most important tool in cryptography. But there's also another very important tool which is used in almost all security reductions, namely the hybrid lemma. We can also transport the hybrid lemma into this designated-adversary CEPT setting, and we have formulated it abstractly as shown here. We write rep-oracle to denote repeated access to an oracle, which can be seen as a generalization of repeated sampling. The hybrid lemma then says that if two oracles are indistinguishable, and O1 is efficient relative to O0, then the repeated oracle O0 and the repeated oracle O1 are
still computationally indistinguishable, and the repeated oracle O1 is efficient relative to the repeated oracle O0. It turns out that the hybrid lemma is actually not that easy to prove, and we will give a very high-level sketch of the obstacles and the solution. The main obstacle is that a super-constant invocation of relative efficiency is something we cannot do, because relative efficiency comes with a polynomial slack in the runtime; the runtime might increase by a polynomial factor, for example, and we certainly cannot allow this more than a constant number of times. Another problem is that the reduction cannot actually see the time spent in the challenge oracle, which makes reasoning more complex. There's a very neat solution due to Hofheinz, Unruh, and Müller-Quade, who looked at a sort of designated adversary in the UC setting for polynomial-time adversaries: they randomize the order of the oracles in the hybrid argument. Doing this, all the oracles will have the same runtime distribution, and this kind of solves the super-constant invocation problem, because one can now argue with a constant number of invocations of relative efficiency. What's also easy to see is that one can now watch the runtime of the non-challenge oracles to approximate the runtime of the challenge oracle. With this, and quite a lot of technical reasoning, one can actually derive the hybrid lemma.

So, to summarize. We've presented CEPT, a small relaxation of EPT. We stated some basic tools which show that one can actually work with CEPT, even though it's kind of different from usual notions of efficiency. We showed that one can handle designated CEPT adversaries in CEPT, so now we have symmetric runtime classes for the adversary and the simulator. We make no non-essential restrictions on V*, because we consider designated adversaries. And now the naive simulator for graph 3-coloring works, even in the Goldreich-Kahan protocol, which is quite nice. We can also see
that this proof strategy generalizes, and we do this explicitly in our work. The last thing we should say is that we do pay a price, because arguing efficiency can become quite hairy if you cannot rely on the standard reduction or the hybrid lemma, and when proving the hybrid lemma one has to go through quite a bit of technical detail to see that things work out. And this finishes the talk. Thank you for watching, and please see the full paper for the details.