Thank you. So I'd like to thank again David Pointcheval and Thomas Johansson for inviting me to speak here. I'd also like to thank Nigel Smart for providing me with three bodyguards, which helped me feel very safe during the conference. So thanks: Dave is sitting over here, Mark, whom you can't see, is in the ceiling, and John is hidden behind the curtains.

So what I'm talking about today is joint work. (I hope this doesn't go off every few minutes.) This is joint work with Sanjit Chatterjee, Neal Koblitz and Palash Sarkar.

Just to set the framework: this is a talk about provable security. The goal of provable security, as most of you know, is to prove that a protocol P is secure with respect to some computational problem or primitive S, and this process usually has three steps. There is the requirement of a security definition, which accurately captures the goals of the adversary and its capabilities. There is a statement of the assumptions about the hard problem or the primitive. And there is a reduction, or security proof, which uses a hypothetical adversary of the protocol (I'll call this adversary A) to construct an algorithm that breaks the hard problem S.

This is going to go off every few seconds, it looks like. I don't know why, but anyway.

Having done such a proof, the question one should always ask is: what practical assurance of security does the proof provide when you deploy the protocol in practice? This is both a very important question and a very difficult question, and today I want to talk about three difficulties with interpreting proofs in practice. I'll focus on tightness of proofs, the multi-user versus the single-user setting, and the non-uniform complexity model; for concreteness, I'll focus on MAC schemes.

Okay, this talk is about what's called practice-oriented provable security: understanding what proofs for practical protocols can mean in practice. I'm not talking about what people call the foundations of cryptography. People study interesting theoretical constructions that achieve certain cryptographic goals, and these constructions may or may not be useful at present; that's not the point here. So I'm not talking about what people typically call the foundations of cryptography.

My talk is based on papers available on our website, anotherlook.ca. These papers are viewed by many as being highly controversial. I'd like to share with you two quotes I received from recent anonymous referee reports on our papers. One referee said these papers have elicited a wide variety of reactions from the cryptographic community, ranging from visceral hatred to adulation. Another referee, commenting on our critique of the emerging field of leakage resilience, which the referee sees as a field in its infancy, wrote: one must wonder what lies behind his desire to commit infanticide.

Yep. So, just for the record, our goal in writing these papers isn't to receive adulation, and it's certainly not to commit infanticide. There have been rumors on the blogs, on Facebook and on Twitter, so let me dispel them right now by declaring once and for all that no babies were killed in the preparation of this talk.

Actually, I should be a bit more precise. I arrived at Heathrow Airport on Saturday evening, spent the night at the hotel preparing my slides, and all day Sunday too, and for dinner on Saturday I did have veal.
So I guess in principle a baby bull was killed in preparation for this talk. I feel terrible about that, and I really should try harder to be a vegetarian. But no human babies were killed in preparation for this talk. In fact, I've never, ever killed any human babies, and I've checked with my co-authors, and they agree that it's just wrong to kill human babies; we don't ever do it. Okay, and to be totally clear, it's also wrong to kill human children and human adults. I was happy to see this proven last night at the rump session by Alex Dent; a simple corollary of his talk was that it's wrong to kill human babies, human children and human adults, so it was nice to see that proven formally. But if you read his proof carefully, there was a little gap in it: it didn't really account for teenagers. So if you happen to have a human teenager between the ages of 12 and 25 who really annoys you, and you think carefully about it and you really want to kill that teenager, that's probably all right. But other than that, killing babies, children or adults of humankind is certainly wrong.

Okay, so the first part of my talk will be about analyzing tightness. This is an area that's well known to everyone who works on protocols, and a lot of work has been done in trying to address it, but for the most part I find the issue is sort of swept under the rug.

Let me tell you what I mean by tightness. Again, we want to prove a protocol secure with respect to a primitive or problem S. We assume there's an algorithm A which breaks the protocol; I'll assume that A (t, ε)-breaks the protocol, which means that in running time t the algorithm is successful with probability at least ε. The reduction, then, is an algorithm R which uses A as a subroutine to solve the problem S. We'll suppose the reduction algorithm R solves the problem in time t′ with success probability at least ε′. What this theorem then tells you is that if the primitive is (t′, ε′)-secure, then the protocol is (t, ε)-secure.
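To fix in symbols the statement just described (an editor's restatement, not the speaker's slide):

```latex
% A (t, eps)-breaks P: A runs in time at most t, succeeds with prob >= eps.
% The reduction R runs A as a subroutine and (t', eps')-solves S.
\[
  \exists\, A \text{ that } (t,\varepsilon)\text{-breaks } P
  \;\Longrightarrow\;
  \exists\, R^{A} \text{ that } (t',\varepsilon')\text{-solves } S,
\]
% and the theorem is the contrapositive:
\[
  S \text{ is } (t',\varepsilon')\text{-secure}
  \;\Longrightarrow\;
  P \text{ is } (t,\varepsilon)\text{-secure}.
\]
```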
That's what the security proof gives you. The proof is said to be tight if t′ and t are approximately equal to each other, and likewise ε′ and ε. A tight proof is desirable, because then the assumptions you make about the well-studied primitive or hard problem translate directly into assurances for the protocol: the proof gives you a very nice assurance from your assumptions about the hard problem. The proof is non-tight, on the other hand, if t is a lot less than t′, or if ε is a lot greater than ε′. With a non-tight proof the assurance is weaker, because you don't get the same assurance for the protocol as you assume for the hard problem. The tightness gap, as I'll call it, is the ratio between the time-to-success-probability ratio for the hard problem and the one the proof gives you for the protocol.

As an example that most of us know, take the now-classic proof by Bellare and Rogaway for full-domain-hash RSA. The proof is non-tight, the tightness gap being equal to the number of queries the adversary makes to the random oracle, which in general can be quite large. For concreteness, suppose N is an RSA modulus of length 1024 bits. The assumption we can make about the hard problem is that the RSA problem can't be solved for these parameters with t′/ε′ at most 2^80; this we get from the number field sieve. Now suppose there's a forger, a (t, ε)-forger of the RSA full-domain-hash scheme, that makes at most 2^60 hash queries. The Bellare–Rogaway proof uses this algorithm to (t, ε/2^60)-solve the RSA problem. So the conclusion you get from the proof is that RSA full-domain hash is (t, ε)-secure for t divided by ε at most 2^20, again under the assumption stated here. There's a tightness gap in this proof of 2^60.

The result being: if you use a 1024-bit RSA modulus, the assurance from the proof is really a lot less than you would like in practice; 2^20, a million, is not a very large security assurance. Well, of course, if you want an assurance of 2^80 you can increase your RSA modulus: increase N to a 4000-bit RSA modulus, and then you get the desired level of assurance from the proof. Except that you now incur a performance hit, because the modulus is a lot larger. But in practice no one takes this recommendation seriously. I don't know of a single standardized protocol supported by a non-tight proof where the parameters were increased because the proof was non-tight. I also don't know of a single implementation of any protocol where the parameters were increased because the supporting proof for that protocol was non-tight.

Also, if your primitive happens to be, say, AES, which you have in hardware, and you have a non-tight proof that assumes AES is a pseudorandom function, you can't increase the parameters of AES in any easy way to account for the non-tightness in proofs. And if your protocol is a pairing-based protocol that you want to use at the 128-bit security level, for which BN curves are ideally suited, and your proof is very non-tight, you can't simply increase the parameters of the BN curve: you need a curve of the right embedding degree, and such curves may not even exist. So you pay a really big performance penalty if you want to increase parameters of a pairing-based protocol to account for non-tightness in a security proof.
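To make the full-domain-hash arithmetic above explicit (an editor's restatement of the numbers quoted in the talk):

```latex
% Assumption (number field sieve, 1024-bit N): t'/eps' <= 2^80 is out of reach.
% Reduction: a (t, eps)-forger making q_H = 2^60 random-oracle queries yields
% a solver with t' ~ t and eps' = eps / 2^60, so
\[
  \frac{t'}{\varepsilon'} \;=\; \frac{t}{\varepsilon/2^{60}}
  \;=\; 2^{60}\,\frac{t}{\varepsilon},
\]
% and the proof therefore only rules out forgers with
\[
  \frac{t}{\varepsilon} \;\le\; \frac{2^{80}}{2^{60}} \;=\; 2^{20},
  \qquad\text{a tightness gap of } 2^{60}.
\]
```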
So in the literature most proofs are non-tight, and there's no common way of dealing with the non-tightness. Here's one example I found, from a survey paper by Boyen on ID-based encryption. Boyen compares the tightness of the reductions for the Boneh–Franklin, Sakai–Kasahara and Boneh–Boyen ID-based encryption schemes. He notices that the reduction for BB1 is significantly tighter than the reduction for Boneh–Franklin, which in turn is significantly tighter than the reduction for Sakai–Kasahara. But in fact all three reductions are highly non-tight, having tightness gaps that are, respectively, linear, quadratic and cubic in the number of random-oracle queries. So really, SK has a highly, highly, highly non-tight reduction, BF is highly, highly non-tight, while BB1 is merely highly non-tight. Nonetheless, Boyen's recommendations are that SK should generally be avoided; that as a rule of thumb BF is safe to use; and that BB1 appears to be the smartest choice, in part due to the fairly efficient security reduction of the latter. And this is logical advice, as long as you actually take the advice. But if you look at a recent IEEE standard, co-authored by Boyen, covering BB1 and BF, the parameters chosen for the standard are chosen without regard to the non-tightness of the proofs. So the proof is being used for assurance as if it were tight, when in fact it's highly non-tight.

Okay, so there is no uniform way of dealing with non-tight proofs. Suppose you have your own protocol, you have a proof for it, and it's non-tight. Does tightness really matter? That is the question, and there are many ways of thinking about it. On the one hand, you could be optimistic and assume that in the future someone will find a tighter reduction for your protocol. In the case of RSA full-domain hash, Coron shortly afterwards found a much tighter proof, where the tightness gap is the number of signature queries, not the number of random-oracle queries. That's a possibility.

Or perhaps a tight reduction can't be found for the protocol, but if you modify the protocol ever so slightly, you do get a tight reduction. Katz and Wang have a nice little modification of RSA full-domain hash: make the small modification and you get a tight reduction. You can either make that modification in practice, or use the modification as a rationale that the original RSA full-domain hash is secure and doesn't need the modification, since the modification buys you tightness but doesn't really seem to buy you any security in practice; so maybe we can ignore the non-tightness in the security proof.

Maybe you can get a tighter reduction by modifying or relaxing the hard problem; the first talk after lunch will do exactly this for RSA full-domain hash. Maybe the notion of security is too strong: you don't need adaptive chosen-message security, whatever; make the security notion looser, and then you get a tighter reduction. Or maybe the protocol is secure even though a tight reduction just doesn't exist; that's just a limitation of what reductions can achieve.
Okay, and another optimistic conclusion is that a non-tight reduction is better than nothing at all. If you have no proof at all, some would say, you have no assurance whatsoever that your protocol is secure; so let's just accept a highly non-tight reduction as being some assurance. These are the optimistic interpretations of non-tight proofs generally stated in papers.

Almost no one talks about the nightmare scenario: namely, that perhaps the protocol is in fact insecure, but an attack has not yet been discovered. Certainly, as cryptographers, the nature of our work is necessarily conservative and hopefully somewhat paranoid. So you design your nice protocol, you try hard to prove it secure, you work very hard at that, your proof is non-tight, you try hard to get a tighter proof, and you fail. I think your conservative, paranoid nature must lead you to conclude: I can't make this proof tighter because there exists an attack which I haven't found yet. That, to me, is the most logical conclusion to draw from a non-tight proof, if you are a conservative and paranoid cryptographer.

To give you an example of this, let's talk about MAC schemes. H_K here is a family of MAC functions indexed by a secret key K. The usual security notion for a MAC scheme involves a secret key K and an adversary who is given access to a MACing oracle; the attacker's goal is to compute a valid message–tag pair where the message wasn't previously queried to the oracle. I'll call this the problem of breaking MAC1. Now, that definition involves one user, or one pair of users, and an attacker. But in the real world we always deploy MAC schemes in the multi-user setting, where there are many users, potentially millions of users. So really we want to study the security of MAC schemes in the multi-user setting, and I have a definition for that, which I'm calling MAC*. We're using the same MAC scheme in the multi-user setting: there are n users, or n pairs of users, each having a secret key K_i. The adversary has access to MACing oracles for each of these users, and the attacker's task is to compute a message–tag pair for some user among the n users. That's the MAC forgery; I'll call this the task of breaking MAC*.

Okay, so MAC schemes secure with respect to the first definition have been well studied, and I want the assurance that the same MAC scheme, when deployed in the multi-user setting, is also secure. I want a security proof, and here's a nice, elegant proof for the security of MAC*: a reduction from breaking MAC1 to breaking MAC*. I assume I have an algorithm A that (t, ε)-breaks MAC*, the MAC in the multi-user setting, and I want to use A to break the ordinary MAC problem. So I'm given an oracle for a MACing function, and my goal is to use the algorithm A to produce a forgery for this MACing oracle. The proof is of a kind used in very many proofs; it's a very easy, short proof, and it is sketched in code just below.
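Here is the reduction in code form: a minimal sketch with illustrative interfaces invented for this transcript (`mac`, `keygen` and the oracle signatures are not from any particular library). The prose walk-through follows.

```python
import secrets

def mac1_forger(A, mac, keygen, mac_oracle, n):
    """Use an n-user MAC* forger A to forge against a single MAC1 oracle.

    mac(key, msg) is the MAC scheme, keygen() samples a fresh key, and
    mac_oracle(msg) tags messages under the unknown target key."""
    j = secrets.randbelow(n)                            # guess A's target user
    keys = {i: keygen() for i in range(n) if i != j}    # our own keys for i != j

    def user_oracle(i, msg):
        # User j is backed by the given MAC1 oracle; every other user is
        # simulated with a key we generated ourselves.
        return mac_oracle(msg) if i == j else mac(keys[i], msg)

    i_star, msg, tag = A(user_oracle, n)   # A outputs a forgery for user i_star
    if i_star == j:                        # correct guess, probability 1/n
        return msg, tag                    # a valid forgery for the MAC1 oracle
    return None                            # wrong guess: the reduction fails
```

If A succeeds with probability ε, this forger succeeds with probability about ε/n, which is where the factor-of-n tightness gap discussed next comes from.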
I'll start by selecting a random index j between 1 and n. For each other index i, I'll select my own secret key to represent user i's secret key, while user j's secret key is implicitly the unknown secret key behind the given MACing oracle. I then run the adversary A; it makes queries to these n oracles. Of course, I can use the keys I chose to answer A's MAC queries to users other than the j-th user, and I'll use the oracle given to me, H_K, to answer A's oracle queries to user j. At the end of this experiment, A eventually outputs a message–tag forgery with probability at least ε, and hopefully this is for the j-th user, which happens with a further probability of 1/n. If that's the case, I've used A to construct a forgery for the oracle that was given to me. So my success probability here is ε divided by n, because I'm counting on the attacker selecting the j-th user as the one for which it produces the forgery. To summarize, what this argument shows is that if MAC1 is (t′, ε′)-secure, then MAC* is (t′, nε′)-secure. So there's a tightness gap in this proof equal to n.

Okay, does this tightness gap matter? Yes, it does. Here's a simple attack on MAC*, due, I think, to Eli Biham, who introduced key-collision attacks. Assume here, for simplicity, that the key length is at most the tag length. The attack is very simple: select an arbitrary message M and obtain tags on that single message M from the n users. Now select an arbitrary set W of keys, of size w, and for each key L in W, compute the MAC tag of that message under the key L and compare it to the tags you obtained from the n users. If you have a match, you can conclude that L equals K_i: you've found the i-th user's key, and you can use K_i to forge a message–tag pair for user i.

The analysis is easy, but for a specific example, take CMAC, which is a provably secure and standardized MAC scheme, with 80-bit keys and 80-bit tags, and assume there are a million users, or pairs of users. Choose w to be 2^60. The attack takes 2^60 steps, and with probability at least one half it will recover one of the million users' secret keys, and in that sense break the MAC scheme in the multi-user setting. There's also a very simple time–memory trade-off where, with an offline computation of 2^60 MAC evaluations, the online part of the attack takes only 2^40 steps. So the speed-up of this attack over the generic attack of finding keys for a MAC scheme is by a factor of n, which is precisely the tightness gap in the security reduction. This is really the nightmare scenario, where the tightness gap in the reduction translates exactly into a practical attack.

Okay, how can this be fixed? Well, you might consider using what I call fixed-MAC, or FMAC, a very simple idea. Before MACing a message M, the user always prefixes the message with a string F; this is a string that's fixed, non-secret and unique for every pair of users and for that session. That's fixed-MAC, FMAC. You can now check that FMAC*, that is, FMAC in the multi-user setting, resists the previous attack. There's also a simple, tight reduction from MAC* to FMAC*.
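A sketch of the key-collision attack just described and of the FMAC prefix fix, under the stated assumption that keys are no longer than tags; `mac` and all the interface names here are illustrative, not taken from any standard or library.

```python
def key_collision_attack(mac, user_oracles, candidate_keys):
    """user_oracles: the n per-user MACing oracles.
    candidate_keys: an arbitrary set W of w trial keys.

    Cost: n online queries plus w offline MAC computations.  With n = 2^20
    users, 80-bit keys and tags, and w = 2^60, one match is expected."""
    M = b"any fixed message"
    tag_to_user = {oracle(M): i                   # one tag per user, all on M
                   for i, oracle in enumerate(user_oracles)}
    for L in candidate_keys:                      # offline trial MACs
        t = mac(L, M)
        if t in tag_to_user:                      # collision implies L = K_i
            return tag_to_user[t], L              # user i's key, recovered
    return None

def fmac(mac, key, prefix, msg):
    """FMAC: prepend a fixed, non-secret string unique to each user pair and
    session.  Tags on the same message then differ across users, so the
    cross-user tag comparison above no longer works."""
    return mac(key, prefix + msg)
```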
So FMAC* certainly doesn't reduce the security of MAC*. And you can prove FMAC* secure, again under the MAC1 assumption. But, as with my proof for MAC*, the proof for FMAC* is also non-tight: the tightness gap is still n. So we have a scheme, a modified version of MAC*, which is apparently more secure than MAC*: it resists the attack I showed you on the previous slide. But the security reduction still doesn't take away the tightness gap of n. In practice, of course, we would expect the security of FMAC* to be tightly related to the security of MAC1, and that's roughly because, if you look at the MACing functions, since each pair of users has a different prefix for every message, you can imagine that each user has an independent family of MAC functions, and so the functions shouldn't interfere with each other. So, interestingly enough, from the provable-security point of view there is no difference between MAC* and FMAC*, yet in practice there is a big difference between the practical security of MAC* and FMAC*.

Okay, so this element of the MAC* proof appears in several other protocols I found in the literature, with the same tightness gap resulting from the proof for MAC*. I found it in the Katz–Lindell aggregate MAC scheme; in a history-free aggregate MAC scheme due to Eikemeier and co-authors; and also in the Canetti–Krawczyk authentication protocol from 2001. These proofs are sometimes very complicated, but a small element of each is precisely the proof I showed you for MAC*, so these proofs inherit that tightness gap, and also the attack. So this gap appears in several published proofs.

To conclude this part: non-tight proofs can give one a false sense of security. The literature has literally thousands of papers with non-tight proofs. In the examples I gave, the non-tightness is a very simple factor, namely the number of users; many proofs have five to ten parameters which have to be analyzed to recover the non-tightness in the proof, and it really isn't clear at all whether the non-tightness matters or not. So a very legitimate question to ask is whether security proofs with non-tight reductions have any practical value.

Okay, so my second, brief point is about the multi-user setting, which has sometimes been ignored in the literature. There has been quite a bit of work, though, on analyzing protocols in the multi-user setting. Perhaps the first was the Bellare–Rogaway work on key establishment, where you have a whole collection of users talking to each other. There's been work, I think first by Bellare, Boldyreva and Micali, on public-key encryption in the multi-user setting, and some work on signature schemes in the multi-user setting. Nonetheless, I think my previous topic has argued effectively that the security definition for MAC schemes in the single-user setting is inadequate for the multi-user setting. One could argue that the MAC1 definition is meant to capture the security of a primitive, so you might treat the MAC as a primitive, not really ready to be deployed in practice. But I really think that when you define a MAC scheme to be secure, the definition should be sufficient for the very basic application of a MAC scheme, namely authenticating messages. So I would argue that the basic notion of a MAC scheme should be for a protocol, not for a primitive, and in that sense the MAC1 definition is deficient.
You can similarly argue that the classic Goldwasser–Micali–Rivest definition for security of signature schemes is also deficient, and this is a well-understood and accepted definition. You have a user, Alice, who has a signing key, and you have a forger who can access Alice's signing oracle, and the goal of the attacker is existential forgery under adaptive chosen-message attack. But when you actually use a signature scheme in practice, you're using it in the multi-user setting. You have many users with many public keys, you have a CA, you have all kinds of things, and it isn't at all intuitive what security notions you want from a signature scheme in the multi-user setting. That's because digital signature schemes are really quite different from handwritten signatures, so there's no a priori intuition about what requirements you would desire of a signature scheme in the multi-user setting. So I think a lot more work needs to be done on understanding what the correct definition is for signature schemes in the multi-user setting; I think it's a very fruitful and useful question to think about.

Sometimes people have defined things in the multi-user setting, but the definitions have been inadequate. One instance is the Boneh–Gentry–Lynn–Shacham definition for aggregate signature schemes, which is naturally in the multi-user setting, but the definition is deficient because the attacker is not allowed to adaptively select its target user: it's given a target user and asked to attack it, while in practice the attacker might, in the course of its computations, choose the user to attack on the fly. If you add this element to the definition, then your proofs lose tightness by a factor of the number of users, and whether that tightness gap is important or not, I don't know; I haven't had time to look at that.

Okay, so I've also found many schemes which were proven secure in the single-user setting, and sometimes standardized, but if you use these schemes without modification in the multi-user setting, as you very well might, then attacks similar to the one on MAC* do apply. For example: the Rogaway–Shrimpton deterministic authenticated encryption scheme, the OCB authenticated encryption scheme, and the EME disk-encryption scheme. A nice paper by Greg Zaverucha from last month shows that many standardized hybrid encryption schemes fall to the attack, and that's because the DEM part of those schemes is allowed to be deterministic: the security notion for the DEM is weaker than if you were using encryption as you normally would, and that allows the attacks I mentioned before to be launched on these standardized encryption schemes. Also, Krawczyk's extract-then-expand key derivation scheme from Crypto 2010 falls to the same attack I described on MAC*. And that's because in practice you would deploy these schemes in the multi-user setting, whereas the analysis proves their security in the single-user setting. So the question I'm really posing here is: should we be suspicious of security definitions and theorems that are in the single-user setting, when in fact these protocols are always deployed in the multi-user setting?

Okay, my third point is about the non-uniform complexity model in cryptography. The issue of non-uniformity versus uniformity is really just a semantic thing; the real issue is whether you can use non-constructive arguments in practice-oriented provable-security theorems and hope that the theorems are meaningful in practice. That's really the point of this topic; non-uniform versus uniform is just semantics. I've chosen this language because people seem more comfortable with it, but it's really not the main point of this topic.
So I'll focus on HMAC, and more precisely on HMAC when MD5 is the underlying hash function. I'll let f be the MD5 compression function and H the MD5 iterated hash function with the IV as denoted here. Then the NMAC scheme proposed by Bellare, Canetti and Krawczyk has two keys, and it MACs a message by first hashing the message with the second key serving as the IV, padding the resulting hash value to get a full message block, and then applying the compression function to the padded block with the first secret key as the IV; in symbols, NMAC_{K1,K2}(M) = f(K1, pad(H_{K2}(M))). That's NMAC. NMAC was not desirable in practice, because it has two keys and also because the keys appear as IVs, so NMAC was modified to HMAC, which I won't describe, because the main security argument for HMAC is really the one for NMAC; going from NMAC to HMAC security is rather simple. So I'll focus in this talk on NMAC as defined here.

Okay, so the Bellare–Canetti–Krawczyk proof of '96 was a very elementary, elegant proof. It had two assumptions: that the compression function is a secure MAC scheme, and that H is a collision-resistant hash function. The conclusion was that NMAC is a secure MAC scheme. However, of course, Professor Wang's collisions on MD5 and SHA-1 in 2005 meant that the proof was useless as a security guarantee for HMAC with MD5 or SHA-1. To restore confidence in the security of HMAC in practice, Bellare proposed a new proof in 2006 for NMAC as a pseudorandom function, and the proof had the nice property that it only assumed that the compression function is a secure pseudorandom function; there is no longer any requirement for the hash function to be collision resistant. Here's Bellare's theorem from 2006, concisely stated: if the compression function is a secure PRF, then the MAC scheme is also a secure PRF.

However, Bellare's proof is in the non-uniform complexity model. This is a model of complexity where you imagine a whole series of Boolean circuits, one for each input size, and one is only concerned with the existence of such circuits, not whether they can be efficiently constructed. Equivalently, you can view a non-uniform algorithm as an ordinary Turing machine, a program, which also has a set of advice strings, one string for each input size, and these strings only have to exist; we're not concerned with how they are actually found. They might well be unconstructible, by which I mean that no one knows how to construct them efficiently. That's what a non-uniform algorithm is.

Security proofs in the non-uniform model have sometimes been said to be very desirable, because their conclusions are stronger than in the uniform model. Here's a quote from an early paper by Shafi Goldwasser, who says: the most meaningful proofs of security are necessarily those proved with respect to the most powerful adversary; to this end, we should let the polynomial-time adversary be not only probabilistic but also non-uniform. Okay, so of course it's desirable to prove theorems whose conclusion is as strong as possible:
namely, security even against non-uniform adversaries. The problem, though, is that when your conclusion is in the non-uniform model, so is your hypothesis. And when your hypothesis is in the non-uniform model, on the one hand it might be very hard to analyze, and on the other hand it might actually be a lot easier to break the primitive in the non-uniform model than in the uniform model. So in fact, generally, theorems in the non-uniform model are less desirable, because, as I said, it's difficult to assess the hypothesis in the non-uniform model, and typically the hypotheses are a lot stronger in the non-uniform model than they would be in the uniform model. And of course we saw really nice examples of this in the rump-session talk by Bernstein and Lange last night.

Okay, so back to the PRF assumption in Bellare's theorem. The usual assumption is that a compression function is (t, q, ε)-secure if adversaries with running time at most t, making at most q oracle queries, have advantage at most ε at deciding whether an oracle O given to them is a truly random function or one of the compression functions with a secret hidden key. This is the standard notion of PRF security. For MD5, for example, the fastest known attack on PRF-ness with respect to this definition is exhaustive key search. Given a few message–output pairs, all you can really do is pick a key and check whether the pairs given to you agree with your key; so your running time is t and your advantage is t/2^128. You win if you happen to pick the right key while doing exhaustive key search; if you don't, you just guess whether the oracle is random or an actual compression function, and you can see then that the advantage of the attacker is t/2^128. So the time-to-success-probability ratio for exhaustive key search on MD5 is 2^128.

Okay, so when evaluating the conditions under which Bellare's theorem applies in practice, Bellare assumes that exhaustive key search, the attack I just described, is in fact the fastest generic attack for breaking PRF-ness of f. But in fact there certainly are more effective, faster generic algorithms in the non-uniform model.
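In symbols, the PRF notion and the exhaustive-search bound just described (editor's notation):

```latex
% D must distinguish f with a hidden key K from a truly random function rho:
\[
  \mathrm{Adv}^{\mathrm{prf}}_{f}(D) \;=\;
  \Bigl|\,\Pr\bigl[D^{f_K} = 1\bigr] \;-\; \Pr\bigl[D^{\rho} = 1\bigr]\,\Bigr| .
\]
% f is (t, q, eps)-secure if every D running in time <= t with <= q queries
% has advantage <= eps.  For exhaustive search over MD5's 128-bit key space:
\[
  \varepsilon \;\approx\; \frac{t}{2^{128}},
  \qquad\text{so}\qquad
  \frac{t}{\varepsilon} \;\approx\; 2^{128}.
\]
```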
So let me tell you one. Assume that the compression function has good randomness properties, in a sense I'll make clearer in a moment. For a compression-function value X, I'll let u(X) be a function which outputs a fixed bit of X: maybe the 19th bit, or the XOR of the first 16 bits, anything you want, as long as it's a fixed bit. For each message M in the message space, I'll let Prob(M) denote the probability that this bit is 1, the probability being taken over all secret keys K, and I'll let M* be a message for which this probability is maximal.

I claim that Prob(M*) is at least 1/2 + 1/2^64. A simple argument for that: fix the message, and view the bit produced as the key varies as defining a random walk, a step in the forward direction if the bit is a one and in the backward direction if the bit is a zero. We know the standard deviation of a random walk's distance from the starting point is the square root of the number of steps, which here, over the 2^128 keys, is 2^64. So you expect there to be many messages whose random walks end up about 2^64 steps away from the starting point, either to the left or to the right; you expect there to be one to the right. Pick such an M*, and its probability is at least 1/2 + 1/2^64. Such an M* certainly would exist for any naturally constructed family of compression functions.

Now you have the algorithm for breaking PRF-ness of f: query M* to the oracle; if the oracle's response has the bit equal to one, guess that the oracle is a compression function; otherwise, guess that the oracle is random. The running time of this attack is one, you make only one query, and its advantage is at least 1/2^64. So the time-to-success-probability ratio for this non-uniform attack on PRF-ness is 2^64, versus 2^128 for exhaustive key search. And this is a massive difference when you're concerned with practice-oriented provable security; it's really a very massive difference.

All right, so let's interpret Bellare's proof in practice, supposing for concreteness that messages are a million blocks in length. Under the assumption that the fastest attack known on breaking PRF-ness of f is exhaustive key search, Bellare argues that his proof justifies NMAC-MD5 security for up to 2^44 queries, and similarly 2^60 queries for NMAC-SHA1. But in fact, as I noted, faster attacks on pseudorandomness likely exist in the non-uniform model. And you must take those attacks into account, because Bellare's proof is highly non-constructive: it uses the idea of coin-fixing to reduce the running time of the attacker in the proof, thereby drastically reducing the tightness gap in the proof. So if you take the adversary against PRF-ness that I described earlier, then in fact we see that Bellare's proof says nothing about NMAC-MD5 security for more than 2^22 queries, as opposed to the claimed 2^44 queries. Similarly, Bellare's proof says nothing about NMAC-SHA1 security if the number of queries made by the attacker is 2^30 or more, compared to the claimed 2^60. And again, this is a massive difference from the point of view of practice-oriented provable security.
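The one-query non-uniform distinguisher described above, written out as code: a minimal sketch in which `m_star` plays the role of the advice string; it is only argued to exist, and nobody needs to know how to compute it.

```python
def nonuniform_prf_distinguisher(oracle, m_star, u):
    """oracle: either M -> f_K(M) for a hidden key K, or a random function.
    m_star: the advice string, a message with
            Pr_K[u(f_K(m_star)) = 1] >= 1/2 + 2**-64,
            whose existence is argued by the random-walk heuristic.
    u: the fixed output bit, e.g. lambda x: x[2] & 1.

    One query; advantage >= 2**-64; time/advantage ratio about 2**64."""
    return 1 if u(oracle(m_star)) == 1 else 0   # 1 means "I think it's f_K"
```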
Okay, so you might ask the question: well, isn't HMAC-MD5 actually provably secure? In our paper we do give an improved, tighter proof than the one Bellare gave, in the process solving an open problem that he had described as interesting. So we get a tighter proof, and our proof justifies NMAC-MD5 security for up to 2^54 queries, and NMAC-SHA1 security for up to 2^70 queries, and this is essentially optimal in light of known birthday attacks on these MAC schemes. So we've almost proven HMAC-MD5 secure. However, our proof has a large tightness gap, namely one equal to 9n², and n can be fairly large in practice, 2^20 or 2^30. That's a very large tightness gap between the security of the pseudorandom function and the pseudorandomness of the MAC scheme. Also, our proof is in the single-user setting, which is deficient. And even though pseudorandomness of the compression function is a plausible and well-accepted assumption, it's still a very strong hypothesis in light of the collision-finding attacks on MD5 and SHA-1. So our opinion is that the value of our proof, as a source of assurance about the real-world security of HMAC with MD5 or SHA-1, is questionable at best.

A little postscript: Bernstein observed in 2005 that NMAC and HMAC have really straightforward security proofs if you're willing to assume that f is a pseudorandom function and that the hash function is almost universal. I won't define that notion here, but if you believe that these hash functions are almost-universal hash functions, then most of this discussion is completely moot. So this is a nice question to think about: are MD5 and SHA-1 almost-universal hash functions?

I've found this questionable use of the non-uniform model in several other papers, where again it was used to obtain tighter security proofs by going from the uniform model to the non-uniform model. I found it in the multi-property-preserving hash domain extension paper of Bellare and Ristenpart; in the sandwich-hash MAC scheme of Yasuda; in the Asiacrypt paper by Yasuda on boosting Merkle–Damgård hashing for MACs; and in a leakage-resilient stream cipher built from pseudorandom bit generators. All these papers switched to the non-uniform model to get tighter reductions in their proofs, except that the hypotheses are now also in the non-uniform model, and therefore much stronger. It isn't clear at all what's been gained by moving to the non-uniform model, if anything at all. So the question I want to leave you with is: should unconstructible security proofs in the non-uniform model be totally rejected?
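The talk deliberately leaves "almost universal" undefined; for reference, the standard notion mentioned in the postscript above is (an editor's addition):

```latex
% A keyed hash family {H_k} is eps-almost-universal if, for every fixed pair
% of distinct messages, collisions are rare over the choice of key:
\[
  \forall\, M \ne M' :\qquad
  \Pr_{k}\bigl[\,H_k(M) = H_k(M')\,\bigr] \;\le\; \varepsilon .
\]
```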
Okay, so I would like to make some concluding remarks. What's the significance of our work for cryptography? I think that if you're a theoretician who works in the foundations of cryptography, whose work has perhaps long-term applications and who doesn't care about applications in the near term, our results are totally irrelevant to you. Typically the non-tightness in proofs, the tightness gaps, arises from factors that are polynomial in the security parameter, so asymptotically they mean nothing at all. The number of users in a scheme is typically polynomial in the security parameter, so whether you're in the single-user or the multi-user setting shouldn't really make a big difference. And of course the non-uniform complexity model is a perfectly valid model in complexity theory. So if you work in foundations and you don't care about practice, our results should mean nothing to you.

If you're a practitioner who uses security proofs as one tool to assess whether your scheme is secure or not, but who relies more heavily on extensive cryptanalysis and sound engineering principles, then you shouldn't be alarmed by any of our observations either; at best, I hope you treat our work as light entertainment, and that should be it.

However, if you happen to be a cryptographer who believes that a security proof is essential, and perhaps the only way to gain confidence in the practical security of a protocol, then you really should be much more concerned by our observations. In fact, I claim you should be very skeptical of non-tight proofs, of proofs in the single-user setting, and of proofs in the non-uniform complexity model, and perhaps even reject these proofs entirely as mere heuristic arguments for the security of your protocol.

So I think, in conclusion, there's a lot of interesting, useful, relevant work to be done in understanding what security proofs really give you in practice, and I propose this as an interesting field of work, which I think will be very fruitful, as it has been for us. Some of the questions that arose from the lecture today, and I'll repeat them: Is a non-tight proof of any value in practice? Should one be suspicious of security definitions that are in the single-user setting? Should unconstructible security proofs in the non-uniform model be rejected completely? Are HMAC-MD5 and HMAC-SHA1 proved secure in any reasonable sense? These questions, I think, are all more relevant to practice than the concerns a lot of people have about the random oracle assumption in security proofs.

Okay, so here is an experience we had with our HMAC paper. We submitted it to ePrint in February, where we claimed that Bellare's proof, because it used a non-constructive argument, was flawed.
We explained that by "flawed" we meant that it resulted in a proof whose hypotheses couldn't be tested. And I was very surprised by the reaction we got, whether by email or on blogs. The main concern was that the proof is mathematically correct, and that we were wrong to imply otherwise by saying it's flawed; the main point, for them, was that the proof is mathematically correct. But the main goal of practice-oriented provable security really should be obtaining concrete security assurances, not just mathematical formalism and correctness. It really bothered me that no one seemed concerned by the fact that Bellare's proof in fact offers a lot less assurance in practice for HMAC than was claimed; they were more concerned with the fact that the proof is mathematically correct, as if that is all that matters. So I hope to leave you with the thought that obtaining concrete security assurances is more important than mere mathematical correctness and formalism.

This reminds me of a nice quote from a Crypto 2002 paper by Stern, Pointcheval, Malone-Lee and Smart. They were talking about the error found in the original proof for OAEP, and they said that the use of provable security is more subtle than it appears, and that flaws in security proofs themselves might have a devastating effect on the trustworthiness of cryptography. And they emphasized: by flaws we do not mean plain mathematical errors, but rather ambiguities or misconceptions in the security model. I think that comment is as valid today as it was ten years ago.

Okay, so I think people were expecting a very highly controversial talk from me, and they might be disappointed. So allow me one slide where I can be a little radical. The first point addresses what I sense is a bit of a crisis of quality control in papers at crypto conferences. I think an avenue for positive change is to ensure that security proofs get the detailed peer review they need and deserve. That can be done by insisting that proofs not be in the appendices of submitted papers, and by requiring referees to read these proofs when they referee papers. To me it's really astounding that proofs are viewed as being so important in cryptography, yet they're always placed in the appendices; committee members of conferences don't have to read them, and they typically don't, because they don't have time, and I understand that; the proofs often don't appear in the final versions of the proceedings; and they're sometimes never, ever refereed. Full papers should be published, not extended abstracts, and there shouldn't be any page limits on published papers. If Springer complains about that, scrap Springer; you can stick with online-only publications, so paper length shouldn't be an issue anymore.

And I would really like to lobby for a better balance in the programs of the major conferences. I think cross-pollination of ideas is really, really essential, especially at this point in time in our field. Consider merging PKC, CHES and FSE with Eurocrypt in some way. There may be too many talks; no problem, let's switch to parallel sessions.
It's better having people in the same building than at conferences around the world; there's a better chance we'll actually talk to each other.

Okay, my last remark, to summarize: while mathematical proofs certainly have their place in cryptography, I think our work illustrates some limitations of such proofs, and highlights the important role that old-fashioned cryptanalysis and sound engineering practices continue to play in establishing and maintaining confidence in the security of a cryptographic system. Thank you.

[Chair] Thank you, Alfred. Do we have any questions?

[Audience] Hello. So, what about TCC? I see you left TCC out of your merging proposal.

[Menezes] The invitation asked me to speak about this work, so I think I was forced to speak about this.

[Audience] One thing that was actually underlying the talk as I heard it, but that I want to bring more to the forefront: proofs of security are really one of the best, if not the best, ways that we have to distinguish things that are secure from things that are not. So I hope this demonstration of shortcomings in published papers doesn't leave people wanting to abandon proofs; the issue is how we use proofs. I mean, HMAC is probably the prime example, maybe the only example we have in cryptography, of a mode of operation that remained, say, 2^54-secure while the underlying primitive it uses is utterly broken, and that's probably because it was designed with the ability to prove security in mind.

[Menezes] Sure. So, I hope it's clear: proofs will always play a role in cryptography. There's no danger of people abandoning proofs in the future; there are a lot of people doing proofs, and I don't expect them to stop doing proofs overnight. Proofs will always be around, and they'll always be fruitful. But there's a lot more to be done in understanding what these proofs actually mean in practice. As for the proposal about papers of 30 pages or more being moved to a journal, that's for the IACR board to sort out; I really haven't been engaged in those discussions. I just think that at some point we shouldn't worry about the length of a paper: if the proofs are that important, we should be able to read them, have referees read them, and insist that referees review them.

[Audience question, partly inaudible]

[Menezes] Yes; no, it's just... So, the question was about my assuming the security of RSA to be 2^80 in terms of the time-to-success-probability ratio. Really, the evidence we have is that with running time 2^80 you can break RSA with probability one. But there is no nice trade-off between t and ε that I know of, so the only real parameterization we have is t/ε being at most 2^80. That's the best I have to work with, and it certainly would be a lot more interesting to understand what that real ratio should be when you make assumptions about the hardness of RSA.

[Audience] You said we do non-uniform to gain efficiency, or something; that's not the reason. The reason is that we simply have no clue how to do it uniformly. We don't have, like, uniform dense-model theorems and so on to do these leakage-resilient crypto things uniformly. It's open.

[Menezes] I was quite sure that in your paper you did mention that you have a proof for your protocol in the uniform model.

[Audience] We said, no, no, no, we said that there are uniform proofs of the dense-model theorem, and using those there might be a chance to get the entire proof uniform, but there are still lots of things that we don't know how to do.

[Menezes] Sure, okay. Okay, not for efficiency, sure; it's open.
Yeah, but your paper does say that you prefer the non-uniform model because you get a tighter reduction.

[Audience] Oh, the dense-model theorem is extremely non-uniform and non-tight; yes, even the uniform one.

[Menezes] Yeah, and the thing is: the conclusion is that you get a tighter reduction, but your hypothesis is now necessarily stronger, because you can break pseudorandom bit generators a lot faster in the non-uniform model than in the uniform model. My little attack on pseudorandomness applies equally well to pseudorandom bit generators.

[Chair] Are there more questions?

[Audience question, inaudible]

[Menezes] So, I think people got the question, and the answer is: we don't know. That's why we need to rely on good old-fashioned cryptanalysis to establish confidence in a system, and ultimately to maintain it over time, because proofs can't tell you, in the end, whether the random oracle model is a stronger assumption than assuming some new, complicated, interactive decisional assumption on pairings. It's really something we gain over time by constant study, by doing good old-fashioned cryptanalysis; proofs will not give you that assurance. There is no easy answer; cryptanalysts will be in business for a long time.

[Chair] Okay, I think it's time for lunch, so let us thank Alfred again.