And we want to know for which ones the construction is secure. This output function could be reduction modulo p, but it could, for example, also be simple bit truncation that keeps the first r bits of the state. And we also study a related and simpler-to-analyze construction, which we call the augmented cascade construction, which differs in that the keying proceeds through the initialization value and not through prepending the key to the message. In fact, in this talk I'm going to focus on augmented cascades; you'll have to trust me that it is quite easy to get a bound for AMAC, generically or almost generically, given a bound on augmented cascades. So in fact, in the paper, our analysis consists of two ingredients that I'm going to talk about next. The first one is a standard-model reduction that reduces the multi-user security of the augmented cascade to a new multi-user PRF assumption on the underlying compression function. And then, since it is a new assumption, in order to get some concrete numbers and to compare with other constructions, we also do an ideal-model analysis of this assumption.

Let me mention in passing that we are not the first to analyze this type of construction. But so far, variants of Merkle-Damgård with truncation have only been studied in the context of indifferentiability. In particular, the work by Coron et al., which introduced indifferentiability for hash functions, analyzed the chop-MD construction, which is a Merkle-Damgård construction with truncation, with the goal of getting a keyless hash function, and they proved indifferentiability for it. It is not too hard to get a bound for PRF security in the ideal model from the indifferentiability proof pretty generically, but the point is that the bounds you get this way are too weak and do not highlight the key features that we obtain through our direct analysis of PRF security.

Okay, so let me start with the more technical content of this talk, starting with the standard-model component of the analysis. The first thing I want to do is formalize our security target a little more. First recall that the standard PRF security notion requires the construction, in this case the augmented cascade with some output function out, to be indistinguishable under a secret key from a truly random function with matching input and output domains. Indistinguishability here is measured by requiring that the PRF advantage is small for all feasible adversaries, or distinguishers, A, where the advantage is defined as the difference between the probability that the adversary outputs one on the left and the probability that it outputs one on the right. Now what we want to do first is extend this notion to the multi-user setting, and the standard way of doing so is to consider an a priori unbounded number of instantiations of the construction under independent keys. The distinguisher can now access these arbitrarily and adaptively, for example by making a query to the first one, getting an answer, and then deciding based on that answer to make a query to another instance, and so on, before making a decision. And we want this to be indistinguishable from a setting where all of the instances are replaced by independent random functions.
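To make the object concrete, here is a minimal Python sketch of the augmented cascade as just described; the compression function, the output function and all parameters are toy placeholders, not the ones from the talk.

```python
# Minimal sketch of the augmented cascade (illustrative only).
# f maps a c-bit chaining value and a message block to a new c-bit chaining
# value; out post-processes the final state (e.g. truncation to r bits).
# The key enters through the initialization value, not by being prepended
# to the message (the latter is AMAC's hash-then-apply-out form).

def augmented_cascade(f, out, key_state, message_blocks):
    """key_state: the secret initial chaining value; returns out(final state)."""
    state = key_state
    for block in message_blocks:
        state = f(state, block)      # iterate the compression function
    return out(state)                # e.g. keep only the first r bits

# Toy placeholder compression function (NOT a real one):
def toy_f(state, block):
    return (state * 1103515245 + block + 12345) % 2**32

def truncate_to_r_bits(state, r=16, c=32):
    return state >> (c - r)          # keep the leading r bits of a c-bit state

tag = augmented_cascade(toy_f, truncate_to_r_bits,
                        key_state=0xC0FFEE, message_blocks=[1, 2, 3])
```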
And again we can define a natural multi-user PRF advantage by comparing the probabilities of outputting one in the two settings. So this is going to be our target. Let me first point out that there is a folklore fact relating multi-user and single-user security: in concrete terms, multi-user security can never be more than u times worse than single-user security, where u is a bound on the number of users, or instances, to which the distinguisher makes queries. This is proven by a standard hybrid argument. If you go this route, though, this factor-u loss is substantial; it can be as large as the number of queries in the worst case. Fortunately, it is not always necessary, and as you will see in this talk, in many cases it is worthwhile to prove bounds on multi-user security directly rather than going the hybrid-argument route.

Okay, so given that I have specified the target, let's look at the assumption. Ideally we would like to prove our result assuming only that the underlying compression function is a good pseudorandom function, or maybe a multi-user pseudorandom function. Unfortunately, this is not possible. I'm not going to make this statement formal, but intuitively it is enough to look at the following situation: imagine that the attacker queries one message and then the extension of that message by an additional block. The attacker then learns two such values, and the lower value intuitively gives some partial information about the key value which is used when evaluating f to get the upper value. And you can actually build, depending on the function out, and very easily as a little homework exercise, PRFs f that completely break down when used in this context. So what we actually need to assume, intuitively, is that the compression function f remains a PRF even when you leak information about the underlying key through the output function out. And this is exactly what we formalize: our notion of PRF security under out-leakage does exactly this. It requires indistinguishability from a random function in a setting where, in the real world, the adversary is additionally given out(K), the output function applied to the actual key, and in the ideal world we just sample an independent random key and give the adversary out of that key, which has nothing to do with the actual system at hand. And we can define a corresponding advantage. I'm not depicting it here, but you can also naturally extend the notion to the multi-user setting by having multiple instances. Note that, despite the name, this is much simpler than the traditional notion of a leakage-resilient PRF from the leakage-resilience world, because we really have one fixed, a priori chosen function under which we get leakage, and that's it; no arbitrary polynomial-time leakage function or anything like that.

Okay, so given this notion, we can state our first main standard-model result, which is a reduction showing that if the underlying compression function is a multi-user secure PRF under out-leakage, then the augmented cascade construction is also a good pseudorandom function in the multi-user setting. We actually prove this quantitatively, and it's interesting to have a closer look at the bound.
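In symbols, and in my own shorthand rather than the exact formalism from the slides, the notions just described look roughly as follows.

```latex
% Schematic notation (mine, not necessarily the paper's exact formalism).

% Multi-user PRF advantage: the distinguisher A adaptively queries instances i,
% each keyed with an independent K_i; the rho_i are independent random functions.
\mathrm{Adv}^{\mathrm{mu\text{-}prf}}_{F}(A)
  = \Pr\big[A^{F(K_1,\cdot),\,F(K_2,\cdot),\,\dots} \Rightarrow 1\big]
  - \Pr\big[A^{\rho_1(\cdot),\,\rho_2(\cdot),\,\dots} \Rightarrow 1\big].

% Folklore hybrid argument, with at most u users:
\mathrm{Adv}^{\mathrm{mu\text{-}prf}}_{F}(A) \;\le\; u \cdot \mathrm{Adv}^{\mathrm{prf}}_{F}(B).

% PRF security under out-leakage: the adversary additionally sees out(K) in the
% real world, and out(K') for an unrelated fresh key K' in the ideal world.
\mathrm{Adv}^{\mathrm{prf\text{-}leak}[\mathrm{out}]}_{f}(A)
  = \Pr\big[A^{f(K,\cdot)}(\mathrm{out}(K)) \Rightarrow 1\big]
  - \Pr\big[A^{\rho(\cdot)}(\mathrm{out}(K')) \Rightarrow 1\big].
```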
So first of all, what you see is that the bound incurs a concrete loss of a factor ℓ, where ℓ is a bound on the number of blocks in each query made by the distinguisher. Another important point is that you cannot get a PRF for every possible out. I haven't talked about it, but clearly the properties of the function out matter: if out is a constant function, how can this be a pseudorandom function? So there is an additional term that characterizes whether out is a good function, this δ_out, which essentially measures how well out maps a random input to a random output; it is just the statistical distance of the output, on a random input, from a truly random output. Okay, but an important conceptual point of this result is that we are actually reducing to a multi-user security assumption, and this will be very worthwhile. We could apply the hybrid argument here and reduce directly to a single-user assumption, but you will see that it is much more worthwhile to reduce to multi-user security; we are going to get much better bounds this way. Also note that the bound does not depend explicitly on the number of users, and this is something we do not really know how to achieve for other constructions.

So let me give you a three-minute overview of the proof. The proof follows standard techniques, or at least a standard framework, in this area: we model the interaction between the distinguisher and the construction, and all the internal values computed by the augmented cascade, through a labeled tree. For example, if the distinguisher queries a message consisting of two blocks M1 and M2, we can think of this as defining a path where the root is a node containing the actual key value; when we evaluate on the first message block, we move to another node whose value is the state of the cascade after one call to f, and so on after two calls; finally we produce a little node on the side which contains the actual value you get after applying out to the previous state. You can go on doing this over multiple queries and you get a whole tree, whose height is at most ℓ, the bound on the number of blocks in a query. What we want to show is that the values contained in these little output nodes, which are the ones actually given to the distinguisher, look random. We can do this with a fairly simple hybrid argument where we replace the values layer after layer: in the first ℓ hybrids using our multi-user assumption under leakage (it is essential to have the leakage for the reduction to go through, because some information is leaked about the key values at the level above), and then in the final hybrid we replace the output values with uniform ones using the combinatorial property of the out function. By the triangle inequality this gives us a bound on the advantage.

Okay, now this is great, but the reason we want to move to an ideal-model analysis is that we want to compare with other constructions. We have this standard-model result, but the reduction is to a new assumption on a compression function which is not validated by cryptanalysis at all, so we don't have good numbers to put in there to start with. I mean, it's a great question, but we don't have them.
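Schematically, the result just described has the following shape; this is a paraphrase with constants omitted, not the exact theorem statement, and note that for plain truncation the δ_out term vanishes, since truncating a uniform state gives a uniform output.

```latex
% Shape of the standard-model theorem (paraphrase; constants omitted).
% A makes queries of at most \ell blocks; B is the adversary built by the
% reduction against f's multi-user PRF security under out-leakage.
\mathrm{Adv}^{\mathrm{mu\text{-}prf}}_{\mathrm{AC}[f,\mathrm{out}]}(A)
  \;\lesssim\;
  \ell \cdot \mathrm{Adv}^{\mathrm{mu\text{-}prf\text{-}leak}[\mathrm{out}]}_{f}(B)
  \;+\; (\text{a term governed by } \delta_{\mathrm{out}}),
\qquad
\delta_{\mathrm{out}} \;=\; \Delta\!\big(\mathrm{out}(U_c),\, U_r\big).
% No explicit dependence on the number of users appears.
% For truncation to the first r bits, out(U_c) is exactly uniform, so delta_out = 0.
```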
So instead, in order to have a good comparison, we move to the ideal model, for example by instantiating the compression function with a public random function, or with Davies-Meyer in the ideal-cipher model. On top of this, it gives us at least some meaningful values to assess security against generic attackers, and it also allows us to compare the construction in this model with other constructions. I'm going to focus now on the random compression function model, and the question we want to address, in order to instantiate the theorem, is: how large is this advantage in this model? By this I really mean we are in a world where there is a public random compression function in the sky that everyone can access, including the adversary, and we use it to instantiate the construction. In the final minutes of this talk I'm going to focus just on the case where the output function is very simple: plain truncation that keeps the first r bits of the state. This will make things simpler.

Just to make the model clear again: for the standard PRF security of the compression function, we consider a setting where we choose the compression function uniformly at random, and a key, and the attacker can make queries to the compression function under this key, with a budget of q such queries, and can also make direct queries to the compression function, choosing both arguments, with a budget of q_f such queries, before making a decision. We compare this to a setting where the compression function under the key is replaced by a random function with matching parameters. It is easy to prove in this setting that the best advantage of an adversary is bounded by q_f over 2^c, where c is the size of the first argument, the chaining input. This is just because, as a bound, you can take the probability that the distinguisher makes a direct query that contains the secret key as its first argument; this probability is very easy to bound, and it depends only on q_f and not on q, which is important. That's a standard argument. Now what happens if we actually have leakage? If out(K) is additionally given to the adversary, the analysis carries over very naturally. The adversary now learns r bits of the key but still has uncertainty about the remaining c - r bits, so all that happens is that the denominator in the bound changes from 2^c to 2^(c-r). That's a very easy argument, but the interesting things happen when we move to the multi-user setting. These were all single-user statements, and this actually becomes really interesting, because in the leak-free case it turns out that, if you want a corresponding multi-user bound where the attacker makes q queries distributed over at most u users, so u independent keys, the best you can really do is apply the hybrid argument, and it is essentially tight: the advantage increases by a factor of u, plus there is a u²/2^c term to account for collisions among the keys.
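Again in schematic shorthand, the ideal-model statements just mentioned look roughly like this, with constants omitted.

```latex
% Ideal-model bounds for a random compression function
% f : {0,1}^c x {0,1}^b -> {0,1}^c, with q construction queries, q_f direct
% queries to f, u users, and out = truncation to the first r bits.
% (Schematic; constants omitted.)

% Single user, no leakage: the attacker must hit the secret chaining key.
\mathrm{Adv}^{\mathrm{prf}}_{f}(A) \;\lesssim\; \frac{q_f}{2^{c}}.

% Single user, with out(K) leaked: r bits of the key are known.
\mathrm{Adv}^{\mathrm{prf\text{-}leak}}_{f}(A) \;\lesssim\; \frac{q_f}{2^{c-r}}.

% Multi-user, no leakage: hybrid argument, essentially tight.
\mathrm{Adv}^{\mathrm{mu\text{-}prf}}_{f}(A) \;\lesssim\; \frac{u\, q_f}{2^{c}} + \frac{u^{2}}{2^{c}}.
```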
So in the worst case we would expect the same to happen under leakage: the hybrid argument gets applied, and essentially we get no benefit from having a multi-user PRF assumption in our theorem. But luckily, and maybe somewhat surprisingly, it turns out that this is not true. We provide a direct analysis of the multi-user PRF security of a random compression function under leakage, and it turns out that you can heavily beat the hybrid argument by proving a bound directly; in particular, security only mildly decreases as you increase the number of users. The leading term in our advantage bound is very similar to the single-user bound, up to a multiplicative factor which is not excessively large, so there is almost no security loss. There is another term, which is by far not the leading term, and which is actually the same one we got in the leak-free case. This bound is tight, I should have mentioned. And it allows us to instantiate the advantage term in our main standard-model theorem and get a bound in the ideal model.

Now, you don't have to read and parse the whole bound, it's pretty complicated, and this is just for bit truncation, but the key point is this: if you compare it with an NMAC bound for the same problem, which is not published anywhere but which you can compute (it's pretty folklore), you will see that the first term of the bound is pretty similar, so there is not much difference. The second term is the one that is actually the leading term, or can potentially be dangerous, because it has a much smaller denominator, so it could be larger. But the great thing, and this is exactly what you would not get by applying the indifferentiability bound directly, is that in this term, if you look at the numerator, all terms involving query numbers are linear in the number of queries: there is a q and there is a q_f, divided by 2^(c-r). So it could potentially be larger, but there are no squared terms up there. And so in fact, if you are in a setting where, for example, r is smaller than c/2, which is very common (otherwise you would use a smaller hash function to start with), then there is really not much loss, and essentially security holds up to the same level. And if we go back to our original application, where we had reduction mod p, which was the case for the signature scheme, we can apply our theorem generically and get a similar bound. I'm not going to go into details, but the main message is that everything is fine.

Okay, so, to wrap up with some concluding remarks. First of all, there is a practical implication of this work: it provides the first validation of the PRF construction used inside a very popular signature scheme, which had not been validated before. Also, overall, the construction really provides a simpler and more efficient alternative to HMAC and NMAC in settings where it can be applied, for example where truncation is possible, and it has comparable security to HMAC and NMAC in this setting. And I think the other important thing, which is not explicitly on the slides but which I do want to stress, is that there is really an interesting conceptual point here: no matter whether you target multi-user security (and I would argue you probably should) or single-user security, there is real value in making a multi-user security assumption instead of a single-user one, as we are used to, because it can give sharper numbers for the underlying assumption and therefore stronger concrete security results.
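Purely as an illustration of this linear-versus-squared point, and with parameter values I am picking myself rather than numbers from the slides, one can plug the two kinds of terms into a few lines of Python and see why r at most c/2 keeps the new term harmless.

```python
# Illustrative comparison of bound terms (my own toy numbers, not the paper's).
# c: chaining-value size in bits, r: truncated output size, q: construction
# queries, qf: direct compression-function queries, u: number of users.
from math import log2

c, r = 512, 256            # assumed: a 512-bit state truncated to 256 bits
q = qf = 2**64             # generous adversarial budgets
u = 2**30                  # many users

linear_term   = (q + qf) / 2**(c - r)   # leakage term: linear in the query counts
birthday_term = q**2 / 2**c             # a typical squared (birthday-type) term
hybrid_term   = u * qf / 2**c           # what a naive hybrid argument would add

for name, val in [("linear/2^(c-r)", linear_term),
                  ("q^2/2^c", birthday_term),
                  ("u*qf/2^c", hybrid_term)]:
    print(f"{name:>15}: 2^{log2(val):.1f}")
# With r <= c/2 the linear term sits around 2^-191 here, far below any
# practical threshold; that is the "no squares in the numerator" point.
```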
Okay, that's everything I wanted to say, thank you. Thank you for the talk, and do we have any questions? I can see that we have one; there is a microphone coming from the back. So the motivation for all of this was that in this EdDSA scheme the randomness r gets generated using a PRF with a secret key for each user, and you want that to be tight; that was the motivation. So couldn't you achieve the same thing if, instead of generating r as a PRF of the secret key alone, you just add the public key to the PRF input? Then you would probably get a tight security reduction with a normal PRF. I see what you mean by tight, but this is different; there are two issues, right? With respect to tightness, I don't think this would be a problem: if you use a PRF and do what you describe, it's okay. Tightness you can get with many constructions; this is really about analyzing the actual construction that they use inside, and that's what motivated it. I'm not arguing that this is the only way you can get tightness; you can do it with any PRF construction by doing the trick that you mention. So the trick would be to add the public key to the PRF input? Yeah, and I think you probably have to do that anyway. But then you don't need a PRF in the multi-user setting, right? I mean, the whole idea of analyzing a PRF in the multi-user setting is that you have many keys coming from many users, but if you also hash in, or add, the public key to the PRF input, you don't have this issue, right? I see your point, but I think it's orthogonal to this: we are just taking the construction as a motivation and analyzing it in the multi-user setting. It might be that for the specific application you don't need as much, if you do it right; I agree with you. We have time for one more short question. Okay, I don't know, hopefully it's short. How do you compare this? I mean, it seems like, intentionally (or, I guess, as a designer, you just analyzed what somebody suggested), you're shooting yourself in the foot by giving away parts of the key for free. You know, there are these 2^(c-r)-type denominators; wouldn't it be better to ensure, say, prefix-freeness, or to use one of the much simpler solutions, so that you don't get the 2^(c-r)-type bound? It seems strange that you save, like, two hash calls or something like that, but you're giving up so much. Well, I mean, again, I'm not really the right person here, maybe Dan could answer it better, but from the practical standpoint, this is not the only example where this is done; you can make the same argument for HMAC. Someone could just use the cascade construction directly with prefix-free encodings, but people just didn't like using prefix-free encodings at all. From the theoretical standpoint of the bounds, yes, you could do prefix-free encodings and you would get the same application; we are really analyzing this concrete construction, which is more efficient. From the perspective of concrete security you can do plenty of other things, but, you know, that construction has been around (it was published in the same year as HMAC and NMAC) and I don't know if anyone was really using it. Right, so let's take any further questions offline.
Let's thank Stefano again for the talk, and we can now move to the second talk of this session. We have to wait a minute to mic up the speaker, and then we can continue. In the meantime, as you can see, the next talk will be on the influence of the message length on the security of PMAC. The authors of this paper are Atul Luykx, Bart Preneel, Alan Szepieniec and Kan Yasuda, and Atul is going to give the talk; he's just arriving at the stage. Okay, now it's good, okay.

So Stefano talked a lot about security bounds in his previous talk; that was a big theme throughout his presentation, exactly computing what the bounds were for AMAC, and I'm going to continue talking about bounds. Before I continue, let me step back a bit and look at the motivation: why are we so interested in the bounds? Well, when you're trying to pick parameters for cryptographic schemes, the security bounds are one of the most useful tools (sorry, I just need a slightly different presentation; there you go, I changed it last minute). So, the security bounds are one of the most useful tools we have in determining what parameters we can use and still remain secure. The security bounds tie together the adversarial resources, the scheme's parameters, and the scheme's properties. You also need to pick a particular confidence level: the maximum success probability that you are willing to grant the adversary. Tying all of these things together, and relative to some reasonable assumptions, you can make a graph like the one I'm showing on the right, where the axes represent the adversarial resources, like the number of queries or the message length, and you divide the resources into a kind of secure zone, where you know that if the adversary stays within it you'll be fine, and beyond it all bets are off. Security bounds are actually used in practice: standards organizations are looking at these. They need to decide whether to include a key-update function for GCM, based on numbers computed from the security bounds in the literature, and in the ISO standardization process they're now deciding whether a 48-bit block size is big enough for a block cipher, because Simon and Speck are going through the standardization process there and they might standardize a 48-bit block size; security bounds come into play there as well.

So let's look at a concrete example: EMAC. It's a very well-known PRF, and the way it works is you take your message, you chop it up into blocks, you take the first block and process it through a random permutation (in practice the random permutation is a keyed block cipher), then you XOR the output of the permutation into the next block and continue like this; the first part there is what's often called CBC-MAC. The last part is another permutation call, usually with an independent key, and that's why it's called encrypted CBC-MAC, or EMAC. Now, the initial security bounds for EMAC were of this form over here. Let me just walk through what these symbols mean: q is the number of queries the adversary can make, ℓ is the length of the messages the adversary can make, and n is the block size of the block cipher or permutation. You get this polynomial in the adversary's resources, divided by an exponential, and on the right you have your confidence level.
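As a quick illustration of the data flow just described, here is a minimal EMAC sketch; the permutations are toy stand-ins for block ciphers under two independent keys, so this only shows the structure, not a real instantiation.

```python
# Minimal EMAC sketch (encrypted CBC-MAC); illustrative data flow only.

def cbc_mac(perm, blocks):
    """Plain CBC-MAC: XOR each block into the chain, then apply the permutation."""
    state = 0
    for b in blocks:
        state = perm(state ^ b)
    return state

def emac(perm1, perm2, blocks):
    """EMAC = CBC-MAC under key 1, then one extra permutation call under key 2."""
    return perm2(cbc_mac(perm1, blocks))

def toy_perm(key, n_bits=32):
    # Toy stand-in (NOT cryptographic): an affine bijection on n-bit values.
    def p(x):
        return ((x * 0x9E3779B1) % 2**n_bits) ^ key
    return p

tag = emac(toy_perm(0x1111), toy_perm(0x2222), blocks=[10, 20, 30])
```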
So you can compute some numbers using this equation. Let's say we don't want adversaries to succeed with probability more than one in a million, which is 1/2^20, and let's say your messages are all going to be about a kilobyte long. Now what happens when you plug AES, PRESENT, or a 32-bit block cipher into EMAC, with block sizes of 128, 64 and 32 bits respectively? Well, you can see how many queries you can make on the right-hand side over there: for AES, because it has a large block size, you can make 2^51 queries; with PRESENT it's reduced to 2^18.5; and with a 32-bit block size you're restricted to four queries with EMAC before going into the insecure zone. I'm more of a visual person, and these individual data points are a bit hard to picture, so what I did was plot this bound over there. What's not displayed here is that I picked a 32-bit block size and assumed the confidence level is set to one in a million. Along the horizontal axis the number of queries the adversary makes increases, and along the vertical axis the message length. So let's say your messages are 2^6 blocks long and you want to make two queries; well, then you're already in the insecure zone, and like this you can see what happens with various parameters: if you're making 2^6 queries, then all of a sudden you're straight away right at the edge of the insecure zone. So this is quite a serious restriction with a 32-bit block cipher.

As a result, people have looked at EMAC more closely and come up with better bounds; here you notice that all of a sudden you can make much longer queries, and it's also a log-log graph, so things increase exponentially as you go along the vertical axis. People continued looking for better bounds; this bound over here is actually better for bigger block sizes, but due to the constants in the bound it doesn't turn out to be better in this case. Okay, this is all the research we have right now on EMAC, and it's still a serious restriction if you're using a 32-bit block cipher. The question is: could we just ignore these and go beyond these bounds? Well, for that we need to look at attacks. You've got this entire red zone over here, which is the birthday bound; there's a Preneel-van Oorschot attack which shows that you cannot go anywhere over here, because then you're in an insecure zone. And if you look at the papers behind these bounds, you can also find an attack based on the message length, so you can't go anywhere up here either. That limits EMAC to being used within this first quadrant. The remaining area of the first quadrant is still unknown, but if you plug a random function instead of a random permutation into EMAC, then this entire zone gets filled in red as well. So, given these limitations of EMAC, what can you do if you're really dead set on using a 32-bit block cipher? What are your options? The only thing left to do is to switch schemes. Here, for clarity, I wrote down EMAC; now, if you want to go beyond it to the right, there are a few schemes which allow you to do that: there's the sum of CBCs, there's PMAC_Plus and 3kf9; these are also called beyond-birthday-bound constructions.
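Looping back to the query numbers quoted above, here is a small computation in the same spirit; I'm using a generic birthday-style bound of the form ℓ²q²/2ⁿ as a stand-in, so the outputs only reproduce the flavour of the slide's numbers, not their exact values, since the real EMAC bounds have different polynomials and constants.

```python
# How many queries stay below a target success probability, for a schematic
# CBC-MAC-style bound of the form (l^2 * q^2) / 2^n (stand-in only).
from math import floor, log2, sqrt

def max_queries(n_bits, block_bytes, msg_bytes=1024, eps_log2=-20):
    """Largest q with l^2 * q^2 / 2^n <= 2^eps_log2, l = blocks per message."""
    l = msg_bytes // block_bytes                 # message length in blocks
    q_sq = 2.0**(n_bits + eps_log2) / l**2       # solve for q^2
    return floor(sqrt(q_sq))

for name, n, blk in [("128-bit block", 128, 16),
                     ("64-bit block", 64, 8),
                     ("32-bit block", 32, 4)]:
    q = max_queries(n, blk)
    # For the 32-bit case this toy bound already collapses to essentially no queries.
    print(f"{name}: about 2^{log2(max(q, 1)):.1f} queries before the bound is exceeded")
```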
Besides the beyond-birthday-bound constructions, there are also constructions that allow you to query much longer messages, going basically vertically up in the graph: there's PMAC with Parity, LightMAC and PMACX. Now, the funny thing is, I don't know of any constructions in that last quadrant over there, but you could probably use techniques from the other two quadrants to get into it. In either case, the focus of this research is understanding this message length dependence: what's happening, and what are the techniques used over here. In particular, if you look at these three constructions which alleviate the message length restriction, they're all a pretty similar style of PRF, what I'm calling XOR-style PRFs. The way they work is they take the message and from it compute a whole bunch of block cipher, or permutation, inputs, which I label X1, X2, X3, X4, because it's not as simple as in EMAC, where you just chop the message into blocks; you're actually chopping it into blocks, then reusing some blocks and XORing masks into them. Then they compute the permutation outputs and XOR all of those together. What I haven't shown here is that there is also an output transform afterwards, but that's not so important right now. So this is completely different from EMAC's cascade style, and this is how they're able to alleviate EMAC's message length bound.

Notice that two of these constructions use the name PMAC in their title; that's because the way they compute their block cipher inputs, the X values, was inspired by this construction called PMAC, which is short for parallelizable MAC. The way PMAC works is that it starts like EMAC: it chops your message into message blocks. Then it computes these masks over here; the masks are computed by taking some constants and multiplying them with an intermediate key, which is simply the output of the block cipher on the input zero. It masks all the block cipher inputs, then does exactly what I described over there, and finally PMAC applies an output transform to this result, which I'm calling PHASH from now on. The advantage that PMAC has over PMAC with Parity and PMACX is that it's a lot more efficient: it makes just one block cipher call per message block, whereas PMACX and PMAC with Parity, or even LightMAC, do at least two or three block cipher calls per plaintext block. Also, to be clear, I described PHASH here very generically: the multiplication is done in some finite field, and the actual instances used in PMAC (there are two known instances, one using Gray codes) basically amount to picking what these constants are. Now you can also ask what PMAC's bounds look like, because it's used for a lot of these high-security extensions. You can see over here that, just like EMAC, it's still in the first quadrant: with PMAC you still have this birthday bound, the Preneel-van Oorschot attack, but up until this point a message length attack was non-existent; nobody knew of one. So you have all these constructions which alleviate the message length dependence and which are based on PMAC, but we don't even know if PMAC itself actually achieves the bounds up there. The question is basically: are PMAC's bounds up there or not, or is there an attack? This is the basic research question that we set out to solve, or at least explore: can PMAC be moved up there?
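Here is a rough sketch of the PHASH layer in the generic form just described; the reduction polynomial, the constants and the toy permutation are my own placeholders, since the real instances derive the masks from Gray codes or powering-up over a specific field and use an actual block cipher.

```python
# Generic PHASH sketch: mask each message block with constant_i * L (finite-field
# multiplication, L = E_K(0)), feed it through the permutation, XOR the outputs.

def gf2n_mul(a, b, n=32, poly=0x8D):          # assumed reduction polynomial (toy)
    """Carry-less multiplication of a and b, reduced modulo x^n + poly."""
    r = 0
    for i in range(n):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * n - 2, n - 1, -1):      # reduce back below degree n
        if (r >> i) & 1:
            r ^= (poly | (1 << n)) << (i - n)
    return r & ((1 << n) - 1)

def phash(perm, blocks, constants, n=32):
    L = perm(0)                                # intermediate key = E_K(0)
    acc = 0
    for m, c in zip(blocks, constants):
        x = m ^ gf2n_mul(c, L, n)              # masked block cipher input
        acc ^= perm(x)                         # XOR the permutation outputs
    return acc                                 # PMAC then applies an output transform

def toy_perm(x, n=32):                         # stand-in for a keyed block cipher
    return (x * 0x9E3779B1 + 0xBEEF) % 2**n

digest = phash(toy_perm, blocks=[5, 6, 7], constants=[1, 2, 3])
```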
I'll now go on to explain the results in the paper. We actually focus on PHASH. Why do we focus on PHASH? Finding a collision in PHASH means that you can find a collision in PMAC, which results in an attack on PMAC; hence, if we can find a collision probability in PHASH which increases with the message length, then we have our result. What we concluded was basically that the message length dependence changes according to the masks. This is a deceptively simple statement; if you forget everything else from the presentation, at least remember this. But what does it mean? Over here I've drawn a big circle which is supposed to represent all the PHASH instances; that means you can change what the finite field is and what the constants, the masks, are, so you have Gray codes in there, and you've got powering-up in there as well. What we saw was that infinitely many of these instances have a collision probability upper bounded by something on the order of 1/2^n, which means their bounds do not degrade with the message length; and for an arbitrary PHASH instance, the problem of generically finding a collision with high probability is computationally hard, based on a conjecture explained in the paper. So basically we're in one of these two settings with PMAC in general; that's what I mean by "it depends on what kind of masks you pick". And if you look at the concrete instantiations of PMAC, well, we found an attack on PMAC with Gray codes which does exhibit a dependence on the message length. So if we go back to the picture that I drew earlier on, this arrow over here: we've basically provided evidence that there are instances of PMAC out there which could be up with the other constructions, but on the other hand there are cases, like Gray codes, where you do have message length dependence, and that is not good.

So then perhaps I can briefly explain some intuition as to why this is the case; I've explained the motivation and the results, so now, briefly, what's happening. It's useful to compare this PHASH with XOR hash. XOR hash is another construction, the one used in LightMAC, and it's a lot simpler. The way it works is you take your message and divide it into half-blocks, and the remaining half of each block you use as a counter, so you're actually sacrificing half of the input of the permutation for this counter, which means you're going twice as slow as PMAC's PHASH over here. So what happens in a collision for XOR hash? You get a long sequence of block cipher outputs which XOR to zero. Why is that the case? Here's one message, and another, and you want those two hashes to be equal; since each is just the XOR of a bunch of block cipher, or permutation, outputs, you can XOR all of them together and ask whether that equals zero. One thing we can observe in XOR hash is that if you take this first block cipher input, it is never going to equal any of the other three block cipher inputs, because of the counter, and similarly within the second message; so none of these are ever going to collide with each other, because of the forced separation due to the counter, and you can make the same conclusion for the first message against these message blocks over here. So there are a lot of inputs which cannot collide with each other; the inputs that can collide are the ones with the same counter: the first message block over here can collide with this one, the second one can collide with that one, but the only way they can collide is if those half message blocks are equal to each other, in which case the two outputs cancel each other out in the XOR.
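For comparison, here is the XOR-hash idea in the same toy style; the half-block size and the permutation are my own choices, made only to show how the counters force the inputs apart.

```python
# XOR-hash sketch (the hash layer used in LightMAC-style constructions):
# each permutation input is [counter || message half-block], so inputs at
# different positions can never collide. Toy sizes, illustrative only.

HALF = 16                                      # assumed half-block size in bits

def xor_hash(perm, half_blocks):
    acc = 0
    for i, m in enumerate(half_blocks, start=1):
        x = (i << HALF) | m                    # counter in the top half, data below
        acc ^= perm(x)                         # XOR the permutation outputs
    return acc

def toy_perm(x, n=32):                         # stand-in for a keyed block cipher
    return ((x * 0x2545F491) % 2**n) ^ 7

# Two messages that agree in position 1 but differ in position 2: the outputs
# for position 1 cancel in the XOR, but an unpaired output remains, so the
# two hashes cannot be equal for a permutation.
h1 = xor_hash(toy_perm, [0xAAAA, 0x1111])
h2 = xor_hash(toy_perm, [0xAAAA, 0x2222])
assert h1 != h2
```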
But unless the two messages are exactly the same, at least one block cipher output will remain unpaired in the XOR, which means that a collision is very unlikely to happen, because the probability that this XOR equals zero is very low. So XOR hash removes its dependence on the message length by explicitly forcing the block cipher inputs to be distinct. Obviously you can't do this with PHASH: there is no explicit forcing of the block cipher inputs to be distinct; instead you try to make sure they are distinct from each other using this masking with the secret value. Now, if you use a naive approach, you can just try to bound the probability that no pair of block cipher inputs collides, which means you get about ℓ(ℓ-1)/2 possible colliding pairs, and as long as you don't have any collisions you won't have an XOR to zero; but then you get a really bad bound, and one that depends on the message length. An observation that has already been made is that you can do better than that. Let's say that this input collides with this one, and this one collides with this one: then these two outputs cancel out, and these two cancel out. And let's say this group of three over here all collide with each other: two of them cancel out, but one of them remains, which means you still have at least one block cipher output in the XOR, and hence a very low probability that the whole thing equals zero. So one way to bound PHASH collisions is to compute the chance that you always have at least one odd group: every time you have an odd-sized group of colliding inputs, even when all the outputs are XORed together, at least one block cipher output is left over, making a collision very unlikely.

The approach we take in the paper, to analyze how likely these odd or even groups are to occur, is the following: think of these blocks as elements of the finite field; you take each message block together with its constant and map them to a point in an affine plane, and you do that for all of them. The only time that, under a particular key, they all cancel each other out is when the corresponding points lie on lines of the right slope, and the adversary is going to try to maximize the number of points lying on a common line for some slope. So that was, briefly, the intuition for why this is complicated. So again: PMAC's message length dependence is non-trivial, and there are a whole bunch of open problems left. What happens with the powering-up masks? What are the optimal masks that you can use with PMAC? Also, we showed that if you have a collision in PHASH, then you definitely have an attack on PMAC; now, what about the opposite implication: if you have a PRF attack against PMAC, what does that mean for the masks? Thank you for your attention.
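As a tiny illustration of the odd-group argument, with hypothetical inputs rather than data from the paper, one can check which permutation outputs survive the XOR by looking at multiplicities.

```python
# Odd-group intuition: collect the block cipher inputs from both messages and
# count multiplicities. Inputs appearing an even number of times cancel in the
# XOR of outputs; any input with odd multiplicity leaves a "fresh" permutation
# output behind, so the XOR is zero only with probability about 2^-n.
from collections import Counter

def surviving_inputs(inputs_msg1, inputs_msg2):
    counts = Counter(inputs_msg1) + Counter(inputs_msg2)
    return [x for x, c in counts.items() if c % 2 == 1]

# Hypothetical masked inputs (just labels for the picture on the slide):
msg1 = ["a", "b", "c"]
msg2 = ["a", "b", "d"]
print(surviving_inputs(msg1, msg2))   # ['c', 'd'] -> two unpaired outputs remain
# A PHASH collision therefore requires either every group of equal inputs to
# have even size, or the leftover permutation outputs to XOR to zero by chance.
```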
Thanks for the talk. We still have time for a question or two, if there are any questions; there is one. So, what is the exact bound you get for Gray codes? You said it's non-trivial. For Gray codes we have an actual attack: you can find two messages which collide. You said message length dependence, so what is the bound? So the bound is: if you have a message whose length is a power of two, say 2^k, then the bound is literally that power of two divided by 2^n; so if your message length is 2^k, it's roughly 2^k over 2^n, ignoring some constants, so basically ℓ over 2^n. Okay. Any other questions? If not, then let's thank Atul again. Thank you.