My name is Andrew Morgan. In this talk, I'll be presenting work done by myself in conjunction with Rafael Pass and Elaine Shi, which proves a lower bound on the security loss of black-box reductions from adaptively multi-user secure message authentication codes and pseudorandom functions to standard assumptions. Most of the audience is hopefully familiar with message authentication codes, or MACs, which are a secret-key primitive allowing messages to be securely tagged in such a way that an adversary without access to the key, even after being able to query tags for chosen messages, is unable to forge an accepting tag for a new message. This allows messages to be securely authenticated by verifying, using the key, that the tag is indeed correct and corresponds to the message that was received. Pseudorandom functions, or PRFs, are key-indexed families of functions which are closely related to the tag generation algorithm for MACs. The definition of pseudorandomness requires that the output of the function, given a randomly chosen key, should be computationally indistinguishable from that of a truly random function by an adversary which may query the function on chosen inputs. In fact, it is relatively straightforward to see that MACs can be constructed from PRFs: one can easily show that this notion of pseudorandomness implies unforgeability, and hence a PRF can be used directly as the tagging algorithm for a MAC. Perhaps unsurprisingly, MACs and PRFs are some of the most ubiquitous and widely used cryptographic primitives in practice. PRFs are useful as a replacement for truly random functions in many protocols, since truly random functions are too expensive to instantiate. Meanwhile, MACs are useful in a great variety of protocols where authentication is needed, a notable example being the TLS protocol for authenticated key exchange.
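To make the PRF-to-MAC construction just described concrete, here is a minimal Python sketch. It uses HMAC-SHA256 as a stand-in PRF; that choice, and all the function names, are purely illustrative assumptions on my part, not part of the work being presented.

```python
import hashlib
import hmac
import os

def keygen() -> bytes:
    """Sample a fresh secret key."""
    return os.urandom(32)

def tag(key: bytes, message: bytes) -> bytes:
    """Deterministic tagging: the tag is just the PRF evaluated at the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, t: bytes) -> bool:
    """Because tagging is deterministic, verification simply recomputes the tag."""
    return hmac.compare_digest(tag(key, message), t)

# Usage: a tag verifies under the right key and message, and nothing else.
k = keygen()
t = tag(k, b"hello")
assert verify(k, b"hello", t)
assert not verify(k, b"goodbye", t)
```

Note that this sketch is exactly the kind of deterministic, stateless MAC the talk's lower bound applies to.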
The fact that these primitives are used in settings with potentially tens or hundreds of millions of concurrent users highlights one of the shortcomings of the classical definitions of their security. These definitions only consider an adversary which can access and try to break a single instance of the primitive, that is, a single key. But in practice, an adversary would be able to access many instances, and we would like to guarantee that this wouldn't help them break any of them. To capture this, we'll be considering a stronger definition of security originally proposed by Bellare et al. in 1996 for PRFs and considered in several works since. This definition, which we'll call adaptive multi-user security, gives the adversary the ability to make queries to L(n) different instances of a PRF and, additionally, to ask the challenger for the keys of up to all but one of those instances before finally trying to distinguish a remaining instance of the PRF from a truly random function. We can easily think of an analogous definition for MACs, where the adversary, after making tagging and key-opening queries to many different instances, must forge a tag for an unqueried message on an instance whose key has not been revealed. The good news about this stronger definition is that it is quite easy to achieve. In fact, any PRF or MAC which satisfies the classical definition of security already satisfies the respective definition of adaptive multi-user security. This can be seen through a very simple reduction where an adversary against single-user security embeds its challenge into a randomly selected one of the L(n) instances in the multi-user security game. That way, if the multi-user adversary happens to choose that instance as the one to break, the single-user adversary will break the security of its own instance as well.
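The guessing reduction just described can be sketched in a few lines of Python. All the interfaces here (a replayable challenge oracle, a toy PRF) are my own simplifications for illustration, not the paper's formalism: the single-user adversary picks one instance at random, embeds its own challenge there, and simulates every other instance with keys it samples itself.

```python
import random

def guessing_reduction(multi_user_adversary, challenge_oracle, L, keygen, prf):
    """Run a multi-user adversary, embedding our single-user challenge
    into one of the L instances, chosen uniformly at random."""
    target = random.randrange(L)
    keys = {i: keygen() for i in range(L) if i != target}

    def query(i, x):
        # Queries to the guessed instance go to the real challenge oracle;
        # every other instance is simulated with a key we sampled ourselves.
        return challenge_oracle(x) if i == target else prf(keys[i], x)

    def open_key(i):
        if i == target:
            # The adversary opened the embedded instance, so our guess was
            # wrong and the reduction fails: this is the 1/L(n) success factor.
            raise RuntimeError("bad guess")
        return keys[i]

    # Whatever the adversary outputs about the target instance is forwarded
    # as our own answer in the single-user game.
    return multi_user_adversary(query, open_key)
```

The reduction succeeds only when the adversary succeeds and the guess was right, which is exactly the 1/L(n) loss discussed next.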
So it seems fairly straightforward to obtain adaptive multi-user security by just constructing a primitive which is classically secure and relying on the reduction from multi-user to single-user security. However, as it turns out, this strategy leaves a lot to be desired in terms of the concrete level of multi-user security we can obtain given a certain level of single-user security. To reason about this, let's first quickly look at how we can quantify concrete security in the first place. Consider some black-box reduction R from a primitive, such as our notion of adaptively multi-user secure MACs, to some standard assumption C. Security of the primitive follows from the assertion that whenever there exists an adversary A that breaks the security of the primitive, the reduction R can use A as a black box to efficiently break the standard assumption C; and so, by contrapositive, security of C implies that no such adversary may exist. We'll think of the concrete security of a primitive or assumption as the amount of work, or expected time, needed for some machine M to break it, that is, the time taken by M divided by the probability that M succeeds in the security game. The most efficient reductions, then, are those where there's only a constant-factor difference between the work required by the adversary and by the reduction, since, in that case, by the contrapositive, guaranteeing security of the primitive against, say, T-time adversaries requires only security against c·T-time adversaries for the assumption. We'll call these tight reductions. If instead there's a poly(n)-factor difference, then we have what's known as a linear-preserving reduction, and weaker than that is the notion of a polynomial-preserving reduction, where the work needed to break the primitive is polynomial in the work needed to break the assumption. Anything worse than that is called weakly preserving.
The important takeaway here is that tight and linear-preserving reductions preserve asymptotic security. For instance, with a linear-preserving reduction, security against poly(n)·2^(n/3)-time adversaries for the assumption implies similar security against poly(n)·2^(n/3)-time adversaries for the primitive as well, which is ideal. With a polynomial-preserving reduction, on the other hand, we might need, say, 2^n-time security for the assumption, which is probably unachievable, to guarantee the same poly(n)·2^(n/3)-time security for the primitive. Since we care a lot about the level of concrete security of a primitive in practice, this really makes only linear-preserving or tight reductions desirable for practical use. With that in mind, let's look back at our guessing reduction to see how well it preserves concrete security. Unfortunately, as it turns out, the answer is: not very well. We can see that the single-user adversary in this case requires L(n) times as much work as the multi-user adversary, since it requires the multi-user adversary to both succeed and pick the correct instance to break. So unless the number of keys L(n) is a priori bounded, this reduction isn't linear-preserving and doesn't necessarily preserve asymptotic security. Even from a practical perspective, the security loss is extremely significant when we consider the fact that there could be 2^30 or 2^40 instances in use at once. So the natural question to ask is: can we get more efficient reductions than this trivial guessing reduction? Specifically, towards constructing adaptively multi-user secure MACs or PRFs from standard assumptions, can we either find a better multi-key to single-key reduction than this one to avoid the security loss, or find an alternative efficient reduction that bypasses single-key security entirely?
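As a back-of-the-envelope illustration of this loss (the numbers here are mine, chosen only for illustration): measuring work as running time divided by success probability, the guessing reduction's work exceeds the multi-user adversary's by exactly the number-of-instances factor.

```python
def work(time_taken: float, success_prob: float) -> float:
    # "Work" = expected time to break: running time / success probability.
    return time_taken / success_prob

adversary_work = work(2**40, 1.0)      # a 2^40-step adversary that always wins
L = 2**30                              # a plausible number of live instances
# The guessing reduction runs the adversary once, but only wins when its
# 1-in-L guess of the target instance is correct:
reduction_work = work(2**40, 1.0 / L)
assert reduction_work / adversary_work == L   # an L-fold (here 2^30) security loss
```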
Actually, Bellare et al. in 2016 answered this question positively by constructing a PRF with a nearly tight reduction. However, their reduction relies on the random oracle model, whereas in our work we will focus on reductions in the plain model. Meanwhile, for MACs in the plain model, Bader et al. answered this question in the positive by constructing an adaptively multi-user secure digital signature scheme with a linear-preserving reduction. Notably, however, the signing algorithm in their construction is randomized. Ideally, we would like to have a similar result for deterministic signatures or MACs as well, since randomness is fairly expensive to generate and represents a significant cost for a primitive as widely used and time-sensitive as authentication. Indeed, for this and other reasons, we note that virtually all MACs used in practice have deterministic tagging algorithms, to say nothing of PRFs, which are deterministic functions by definition. To focus further, we note that PRFs also clearly store no internal state, another feature shared with virtually all practically used MACs, albeit with the notable exception of GMAC. So, narrowing down to the most practically relevant notion of deterministic and stateless MACs, we can ask our question again. And this time, in contrast to Bader et al., the earlier work of Chatterjee et al. suggests a negative answer to this question, ruling out generic black-box reductions, that is, reductions that apply to any MAC, from adaptive multi-user security to single-user security of MACs. But this still leaves open both the question of whether we can find a reduction for a specific MAC, and the question of whether we can bypass single-user security and base adaptive multi-user security on standard assumptions directly. In this work, we provide a strong negative answer to this question with our main result.
Our main result is a lower bound that rules out any linear-preserving black-box reduction from the adaptive multi-user security of a deterministic and stateless MAC to any bounded-round assumption. Specifically, we show that any assumption for which there does exist such a reduction is inherently insecure, and demonstrate how to break it with a meta-reduction that uses the original reduction as a black box. Moreover, this approach will lower-bound the concrete security loss of any such reduction by Ω(√L(n)). This rules out linear-preserving reductions, since L(n) can be arbitrarily large, whereas a linear-preserving reduction requires the security loss to be bounded above by an a priori fixed polynomial. Whether this bound is tight remains an open question. We also show, as a corollary in the full version of our paper, that the same bound applies to reductions from the adaptive multi-user security of digital signature schemes and PRFs to standard assumptions. The key takeaway, then, is that if we want constructions of MACs with efficient security reductions for the case of concurrent security with many users, we need to look towards either randomized constructions, such as that of Bader et al., or stateful constructions, such as GMAC. In the rest of the presentation, I'll go into more technical detail about the meta-reduction paradigm and the approach we use to prove our main result. We prove our lower bound through, as I just said, the meta-reduction paradigm, which was pioneered by Boneh and Venkatesan in 1998. Let's say we have some primitive Π for which we want to rule out reductions to standard assumptions. Next, imagine a perfect adversary A which breaks Π with probability one, but does so inefficiently, say in super-polynomial time. When we consider a black-box reduction R that uses A to break an underlying assumption C, R will successfully break C with some significant, non-negligible probability p(n).
Of course, since R uses the inefficient adversary A, it definitely doesn't break C in polynomial time, so we're not quite done. If we instead create what's known as a meta-reduction B, which efficiently simulates the interaction between R and A with probability close to one, say 1 - q(n), then B must break C with probability at least p(n) - q(n), since its messages to C are, except with probability q(n), identically distributed to those from the real reduction R, which breaks C with probability p(n). What this means is that if there exists any reduction R such that p(n) is large enough for p(n) - q(n) to be non-negligible, the meta-reduction B breaks C, showing that if C is secure, no such R can exist. Our starting point towards our main result is a meta-reduction from the prequel to this work, done by myself and Pass in 2018, which rules out linear-preserving reductions from unique digital signatures to standard assumptions. I'll talk through this at a high level before describing the non-trivial changes and additions needed to adapt it to our current result. For this meta-reduction, the ideal adversary A, on receiving a public key from the reduction, begins by making a large number of signature queries for random messages. We refer to this number as L(n), since these queries will ultimately be the analog of the key-opening queries to each instance in our final reduction. On receiving responses from the reduction R, A verifies using the public key that the responses are correct, aborting if not, and then uses brute force to forge a signature for a new message that likewise verifies with respect to the public key. It's fairly straightforward to see that this ideal adversary will break the security of the signature scheme with probability one. Of course, the next step is to create a meta-reduction B which efficiently simulates this interaction with probability close to one.
B starts by making the signature queries and verifying the responses just as A does, and will forge a signature for a new message m* if R gives valid responses. However, rather than brute-forcing the forgery, B attempts to cheat by extracting the forgery from R itself. B will rewind the interaction with the reduction R many times, sending it the forgery target m* in place of each one of the signature queries in turn, until one of two things happens: either R gives a valid signature as a response, in which case B sends that as its forgery, or B tries every possible query and fails to extract a response, in which case B will abort and thus fail to efficiently emulate the interaction between R and A. The key lemma from our prior work bounds the probability with which B fails to emulate the interaction by roughly O(M(n)²/L(n)), where M(n) is the number of instances of the adversary that can be run by the reduction, and L(n) is, as before, the number of signature queries made by the adversary. The proof is out of scope for this talk, but at a high level, it follows from a counting argument which bounds the number of sequences of messages for which B fails to emulate A during every possible rewinding, with extra allowances made for rewindings that could prove problematic to B's functionality or running time: for instance, those involving communication with a challenger for C, hence the need for an a priori bound r(n) on the number of rounds of communication of the assumption C. The key takeaway, like I mentioned before, is that if any black-box reduction R to some assumption C succeeds with probability non-negligibly greater than this bound when given the inefficient adversary A against unique signatures, then B breaks the assumption C in polynomial time, contradicting C's security.
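B's rewinding strategy can be sketched as follows; the interfaces here (a replayable `run_reduction` and a `verify` predicate) are my own simplifications of the actual interaction with R, intended only to show the shape of the extraction loop.

```python
def extract_forgery(run_reduction, queries, m_star, verify):
    """Rewind R, substituting the forgery target m_star into each query
    slot in turn, and return the first valid signature R produces for it."""
    for i in range(len(queries)):
        # Replay the interaction with m_star in place of the i-th query.
        rewound = queries[:i] + [m_star] + queries[i + 1:]
        responses = run_reduction(rewound)
        if responses is not None and verify(m_star, responses[i]):
            return responses[i]     # extracted a valid forgery for m_star
    return None  # every rewinding failed: B aborts, and emulation fails
```

If every slot fails, B cannot emulate A, which is exactly the failure event the key lemma bounds by roughly O(M(n)²/L(n)).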
To show how this implies a lower bound on the security loss of R, observe first that A succeeds with probability one, and that by the assumption that R runs at most M(n) instances of A, R in the worst case takes M(n) times as long to run as A does. We can then simply split the analysis into two cases. If R runs many instances of A, then we can just use the observation that the work ratio is at least M(n) to conclude that the work ratio is sufficiently large. Otherwise, if R runs few instances of A, we can still conclude that the work ratio is at least M(n) divided by the success probability of R, which by the bound we establish yields the same conclusion. And as before, since L(n) can be an arbitrarily large polynomial, this rules out all linear-preserving reductions from unique signatures to bounded-round assumptions. So, bringing this back to the case of adaptive multi-user security of MACs, a first attempt at a meta-reduction would be to try applying the technique we just described to the key-opening queries in the security game. Given L(n) different instances of a MAC, B could make key-opening queries to all but one of the instances in a random order, and then rewind, trying to query the key for the target instance in place of each of the other queries in turn until R responds correctly, finally using the extracted key to forge a tag for a random message. This sounds great at first, but if you think about it for a second, two problems emerge pretty quickly. First, unlike signature queries, which we could verify with the public key, we have no way to tell whether R is sending the correct responses to these key-opening queries from just the key alone, which is a problem when we need to extract the correct key for the target instance to produce our forgery. Second, and relatedly, we haven't discussed the ideal and inefficient adversary A yet.
A needs to be able to brute-force a forgery, which is impossible to do without some way to internally verify whether a guess is correct. What we do to provide this internal verification is as follows. We have A and B additionally make some number Q(n) of tagging queries on random messages to each instance before opening any keys. Then, once R responds to a key-opening query, we can check whether that key is consistent with the tagging queries we made. If it isn't, then the key is invalid and we can abort. Furthermore, A can use the queries made for the target instance to brute-force a key to use for its forgery. Unfortunately, this approach isn't perfect, since intuitively it's occasionally possible for an incorrect key to slip through and, by pure coincidence, agree with the right key on all of the Q(n) tag queries. We can, however, show that the chance of this coincidence is fairly small. Specifically, we do so through the following lemma: if some pair of keys produces the same output on our Q(n) tag queries, then it's overwhelmingly likely that they will agree on most messages. What this means is that if, during rewinding, the key recovered by B is consistent with the queries but somehow different from the one A brute-forced using the results of the queries, then it's likely that A and B will still end up with the same actual forgery. The proof is fairly straightforward: for any key pair that doesn't agree on most inputs, we can show that the probability of that pair agreeing on our Q(n) random tag queries is exponentially small, even when we factor in the fact that there may be up to 2^(2n) possible key pairs. The fact that we can actually deal with this issue is interesting in and of itself, since the one prior work that ran into this issue, specifically in the context of a meta-reduction for authenticated encryption, required key uniqueness as an explicit property of the primitive.
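A back-of-the-envelope version of this argument (my own toy model, not the paper's proof): if an incorrect key disagrees with the right one on a delta fraction of inputs, it survives Q(n) independent random consistency checks with probability (1 - delta)^Q(n), and even a union bound over all 2^(2n) key pairs leaves this negligible for modest Q(n).

```python
def false_accept_bound(delta: float, Q: int, n: int) -> float:
    # Probability that some "bad" key pair, disagreeing on a delta fraction
    # of inputs, nevertheless agrees on all Q random tag queries, union-
    # bounded over at most 2^(2n) candidate key pairs.
    return (2 ** (2 * n)) * (1 - delta) ** Q

# With n = 128, keys disagreeing on half of all inputs, and Q = 512 random
# tag queries, the bound is already astronomically small:
assert false_accept_bound(0.5, 512, 128) < 2 ** -200
```

Conversely, keys that agree on almost all inputs can survive the checks, which is exactly the "coincidence" case the lemma handles by showing such keys still yield the same forgery with high probability.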
That is, any two keys which agree on Q(n) queries must agree on all inputs. In contrast, we show that we can use additional queries to guarantee that keys will still agree on the forgery target with high probability, even without a strict uniqueness requirement. There's still an additional issue: there are some technical details concerning a rewinding lemma, which requires that the forgery output by A or B be unique all of the time, rather than just with high probability. At a high level, we need to show that the reduction R attempting to rewind the adversary A is pointless and will yield no additional information, as rewinding by R can potentially blow up the running time or the failure probability of the meta-reduction B in an uncontrollable way. However, if A or B could potentially output multiple different forgeries for a single set of tag queries, this is clearly not the case, as R could attempt to rewind A to try to extract different forgeries from the same instance of A. So, to deal with this, we need a variant of B that does guarantee a unique forgery. Now, we can't construct an efficient meta-reduction that does this, due to the key-uniqueness issue we were having before. However, suppose we consider an inefficient meta-reduction, say B', that does as follows. First, it acts as B does, making tag queries, opening everything but the target instance, and then rewinding to try and extract a key. However, when it does recover a correct key, B' will brute-force all possible keys that agree with the tag queries for the target instance, so that it can choose a deterministic one of those keys, say the lexicographically first of them. And since A is already an inefficient adversary, we can have A do this as well, so that B' and A will do the same thing as long as the rewinding works and any key is recovered.
Furthermore, if the keys recovered by A and B agree with respect to the randomly generated forgery input x*, then B and B' are identical, since B' recovers the same key that A will. So, we can use a hybrid argument to find the probability with which B fails to emulate A, by first going from A to B' and considering the chance that they behave differently, and then doing the same thing from B' to B. Going from A to B', now that B' gives us a unique forgery, is actually a straightforward application of the lemma from the meta-reduction for unique signatures. The only difference is that we're rewinding the order of the L(n) key-opening queries this time, rather than sequences of L(n) + 1 messages, so we lose the constant in the denominator. And going from B' to B is just an application of the key-uniqueness lemma I went over a few slides ago on each of the M(n) different instances of the adversary A that R runs. Furthermore, because Q(n) is a parameter, we can set it large enough in comparison to L(n) that the chance of B' and B behaving differently is minimal compared to the difference between A and B'. So ultimately we get a bound on the probability that B's emulation of A fails. From here, we can essentially use the same argument as we used for unique signatures to bound the work ratio, albeit with the minor caveat that A no longer succeeds with probability exactly one as it did before. The reason for that goes back to key uniqueness and the fact that, even with the additional tag queries added, A might still choose an incorrect key which agrees with the actual key on all of the queries. But by once again leveraging the key-uniqueness lemma, we can still show that the success probability of A is high, so this has very little impact on the actual bound we achieve, especially because Q(n) can be an arbitrarily large polynomial.
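As a rough tally of the hybrid argument (illustrative only; the constants and exact form are not the paper's), the A-to-B' gap behaves like the O(M(n)²/L(n)) rewinding bound, while the B'-to-B gap contributes one key-collision term per instance of A that R runs, a term we can suppress by taking Q(n) large relative to L(n).

```python
def emulation_failure_bound(M: int, L: int, key_collision_prob: float) -> float:
    # First term: A-to-B' gap from the rewinding lemma, roughly M^2 / L.
    # Second term: B'-to-B gap, one possible key collision per instance of A.
    return M * M / L + M * key_collision_prob

# With Q(n) large, the collision term is so small that removing it entirely
# barely moves the bound; the rewinding term dominates:
loose = emulation_failure_bound(10, 10**6, 2.0**-40)
tight = emulation_failure_bound(10, 10**6, 0.0)
assert abs(loose - tight) < 2.0**-36
```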
In the end, we still rule out all linear-preserving reductions, obtaining a bound virtually identical to that of our earlier result for unique signatures, though we note that whether either of these bounds is tight remains an interesting open question. So, to summarize our result: we ruled out all linear-preserving reductions from the adaptive multi-user security of a deterministic and stateless MAC to any of a wide variety of underlying assumptions. This result, as can be seen in the full version of our paper, also easily extends to reductions from adaptively multi-user secure digital signatures and PRFs, as well as to reductions to any assumption that has a stateless challenger, even if there is no a priori bound on the number of rounds of communication. For instance, single-user security of PRFs would be such an assumption. In addition, we used a key-uniqueness lemma and a hybrid meta-reduction to deal with the fact that, unlike in all prior meta-reductions, our setting doesn't have a unique correspondence between R's responses to queries, in this case the keys, and the forgeries that our meta-reduction outputs. This is of particular interest because, even though the MACs we consider are, being deterministic and stateless, technically unique primitives, it suggests that one might in the future hope to use this or similar techniques to apply the meta-reduction paradigm even to primitives that aren't unique, which so far hasn't been done even in a restricted setting. With that, I'll conclude my presentation. Thank you for listening.