Hi, my name is Karen Klein and this talk is on the cost of adaptivity in security games on graphs. It is joint work with Chethan Kamath, Krzysztof Pietrzak and Michael Walter. In cryptography we often define the security of a scheme by a game between an adversary and a challenger, where the adversary observes the whole system, might control parts of the network, and might even request some private information. For example, in the setting of identity-based encryption, the adversary would learn the master public key, which can be used to derive the public key of each identity. It can then corrupt certain parties to learn their secret keys. As a challenge, it chooses a challenge identity i* and two messages M0 and M1, and in response receives an encryption of one of these two messages under the public key associated with this challenge identity. Clearly, this challenge identity must not be corrupted during the entire game, and the adversary's goal is to distinguish the case where M0 is encrypted from the case where M1 is encrypted. To prove security of a scheme, we then relate it to some hard problem. Such a hard problem is very often the security of some simpler cryptographic primitive for which we have already established security. To formalize the intuition that any adversary who could break the scheme could actually break this hard problem, we define a reduction that interacts with this adversary and tries to extract from this interaction information that helps it break the problem pi. We then prove statements of the following form: if the adversary breaks sigma with advantage epsilon, then the reduction breaks pi with advantage epsilon divided by the loss. Clearly, the smaller this loss factor is, the tighter the reduction and the stronger the security guarantees we obtain for our scheme. In this talk, we only consider fully black-box reductions, namely reductions that only have oracle access to the adversary.
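Written out in standard notation, the statements we prove have the following shape (a sketch, with Sigma the scheme, Pi the hard problem, A the adversary and R the reduction):

```latex
\mathrm{Adv}^{\Sigma}_{\mathcal{A}} \;\ge\; \varepsilon
\quad\Longrightarrow\quad
\mathrm{Adv}^{\Pi}_{\mathcal{R}^{\mathcal{A}}} \;\ge\; \frac{\varepsilon}{\mathrm{loss}}
```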
So they do not see any internal workings of the adversary, but only its output behavior; furthermore, the adversary might even be inefficient. In many cases, it is significantly simpler to construct such a reduction if the adversary's choices were known ahead of time, at the very beginning of the game. This we call the selective setting. However, in most applications we require the stronger notion of adaptive security, where all the adversary's choices might depend on what it learned during the game so far. In this talk, we initiate the study of lower bounds on the security loss against such adaptive adversaries. We consider certain multi-round games that capture several interesting existing constructions. In all these games, the adversary's queries form the edges of some graph. The first such game is called generalized selective decryption (GSD) and was introduced in the setting of multicast encryption protocols. Here nodes represent secret keys and edges represent encryptions of keys under other keys. While this game makes sense for arbitrary graph structures, in most applications the adversary would actually be restricted to query either paths or binary trees. The next game we consider is the security of TreeKEM, a widely discussed protocol suggested by the MLS working group for continuous group key agreement. Here nodes represent public keys, sources represent users, sinks represent group keys, and the graph structure is a tree. The edges are induced by encryptions of secret keys under the public keys. This slightly simplifies the scheme, but we do not want to go into details here. The third game we consider is the security of the GGM PRF as a prefix-constrained pseudorandom function. Here nodes represent seeds, the graph structure is an exponential-sized binary tree, and edges represent PRG evaluations. The last game we consider is the security of proxy re-encryption schemes.
Here nodes represent public-key pairs, and edges represent re-encryption keys that allow an untrusted proxy to re-encrypt a ciphertext under the source public key into a ciphertext under the sink public key, preserving the underlying message but without learning anything about it. Let me give you an overview of our results. As mentioned before, in the case of GSD and PRE it makes sense to consider different graph structures, and for those graph structures mostly used in applications there are actually significantly better upper bounds. We establish lower bounds that almost match all the best upper bounds. However, in some cases we have to make quite strong restrictions on the class of reductions; namely, in some settings we restrict the reduction to behave obliviously to the adversary's behavior. By this we mean not only that the reduction is non-rewinding, but furthermore that it does not make use of the partial graph structure that it learns during the game. Our main conceptual idea is to introduce a combinatorial game, which we call the Builder-Pebbler game: a two-player multi-stage game that abstracts out all the combinatorics behind our lower bounds. We then translate upper bounds on the Pebbler's success probability in this game into lower bounds on the security loss, and for this we use oracle separation techniques. In this talk, I will only establish one combinatorial upper bound, namely a bound for Builders restricted to trees, and then show you how to apply this combinatorial upper bound to obtain lower bounds for GSD on trees, for TreeKEM, and for the GGM prefix-constrained pseudorandom function. Let me explain how GSD works. This is a multi-user game introduced by Panjwani in the setting of multicast encryption protocols. It is in the secret-key setting, so each user possesses a secret key, and we represent each user as a node in a graph.
The adversary can then corrupt users to learn their secret keys, and furthermore it can query edges between these nodes; in response it receives an encryption of the sink key under the source key. So knowledge of the source key allows one to decrypt the ciphertext and learn the sink key. The adversary can make more and more such queries in a fully adaptive way, and at some point it makes a challenge query; in response it receives either the real key associated to that node or a random independent key. Clearly this challenge should not be trivial, so the challenge node must be a sink in the graph and furthermore must not be reachable from any corrupted node. So in the end we see a graph structure on the set of nodes, which we call the encryption graph, and in many cases we are interested in a subgraph of this encryption graph, which we call the challenge graph, induced on the ancestors of the challenge node. Clearly, the goal would be to prove adaptive GSD security based on the IND-CPA security of the underlying secret-key encryption scheme. Now, the intuition for our lower bound is that a reduction can only make use of a successful adversary if it embeds an IND-CPA challenge at some edge. However, it has the freedom, for each edge except this challenge edge, to choose whether to answer this edge correctly or incorrectly; the latter we call a fake edge. Furthermore, we observe that the reduction cannot create an encryption of the IND-CPA challenge key. This implies the following rule: all edges incident on the node where the reduction embedded the IND-CPA challenge key must be fake. Keeping this intuition in mind, we construct our adversary as follows. First, it corrupts all nodes outside the challenge graph and just outputs 1 if it detects any mistakes in this part. This is to force the reduction to embed the challenge key within the challenge graph.
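To make the non-triviality condition concrete, here is a minimal sketch (my own illustration with hypothetical names, not code from the paper) of the check that a GSD challenge is admissible: the challenge node must be a sink of the encryption graph and must not be reachable from any corrupted node.

```python
def valid_challenge(edges, corrupted, challenge):
    """Check GSD challenge admissibility.

    edges: list of (source, sink) pairs of the encryption graph,
           where an edge (u, v) is an encryption of key v under key u.
    corrupted: set of nodes whose secret keys the adversary learned.
    challenge: the node queried as the challenge.
    """
    # The challenge must be a sink: no edge may leave it.
    if any(u == challenge for u, _ in edges):
        return False
    # The challenge must not be reachable from a corrupted node,
    # since a path of decryptions would reveal its key trivially.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    stack, seen = list(corrupted), set(corrupted)
    while stack:
        u = stack.pop()
        if u == challenge:
            return False
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return True
```

For example, on the path 1 -> 2 -> 3, corrupting node 1 rules out node 3 as a challenge, and node 2 is never admissible because it is not a sink.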
Then the adversary uses its unrestricted computational power. Recall that we are interested in fully black-box reductions, so we may even construct inefficient adversaries to prove our lower bounds. So this adversary uses its computational power to simply break all these encryptions and check for each edge whether it is a real edge or a fake edge. Whenever an edge is inconsistent, it considers this edge as being pebbled. So the adversary sees some pebbling configuration on the graph, and it decides to output 0 or 1 depending on some good predicate, which we define next. Consider the following reversible edge-pebbling game. This game works in several rounds, and in each round one can place or remove a pebble on an edge if and only if all the edges incoming to the source of this edge are already pebbled. These rules capture exactly the potential ways a reduction could behave. We then consider the configuration graph with respect to this pebbling game. This configuration graph has as its set of nodes all possible pebbling configurations on the challenge graph, and there is an edge between two configurations if one can be obtained from the other by one valid reversible edge-pebbling move. Since we consider reversible edge pebbling, we end up with an undirected graph. We then define the set of good pebbling configurations by a cut of this configuration graph. This cut should separate two special configurations: the empty configuration, which represents the GSD game where the adversary receives the real key, and the configuration where all edges incident on the challenge node are pebbled but all other edges are real, which represents the GSD game where the adversary receives a random independent key as the challenge. For our lower bounds we are then interested in the cut set, which consists of those pebbling configurations that lie at the border between good and bad.
And these are exactly the configurations where the reduction can embed a challenge and exploit the adversary's advantage. Let me now introduce our combinatorial game. The Builder-Pebbler game is a two-player game that proceeds in several stages. In the beginning, the Builder and the Pebbler both initialize some graph structure, and they also know some cut set X which defines the winning condition for the Pebbler. The Builder then queries edges in the graph, and for each edge the Pebbler decides whether or not to place a pebble. The game ends with the Builder identifying a challenge node, which must be a sink in the graph, and the Pebbler wins if the pebbling configuration on the graph lies in the cut set. In many cases the Builder will be restricted: it cannot query arbitrary acyclic graphs, but only graphs that lie in a certain family. We are now interested in constructing specific Builder strategies and defining specific cut sets so that the success probability of the Pebbler is very low. Consider the following Builder strategy for trees. First, the Builder makes q^2 + q many queries, forming a tree with out-degree q. Then it chooses uniformly at random one of these q many bunches of q leaves and extends those leaves with out-degree q again. Then it again chooses one of the q many bunches of q leaves of the last step and extends these, and so on. In the end, the Builder chooses uniformly at random one of the q^2 leaves of the last step as the challenge. Here in blue we mark the challenge graph, which is a path. Now, to define an appropriate cut, we use the following lower bound for reversible edge pebbling on a path: to place a pebble on the last edge of a path of length n, one requires log n + 1 many pebbles. Thus we define our cut as the set of pebbling configurations that are reachable with log n many pebbles.
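The path lower bound just mentioned can be checked by brute force for small cases. The following sketch (my own illustration, with hypothetical names) searches the configuration graph of the reversible edge-pebbling game on a path and reports the smallest pebble budget that ever suffices to get a pebble onto the last edge.

```python
from itertools import count

def can_toggle(config, e):
    # Reversible rule on a path with edges 1..n: a pebble may be placed
    # on or removed from edge e iff the edge entering its source node,
    # i.e. edge e-1, carries a pebble; edge 1 is always free to toggle.
    return e == 1 or (e - 1) in config

def min_pebbles(n):
    # Smallest budget b such that, never holding more than b pebbles at
    # once, one can reach a configuration pebbling the last of n edges.
    for budget in count(1):
        seen = {frozenset()}
        stack = [frozenset()]
        while stack:
            cfg = stack.pop()
            if n in cfg:           # last edge pebbled: budget suffices
                return budget
            for e in range(1, n + 1):
                if can_toggle(cfg, e):
                    nxt = cfg ^ {e}        # place or remove the pebble
                    if len(nxt) <= budget and nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
```

For small n this recovers the stated behavior: a path of length n needs about log n + 1 pebbles (exactly ceil(log2(n + 1)) in this toy search), e.g. 3 pebbles for n = 4 and 4 pebbles for n = 8.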
Thus the goal of the Pebbler is to place exactly log n many pebbles on the challenge path, but no pebbles on edges outgoing from any nodes outside the path. The latter is required because our adversary will corrupt all nodes outside the challenge graph. So let's play this game between the Builder and the Pebbler. The Builder makes it even a bit easier for the Pebbler and sends all these q^2 + q many queries at once. Now, the Pebbler knows that the root of this tree will end up in the challenge graph, so it is safe for the Pebbler to place pebbles on all edges outgoing from the root node. However, if it also wants to place a pebble on the second edge of the challenge graph, it has to guess, and only with probability 1/q will it correctly guess which node ends up in the challenge graph. Then the Builder will again extend this graph and make q^2 many queries; maybe this time the Pebbler is not interested in placing a pebble, but maybe it wants to place a pebble on the fourth edge, and when the Builder makes these q^2 many queries of edges at depth 4, the Pebbler has to make a guess and only with probability 1/q will it guess correctly. So at the end of the game, the probability that the Pebbler managed to place exactly log n many pebbles on the graph, and no pebbles on edges outgoing from nodes outside the challenge path, can be upper bounded by (1/q)^(log n - 1). If we set the parameter q and the depth n to about the same size, then we get an upper bound of roughly n^(-log n). We now want to use this combinatorial upper bound to derive our cryptographic lower bounds.
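As a sanity check on this counting argument, here is a tiny sketch (illustrative, my own naming) of the resulting bound: the Pebbler gets the pebbles at the root for free, but must make one independent 1-in-q guess for each of the remaining log n - 1 pebbles.

```python
import math

def pebbler_success_bound(q, n):
    # Upper bound on the Pebbler's success probability against the
    # tree Builder: one independent guess among q bunches of leaves
    # for each of the log2(n) - 1 pebbles beyond the root edges.
    return (1.0 / q) ** (math.log2(n) - 1)
```

For instance, with q = n = 16 the bound is (1/16)^3, and setting q comparable to n in general gives the quasi-polynomially small bound of roughly n^(-log n).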
For this, we have to construct an ideal secret-key encryption scheme, and furthermore we have to construct an adversary for GSD that simulates the above Builder strategy. We then prove that for any straight-line reduction there exists a Pebbler against the Builder B such that any upper bound on the reduction's security loss can be directly translated into a lower bound on the Pebbler's advantage. We then use our upper bound on the Pebbler's advantage against this Builder to establish our lower bound for GSD on trees: any straight-line reduction proving security of unrestricted adaptive GSD based on the IND-CPA security of the underlying secret-key encryption scheme loses at least a super-polynomial factor n^(log n) in the number of users n. Having established our lower bound for GSD restricted to trees, I will now give you a very brief overview of our lower bound for TreeKEM. The continuous group key agreement protocol TreeKEM is based on a binary tree: each node is associated with a public-key pair, the root is associated with the group key, and each leaf is associated with a user. The edges then represent encryptions of secret keys under the source public key. Again, I want to mention that this simplifies the scheme, but it is enough for this exposition. To allow for forward secrecy and post-compromise security, TreeKEM has an update mechanism that allows a user to refresh its state and heal from corruption. Such an update works as follows: if Alice wants to update, she chooses fresh keys along the path from her leaf to the root and then encrypts these keys to the co-path nodes. This allows every user in the group to process her update and learn the new group key. At the same time, we see that this allows one to produce subgraphs in the TreeKEM graph that have high out-degree.
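To illustrate the update mechanism, here is a small sketch (my own naming and a heap-style node numbering; the real protocol is more involved) of which keys Alice refreshes and to which co-path nodes she encrypts them in a complete binary tree.

```python
def update_path(leaf, depth):
    """Nodes touched by a TreeKEM-style update in a complete binary
    tree of the given depth, with nodes numbered in heap order
    (root = 1, children of v are 2v and 2v + 1).

    Returns (path, copath): `path` lists the nodes whose keys the
    updating user replaces, from her leaf up to the root (group key);
    `copath` lists the siblings to which each fresh key is encrypted.
    """
    node = 2 ** depth + leaf      # heap index of the updating leaf
    path, copath = [], []
    while node > 1:
        path.append(node)
        copath.append(node ^ 1)   # the sibling at this level
        node //= 2                # move up to the parent
    path.append(1)                # root: the new group key
    return path, copath
```

For a tree of depth 2, the leftmost user (leaf 0) refreshes nodes 4, 2, 1 and encrypts to the co-path nodes 5 and 3, which is exactly what lets every other group member process the update.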
Thus our idea is to construct an adversary that embeds the Builder's tree structure within the TreeKEM tree. Note, however, that the TreeKEM tree depth is bounded by log m, where m is the group size, which implies that we can only prove a lower bound of m^(log log m). For our adversarial strategy it is crucial that the delivery server is not trusted; namely, the adversary can force parties into inconsistent states by sending them different update messages. Finally, let me give you a brief overview of our lower bound for the GGM PRF as a prefix-constrained pseudorandom function. The GGM PRF is also based on a binary tree, where the root is associated with the secret key. The edges now represent a length-doubling PRG, and the evaluation of the function on the string 010, for example, is marked in green: it is the key associated with the respective node. This is not only a pseudorandom function, but can also be used as a prefix-constrained pseudorandom function, where the constrained keys are simply internal keys in the tree. For example, the constrained key for prefix 00 would be the key k_00, which allows one to evaluate the function on all strings with prefix 00. In the security game, the adversary can make constrained-key queries and evaluation queries, but clearly it must not query constrained keys for prefixes of the challenge query. The idea now is to construct an ideal PRG scheme and an inefficient adversary that again embeds our Builder tree with high out-degree within this exponential-sized binary tree. The intuition is that in the first round the adversary makes one evaluation query for each prefix of length 2k, for some parameter k. Then, in the next round, it chooses a string of length k uniformly at random, fixes it, and makes one evaluation query for each prefix of length 3k where the first k bits are fixed. It proceeds like this, and in this way embeds a tree structure within the exponential-sized binary tree.
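The GGM construction and its prefix-constrained keys are easy to sketch in code. Below is a minimal illustration (my own naming, with a SHA-256-based stand-in for the length-doubling PRG; any such PRG works, and it is of course not the ideal PRG used in the actual proof): a constrained key for a prefix evaluates consistently with the full key on any input extending that prefix.

```python
import hashlib

def prg(seed):
    # A length-doubling PRG sketched from SHA-256: a 32-byte seed in,
    # two 32-byte halves out (an illustrative stand-in only).
    return (hashlib.sha256(seed + b"L").digest(),
            hashlib.sha256(seed + b"R").digest())

def ggm_eval(key, bits):
    # Walk down the GGM binary tree: bit '0' takes the left PRG half,
    # bit '1' the right half; the result is the key at that node.
    for b in bits:
        key = prg(key)[int(b)]
    return key

def constrained_key(key, prefix):
    # The constrained key for a prefix is simply the internal key
    # sitting at that node of the tree.
    return ggm_eval(key, prefix)
```

For example, `ggm_eval(constrained_key(k, "00"), "10")` equals `ggm_eval(k, "0010")`, while the constrained key reveals nothing about evaluations outside its prefix.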
I do admit that there are quite some details on how to extract a pebbling configuration on this graph, but this is outside the scope of this talk. The lower bound we obtain is super-polynomial and exactly matches the best known upper bound, namely n^(log n), where n is the input size and the depth of the graph. For all other results, I want to refer to the full version of our paper, which you can find on eprint. With this, I want to conclude my talk, in which we initiated the study of lower bounds on the loss in security against adaptive adversaries for certain multi-round games that capture several existing constructions. Quite a few of our results only hold for restricted classes of reductions: some hold only for non-rewinding reductions, and some have an even stronger restriction, namely they only hold for oblivious reductions. So an exciting open problem is whether one could strengthen our lower bounds to capture even rewinding or non-oblivious reductions, or, in the other direction, whether one could use these techniques to overcome our lower bounds. Here I want to mention that all known upper bounds are actually achieved by oblivious reductions, so it is not clear how non-obliviousness or rewinding could be used to obtain better upper bounds. The only case where we know that no better upper bounds can be established is the case of proxy re-encryption on complete graphs, where our lower bounds hold for arbitrary fully black-box reductions. Another interesting question would of course be to come up with other multi-round games that are captured by the Builder-Pebbler game. Finally, a very intriguing open problem is whether one could use pebbling lower bounds to prove lower bounds on the loss in adaptive security in other settings, namely in constant-round games where the graph structure is revealed in one step, such as for ABE or garbling.
We do have one positive result here, which we presented at Crypto this year, but only for the specific construction of Yao's garbling scheme, and it turned out that we require very different techniques there. So these settings are not captured by the Builder-Pebbler game. With this I want to conclude, and thank you very much for listening to this talk.