Hi, I'm Benedikt, and in this video I want to give you an overview of our work on the scaling behavior of public-key encryption. This is joint work with Federico Giacon and Eike Kiltz. Let's begin with a rough overview. I will first give an introduction to multi-instance security, talk about why I think it is an interesting topic to think about, and discuss in greater detail how we model it. We will then move to our results on the scaling behavior of hashed-ElGamal key encapsulation, and finally discuss the technical results at the core of this work, namely new generic-group lower bounds on the hardness of solving multiple instances of certain CDH-type problems. So let's start with multi-instance security. Typically in cryptography, when we model the security of a scheme, we require that it should not be possible for an adversary to break even a single instance, or to compromise a single user, of that scheme. We sometimes also look at a slightly more general definition, multi-user security: in this case it should not be possible to compromise even a single user out of multiple possible targets. In this work we look at a different definition, multi-instance security. Here an attack is only considered successful if the adversary is able to compromise all out of n users of a scheme. What we are particularly interested in is the question of how much harder performing such a multi-instance attack is than compromising a single user. We can visualize this as follows. On the x-axis of this graph we have the number of compromised users, and on the y-axis the computational effort, or running time, required to attack that number of users. Now let's assume that compromising one user requires effort t. In the best case, compromising one user does not help at all with compromising the next user.
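The two extreme cases from the graph can be written as a tiny cost model. This is only an illustration with placeholder numbers of my choosing, not anything from the paper:

```python
# Toy cost model for the two extreme scaling behaviors: total effort to
# compromise k users when compromising a single user costs t.

def effort_linear(k, t):
    # Best case: breaking one instance does not help with the next,
    # so k compromises cost k times the single-instance effort.
    return k * t

def effort_constant(k, t):
    # Worst case: the first break yields information that makes every
    # further break essentially free, so total effort stays at t.
    return t
```

The actual scaling behavior of a concrete scheme lies somewhere between these two curves.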
So in this case, compromising n users takes n times the effort, and we have this nice linear scaling behavior. However, it could also be that by breaking one instance of the scheme we obtain information which enables us to break additional instances with basically no effort at all. In this case we would have constant scaling behavior. What we are interested in in our work is the question: what is the actual scaling behavior of a scheme? At first glance this might sound a bit surprising, because in theory we assume that the parameters of our scheme are chosen such that even breaking a single instance is infeasible, and then in particular it is infeasible to break several instances. Unfortunately, in practice it is quite common that schemes are used with outdated parameters. In that case it might actually be possible for adversaries with nation-state capabilities to compromise single instances of the scheme, and then the scaling behavior of the scheme might make the difference between single users being compromised and full-blown mass surveillance. This is not only a theoretical concern, but something that has been exploited in the well-known Logjam attack by Adrian et al. This is an attack on TLS implementations with outdated parameters, concretely subgroups of finite fields with 512-bit primes. Here the authors were able to perform a massive precomputation which depended only on the group used, and afterwards attacking particular instances of the scheme could be done with comparatively little effort. Concretely, breaking one million instances of the scheme took only twice the effort of breaking a single instance. So in the picture from before, the Logjam attack is very close to the worst-case scenario of a constant scaling factor.
So in this work we aim to make this phenomenon of scaling behavior measurable from a theoretical perspective. To this end we adapt multi-instance security to the schemes we consider, key encapsulation mechanisms, and define the scaling factor of a scheme, which measures this scaling behavior. We then turn to a concrete scheme, hashed-ElGamal key encapsulation, and consider it for different parameter settings, which turn out to have an influence on the scaling behavior, and we are able to compute the scaling factor in some idealized models, concretely the random-oracle and generic-group models. I will now talk in more detail about the first point: how do we model multi-instance security and define the scaling factor? To this end, here is a short reminder on the schemes we look at, key encapsulation mechanisms. A KEM consists of four algorithms. First, the parameter-generation algorithm, which sets up global parameters of the scheme; these parameters are supposed to be used by all users of the scheme. Individual users then use the key-generation algorithm to set up key pairs consisting of a public and a secret key, and use the encapsulation algorithm, which on input of the parameters and a public key outputs a pair consisting of the so-called encapsulated key K and a ciphertext, which is an encryption of this encapsulated key. The intuition is that we want to use this key K as the encryption and decryption key of a symmetric encryption scheme. Finally, we have a decapsulation algorithm which can be used to recover encapsulated keys from ciphertexts when given access to the secret key. I will now explain how we define multi-instance security, and to this end I will start with a brief reminder of the typical single-instance security game for key encapsulation mechanisms.
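To make the four-algorithm interface concrete, here is a minimal toy sketch of hashed-ElGamal key encapsulation. The group (the multiplicative group modulo a small Mersenne prime), the generator, and SHA-256 as the hash are placeholder choices of mine for illustration; they are not secure parameters and not the paper's instantiation:

```python
import hashlib
import secrets

# Toy parameters (placeholders, NOT secure): multiplicative group modulo
# the Mersenne prime 2^127 - 1, with generator candidate g = 3.
P = 2**127 - 1
G = 3

def param_gen():
    # Global parameters shared by all users of the scheme.
    return (P, G)

def key_gen(params):
    p, g = params
    sk = secrets.randbelow(p - 2) + 1   # secret exponent x
    pk = pow(g, sk, p)                  # public key g^x
    return pk, sk

def encaps(params, pk):
    p, g = params
    r = secrets.randbelow(p - 2) + 1    # ephemeral exponent
    c = pow(g, r, p)                    # ciphertext g^r
    # Encapsulated key K = H(pk^r) = H(g^(x*r))
    k = hashlib.sha256(str(pow(pk, r, p)).encode()).digest()
    return k, c

def decaps(params, sk, c):
    p, _ = params
    # Recover K = H(c^x) = H(g^(r*x)) using the secret key.
    return hashlib.sha256(str(pow(c, sk, p)).encode()).digest()
```

A correctness check: encapsulating under a public key and decapsulating the resulting ciphertext with the matching secret key yields the same key K.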
So the intuition behind this game is that the encapsulated keys K should look random to adversaries which only have access to the ciphertext and the public key, but not the secret key. This is captured by the following game. In the game, we set up a challenge bit b, generate a set of parameters as well as a key pair consisting of a public and a secret key, and then set up a challenge for the adversary: we run the encapsulation algorithm to obtain a pair consisting of a ciphertext and the corresponding encapsulated key, and if the challenge bit b is 0 we replace this key with a uniformly random one. The adversary gets as input the parameters, the public key, the encapsulated key, and the ciphertext, and is supposed to figure out whether the challenge bit was 0 or 1. Additionally, it has access to a decapsulation oracle which, on input of a ciphertext, returns its decapsulation with respect to the generated secret key; of course, we require that the adversary must not use this oracle on the challenge ciphertext. The advantage of the adversary is simply how much better it does than guessing. So how do we modify this game to capture multi-instance security, where, as a reminder, the adversary only wins if it is able to break all out of n instances of the scheme? I marked the changes in green. As you can see, we no longer generate a single challenge bit, but instead a vector of n challenge bits. Again, only one set of global parameters is set up, and now we essentially generate one challenge per user with respect to challenge bit b_i: for each of the n users we generate a pair consisting of a secret and a public key, and then set up a challenge consisting of a ciphertext and an encapsulated key, where the encapsulated key is replaced with something uniformly random if challenge bit b_i is 0.
So the adversary gets as input the parameters and the vectors of public keys, encapsulated keys, and ciphertexts, and again has access to a decapsulation oracle, which it can now use for every one of the n users; again we require that it does not use the decapsulation oracle of user i on the i-th challenge. Its output is again a single bit, and the adversary wins if it correctly guesses the exclusive-or of all n challenge bits. This multi-instance security notion is an adaptation of the XOR notion introduced by Bellare, Ristenpart, and Tessaro, and the intuition behind it is that as long as one of the challenge bits looks uniformly random to the adversary, so does their exclusive-or. This means that in order to win this game with noticeable advantage, the adversary actually has to break all n instances of the scheme. You might be surprised that we do not use a different security notion in which the adversary simply has to compute all n challenge bits; this was already considered in the paper by Bellare, Ristenpart, and Tessaro, and it turns out not to capture multi-instance security, because there exist generic attacks which achieve a high advantage in such games without actually breaking all n instances of the scheme. So this is multi-instance security as we define it in the paper; in fact, we consider a slightly more general setting there, in which the adversary has to break m out of n instances of the scheme.
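The intuition that the XOR of the challenge bits stays hidden as long as one bit is unknown can be checked in a couple of lines. The example bit values below are mine; the point is that over the two possibilities for a single unknown bit, the XOR of all bits takes both values, so it is uniform whenever that bit is:

```python
from functools import reduce
from operator import xor

# Bits the adversary managed to learn (example values); one bit remains
# uniformly random and unknown.
known = [1, 0, 1, 1]

# Over both possibilities for the unknown bit b0, the XOR of all bits
# takes both values 0 and 1, so guessing it gives advantage 0.
outcomes = {reduce(xor, [b0] + known) for b0 in (0, 1)}
```

So breaking all but one instance leaves the adversary with a fair coin flip, which is exactly why the XOR notion forces a full break.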
So as I said before, in this work we are interested in measuring the scaling behavior of a scheme, that is, in the question of how much harder it is to break n instances of the scheme compared to breaking one instance. To this end we define the scaling factor, which is the running time of the fastest adversary breaking n instances of the scheme divided by the running time of the fastest adversary breaking one instance. For this talk I will require these adversaries to succeed with probability one, but in the paper we give a generalized version of this definition. Using the security definition from before, we are able to confirm the intuition from the illustration at the beginning of this video: the scaling factor does indeed lie between 1 and n. We then try to answer the question of whether we can determine the scaling factor for concrete schemes, and we consider the hashed-ElGamal scheme. I am now going to talk about our results on the scaling behavior of hashed-ElGamal key encapsulation in more detail. We rely on some idealized models, which, as we will see in a minute, is unfortunately necessary. Concretely, we use the random-oracle model, which is standard for security proofs of hashed-ElGamal key encapsulation, and additionally we model the groups used as generic groups, which is assumed to be meaningful for elliptic-curve groups. As you can see, we consider the scheme for three different parameter settings, which we also call granularities; they essentially differ in how much information about the group used is shared between users of the scheme. The first setting we call high granularity, and it corresponds to how hashed ElGamal is typically used in practice: all users rely on one standardized group with a fixed group generator. In this
case, the users' key pairs consist of a group element, and the corresponding secret key is the discrete logarithm with respect to the fixed group generator. As you can see, in this case the scheme does not scale optimally, but also not horribly: it has a scaling factor of the square root of n, which means that in order to break n instances of the scheme you have to put in square root of n times the effort. The second parameter setting we look at is medium granularity. In this case the users still rely on one group, but now every user uses their own group generator, which is part of the secret and public keys. As you can see, switching to this less efficient medium-granularity version does not help in improving the scaling factor, which is again the square root of n. Finally, we also look at the low-granularity setting. In this case there are no shared parameters at all; instead, every user uses their own group, which we model as independent generic groups. As you can see, in this case the scheme actually scales optimally, with a scaling factor of n. However, I should remark that this does not seem to be too practical, as now every user has to generate their own group as part of key generation, which requires quite some effort and also introduces new attack possibilities. So how do we come up with these results? We try to compute the scaling factor of hashed ElGamal, that is, the time required to break n instances relative to the effort required to break one instance. Let's first look at the upper bound, where there is actually not too much to do: there are known generic algorithms, for example variants of the baby-step giant-step algorithm, which are able to break several instances of the scheme, and these allow us to bound the time required to break n instances from above. Also, there are known generic-group lower bounds on breaking one instance of hashed ElGamal, which together give us the desired result. Probably more interesting is the lower
bound. In this case we can bound the time required to break a single instance of the scheme from above by simply using known generic algorithms, for example baby-step giant-step. The most technical contribution of this work is probably the new generic-group lower bounds on the hardness of breaking several instances of hashed ElGamal in the security definition you have just seen, and I will now give you a very high-level overview of how we achieve this. In our first step, we show that breaking n instances of hashed ElGamal in the random-oracle model is at least as hard as breaking n instances of the gap CDH (gap computational Diffie-Hellman) problem; this is a fairly easy adaptation of the standard single-instance random-oracle-model proof for hashed ElGamal. In a second step, we use the algebraic group model by Fuchsbauer, Kiltz, and Loss to show that breaking n instances of the gap CDH problem is at least as hard as breaking n instances of the gap discrete logarithm problem; more precisely, we show that every generic-group lower bound for the gap discrete logarithm problem carries over to the gap CDH problem. Finally, in the last step, we derive new generic-group lower bounds on the hardness of computing n instances of the gap discrete logarithm problem. As you have just seen, at the core of our results for hashed ElGamal are new generic-group lower bounds on the hardness of solving several instances of CDH-type problems, and I want to spend the rest of this video talking about those results in greater detail. Let's begin with definitions of these problems. The first one is the multi-instance discrete logarithm problem, which is pretty much what it sounds like: we set up n group elements uniformly at random, and the adversary, on input of those group elements, has to recover all their discrete logarithms. What I have here on the slides is the high-granularity version, where all of the challenges
are defined with respect to a single group and group generator, but this can easily be adapted to medium and low granularity as well. The second problem is the multi-instance gap discrete logarithm problem, where the adversary again gets n DLOG challenges and has to solve them all; the difference is that it additionally has access to a DDH (decisional Diffie-Hellman) oracle: on input of three group elements X, Y, and Z, this oracle answers 1 if the three group elements form a Diffie-Hellman tuple with respect to the group generator, and 0 otherwise. Finally, the third problem of interest for this work is the multi-instance gap CDH problem. Again the adversary has access to a DDH oracle, but now it gets as input n CDH challenges, that is, group elements g^(x_i) and g^(y_i), and it has to solve them all, meaning it has to compute all g^(x_i * y_i). I will now give an overview of our bounds for these problems. Some generic-group bounds were already known for these problems, specifically for the multi-instance discrete logarithm problem. Concretely, there is a work by Yun which gives a bound of square root of n times p steps in order to solve n instances of DLOG in the high-granularity case, as well as a work by Garay et al. which proves the same bound in the low-granularity setting. However, unfortunately, we do not know how to prove the hashed-ElGamal scheme we are interested in secure based only on the DLOG problem. So in this work we prove several new generic-group lower bounds. First, we show that in the high- and medium-granularity cases the bound carries over to the multi-instance gap discrete logarithm problem as well as the multi-instance gap CDH problem. Furthermore, we are able to improve the known bound for the multi-instance DLOG problem in the low-granularity setting by showing that both gap DLOG and gap CDH, in order to break n instances, require
in the generic group model n times the square root of p steps. I should mention that all of these bounds are optimal, as there exist corresponding generic algorithms. I now want to spend the rest of the talk giving a very rough intuition of how we prove these bounds. Actually, I forgot something: we also consider a generalization of the DLOG problem which we call the polycheck DLOG problem of degree d. Here the adversary has access to a more general decisional oracle: you can view the DDH oracle in the gap problems as an oracle which evaluates a certain equation of degree two in the exponent, and in the polycheck DLOG problem we give the adversary access to an oracle which it can use to evaluate arbitrary polynomial equations of degree up to d in the exponent. It turns out that in this case the bound decays with the square root of d. So how do we derive these lower bounds? I will first give some intuition for the gap discrete logarithm problem in the high-granularity setting. We take an approach similar to the work by Yun, that is, we reduce the gap DLOG problem in the generic group model to a geometric search problem, the so-called search-by-hypersurface problem of degree two. In this problem we have the space Z_p^n, where the dimension n of the space corresponds to the number of instances; in the example I drew here it is two. The goal of the adversary is to find a point x which has been sampled uniformly at random from this space. To do so, it can ask so-called hypersurface queries: it can specify hypersurfaces of degree up to two, and as a response it learns whether the point lies on the hypersurface (the answer 1, marked in green) or not (the answer 0). So in this example the adversary might first ask for this circle; x does not lie on the circle, so the answer would be no. Then maybe it asks for this ellipsoid, on which x actually lies, so the adversary would now
know that x is one of those four points. Okay, so this gives you some intuition for the problem, and it turns out that a reduction playing the search-by-hypersurface game can actually perfectly simulate the multi-instance gap discrete logarithm problem in the generic group model. This means that in order to derive a generic-group lower bound for gap DLOG, it is enough to find an information-theoretic bound for the search-by-hypersurface problem. Overall this is quite similar to Yun's approach; however, due to the higher degree, we have to work with commutative algebra instead of linear algebra, which adds some technical challenges. The bounds for low and medium granularity are then derived from the high-granularity result; concretely, for the medium-granularity case we derive it from the n-instance bound, and for low granularity from the one-instance bound. Finally, how do we carry this bound over to the gap computational Diffie-Hellman problem? Again, I will first discuss the high-granularity case, where we have a single group and a single generator. Here we rely on the algebraic group model by Fuchsbauer, Kiltz, and Loss, and we give a reduction showing that any generic solver of n instances of gap CDH can be transformed into a generic multi-instance gap DLOG solver, which means that, again, gap CDH is at least as hard as gap DLOG. On a very high level, the algebraic group model tells us that, without loss of generality, we may assume that our reduction has access to certain information, which we are able to exploit in order to extract DLOG solutions from solutions to the CDH problem. Then, similarly to the DLOG case, the bounds for low and medium granularity are derived from the high-granularity bound. To conclude this video, I will now give a short summary. In this work we define the scaling factor of a scheme, which measures the scaling of the scheme's security, that is, how much harder it is to
break n instances of the scheme compared to breaking one. We then compute the scaling factor for variants of the hashed-ElGamal key encapsulation mechanism in the generic group model, and to this end we prove several new generic-group lower bounds on the hardness of solving several instances of different CDH-type problems. Let me mention some interesting future directions. In this work we look at KEMs, but it might also be interesting to see whether, in this multi-instance setting, it is possible to come up with more complicated reductions; for example, consider the KEM-DEM paradigm used to construct public-key encryption from a KEM and a DEM. As a second point, it might be interesting to see whether we can also obtain results for adversaries which perform preprocessing, as there have been some recent results on generic-group lower bounds with preprocessing. So, thank you for your attention, and goodbye.
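As a footnote to the generic algorithms mentioned in the talk: baby-step giant-step solves a single discrete-logarithm instance in roughly square-root-of-the-group-order many group operations, which is where the square-root terms in the bounds come from. Here is a minimal sketch over a toy group; the prime, generator, and function interface are my choices for illustration, and the multi-instance variants referred to in the talk amortize such steps across instances:

```python
from math import isqrt

def bsgs(g, h, p, order):
    """Return x with g^x = h (mod p) and 0 <= x < order, or None.

    Uses ~sqrt(order) multiplications: m baby steps plus up to m giant steps.
    """
    m = isqrt(order) + 1
    # Baby steps: table of g^j for j = 0 .. m-1.
    baby = {pow(g, j, p): j for j in range(m)}
    # Giant step: multiply repeatedly by g^(-m).
    step = pow(g, -m, p)
    gamma = h % p
    for i in range(m + 1):
        if gamma in baby:                  # h * g^(-i*m) == g^j
            return (i * m + baby[gamma]) % order
        gamma = gamma * step % p
    return None

# Toy example: 2 generates the multiplicative group mod 101 (order 100).
print(bsgs(2, pow(2, 57, 101), 101, 100))  # prints 57
```

Breaking one hashed-ElGamal instance generically costs on the order of sqrt(p) such steps, and the paper's lower bounds show how this cost must grow with the number of instances.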