Hi, I'm Lior and I will talk about the Decisional Algebraic Group Model, or Algebraic Distinguishers: From Discrete Logarithms to Decisional Uber Assumptions. This is joint work with Gil Segev. The starting point for this talk is the Generic Group Model. This is a very useful idealized model aimed at capturing computations in cyclic groups. Roughly speaking, the Generic Group Model captures algorithms which do not exploit the representation of group elements in any way. This is typically captured by withholding the elements' representation from algorithms and somehow restricting their access to the group to performing the abstract group operation and to checking equalities between pairs of group elements. For example, in Shoup's model, this is achieved by replacing the representation of group elements with random encodings and outsourcing the group operation to a dedicated oracle. Over the years, the Generic Group Model has proven to be a very useful one with several important benefits. It captures a natural and wide class of algorithms, and indeed many known algorithms fit within this model. Concretely, the best known algorithms for solving discrete logarithm-like problems in elliptic-curve groups are generic. More generally speaking, the Generic Group Model enables one to derive very neat information-theoretic lower bounds, and in some cases, this is also the only proof of security that we know of. On the other hand, the Generic Group Model is not without its limitations. First, it does not naturally establish any sort of hierarchy of hardness assumptions, since any assumption in this model either holds information theoretically or is false. Second, we do know of very important non-generic algorithms for very basic and fundamental tasks. And more generally, it seems that withholding the representation of group elements from the adversary is a somewhat unrealistic scenario. 
With these limitations of the Generic Group Model in mind, Fuchsbauer, Kiltz, and Loss put forth the Algebraic Group Model, or the AGM for short. This model lies between the Standard Model and the Generic Group Model. That is, it is more restrictive than the Standard Model, but it assumes far less than the Generic Group Model. The main idea behind the AGM is to distill one key feature of generic algorithms, which already allows for meaningful reductions between computational problems, while at the same time significantly weakening the other restrictions of the Generic Group Model. Concretely, unlike in the Generic Group Model, algebraic algorithms do receive the representation of group elements and may use it as they wish. The restriction is that whenever an algebraic algorithm outputs a group element, it must produce alongside it an algebraic explanation, so to speak, explaining how this group element was computed. In more detail, let G be a group of prime order p, and consider an algebraic algorithm A. A receives L group elements, X1 through XL, as input, and then it outputs a group element Y. Alongside its output, A also outputs a vector W of L integers in Zp. This vector W serves as an algebraic explanation for how A came up with the element Y. Concretely, W is the representation of Y in the basis of X1 through XL. That is, Y is equal to the product of each Xi raised to the power of Wi, where Wi is the i-th entry of W. It should be mentioned that the model of Fuchsbauer et al. is inspired by previous works which considered different variants of algebraic reductions, rather than algebraic adversaries. Okay, so let's see an example of how the AGM might be used. Consider the computational Diffie-Hellman problem, or CDH for short, in which the adversary is given a generator G for the group, along with two group elements, G to the X and G to the Y, for randomly chosen integers X and Y in Zp, and the adversary needs to compute G to the XY. 
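To make the explanation requirement concrete, here is a minimal sketch of checking an AGM-style algebraic explanation. It uses a toy group, the order-11 subgroup of Z_23^* generated by 2; all concrete values are illustrative and not from the talk.

```python
# Toy check of an AGM-style algebraic explanation, assuming a small
# prime-order group: the order-11 subgroup of Z_23^* generated by 2.
P, q, g = 23, 11, 2          # modulus, group order, generator

def gexp(base, e):
    """Exponentiation in the subgroup (exponents live in Z_q)."""
    return pow(base, e % q, P)

def group_prod(elems, w):
    """Compute prod_i elems[i]^w[i] mod P."""
    out = 1
    for x, wi in zip(elems, w):
        out = out * gexp(x, wi) % P
    return out

# An adversary that receives X1 = g^3, X2 = g^5 and outputs Y = g^13
# can explain Y via the vector W = (1, 2), since g^3 * (g^5)^2 = g^13.
X1, X2 = gexp(g, 3), gexp(g, 5)
Y = gexp(g, 13)
assert group_prod([X1, X2], [1, 2]) == Y   # Y = X1^1 * X2^2
```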
Fuchsbauer, Kiltz, and Loss showed that breaking CDH algebraically is equivalent to breaking the discrete logarithm assumption in the group. It is straightforward that breaking discrete log enables one to algebraically break CDH. As for the other direction, assume an algebraic adversary A, which receives G, G to the X, denoted by capital X, and G to the Y, denoted by capital Y. The adversary A outputs a group element Z, along with an explanation vector W. By the definition of the AGM, W must satisfy that Z is equal to G raised to the power of W1, times X raised to the power of W2, times Y raised to the power of W3. The idea is then to have the reduction, which is trying to break discrete log, plant the secret exponent of the discrete log instance instead of either of the exponents X and Y. I will not get into further details at this point, but we will see an extension of this idea later in this talk. Now let's see what happens if we try to apply the AGM in order to deduce something about the hardness of the decisional Diffie-Hellman problem. In this problem, the adversary receives as input a generator G, two group elements G to the X and G to the Y as before, and an additional group element Z. The adversary has to distinguish between the case in which Z is G to the XY, and the case in which Z is a randomly chosen group element. The output of the adversary is a bit B, indicating its decision regarding the distribution from which the input was drawn. Observe that in this problem, the distinguisher A outputs only a single bit and no group elements. Hence, in this case, the algebraic group model of Fuchsbauer et al. coincides with the standard model. Namely, if we want to reduce the hardness of the DDH problem to the hardness of a different problem in the group, then the model of Fuchsbauer et al. does not provide any additional power for doing so beyond that of the standard model. 
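As a hedged illustration of the planting idea (a toy calculation, not the talk's full proof), the sketch below works out the exponent equation a reduction could solve after planting the unknown discrete log s in place of x. The concrete numbers, including the explanation vector, are made up for the example; the group order q = 11 is a stand-in.

```python
# Toy sketch of the exponent equation behind the CDH -> DLOG idea.
q = 11                             # toy prime group order (illustrative)
s = 7                              # unknown discrete log the reduction wants
y = 4                              # exponent sampled by the reduction itself

# Suppose the algebraic CDH adversary returns Z = g^(s*y) together with
# an explanation (w1, w2, w3) such that Z = g^w1 * (g^s)^w2 * (g^y)^w3,
# i.e. in the exponent:  s*y = w1 + w2*s + w3*y  (mod q).
w1, w2, w3 = 4, 3, 9               # a consistent (illustrative) explanation
assert (w1 + w2 * s + w3 * y) % q == (s * y) % q

# Solve for s:  s*(y - w2) = w1 + w3*y  (mod q), possible when y != w2.
recovered = (w1 + w3 * y) * pow(y - w2, -1, q) % q
assert recovered == s
```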
It is not hard to see that this is true for any problem in which the adversary does not output group elements, which includes in particular all non-interactive decisional problems. That is, all problems in which the adversary has to distinguish between two a priori fixed distributions. Fuchsbauer et al. mention this limitation of their model in their paper and explicitly leave it as an important open question to generalize the algebraic group model to enable reasoning about decisional problems as well. With this in mind, we present the decisional algebraic group model, or the DAGM for short. This model is a generalization of the AGM, and it allows one to reason about algebraic distinguishers, which output only a decision bit and no group elements. Though our model generalizes the AGM, I would still argue that this is a reasonable strengthening of the model. The DAGM still lies between the generic group model and the standard model. That is, it assumes considerably less than the generic group model. In particular, it still allows adversaries to use the representation of group elements in any way they see fit. In the paper, we also consider extensions of the DAGM to groups equipped with a multilinear map, but I will not have the time to cover these extensions in this talk, and you can see the paper for details. We then turn to show that our model is indeed useful for reasoning about decisional problems. Within our model, we prove the equivalence of strong decisional problems and the discrete logarithm problem or a simple variant thereof. Concretely, we show that the decisional Diffie-Hellman problem for algebraic distinguishers is equivalent to the discrete logarithm problem. We then generalize this result to show that, in fact, the decisional K-linear problem introduced by Shacham is equivalent within our model to the discrete logarithm problem, even in groups with a K-linear map. 
Finally, we show that the decisional Uber problem in bilinear groups, presented by Boneh, Boyen, and Goh, is equivalent in our model to the q-DLOG problem, which is a parameterized higher-order version of the discrete logarithm problem. Due to time restrictions, I will not be able to talk about these last two results in this talk, but I encourage you to see the paper for details. Before presenting our model, I want to quickly talk about some related work. Fuchsbauer, Kiltz, and Loss do consider in their work some kind of an algebraic distinguisher. Concretely, they show that the CCA1 security of ElGamal encryption is algebraically equivalent within their model to a parameterized variant of DDH. However, their reduction crucially relies on the fact that the ElGamal adversary issues queries which are composed of group elements, and hence their model can be used in order to extract some form of algebraic knowledge from these queries. They do not extract any sort of algebraic knowledge directly from the advantage of the adversary in the CCA1 security game, and hence the reduction is to another decisional problem. As we will see, our model does enable extracting algebraic knowledge from the distinguishing advantage of algebraic distinguishers, and relating the hardness of decisional problems, such as DDH, to the hardness of computational problems, such as discrete log. Bauer, Fuchsbauer, and Loss also recently reduced the Uber problem of Boneh, Boyen, and Goh in bilinear groups to the q-DLOG problem. Our result differs in that it considers the decisional variant of the Uber problem, whereas their work considers the computational variant. However, the techniques used in both works, though similar in spirit, are not identical, and the reduction of Bauer et al. is tighter. Finally, a recent work presented the knowledge-of-orthogonality assumption. 
You can see the paper for a detailed comparison of our model and their assumption, but I will say that our model is somewhat similar in spirit to their assumption; however, our model is much more general and is aimed at capturing arbitrary assumptions, whereas their assumption is more specifically tailored for their needs. Okay, so the rest of this talk is organized as follows. We will start by presenting the decisional algebraic group model, or the DAGM. Then we will see an informal explanation as to why generic distinguishers are also algebraic, meaning that the DAGM is indeed less restrictive than the generic group model. We will continue to show a reduction from algebraically breaking DDH to breaking discrete log, as an example of how the DAGM might be used. Finally, we will conclude with some closing remarks. So let's start with presenting our model, but before that, let's see together what it is that we want the model to satisfy. On the one hand, in order to keep with the spirit of the algebraic group model, we would like our model to be a weakening of the generic group model, and concretely, to allow adversaries to receive the explicit representation of group elements. On the other hand, of course, we would like the model to be stronger than the standard model. That is, we would like it to enable meaningful reductions beyond those known in the standard model. The way in which Fuchsbauer et al. achieved these goals in their algebraic group model is by requiring that the adversary provide an algebraic explanation for how it computes its output. As we've seen, in the case of the AGM, this explanation is a vector, describing how the output group elements are computed from previously received group elements. An important property of this algebraic explanation is that it is indeed extractable from generic algorithms, and hence the model induced by requiring this explanation is less restrictive than the generic group model. 
With this in mind, our goal is to extend this rationale to distinguishers whose outputs are decision bits and are not comprised of group elements. As a first attempt to define the decisional algebraic group model, consider the following requirement. Whenever an algebraic distinguisher accepts by outputting the bit 1, it must also output, as its algebraic explanation, a zero test which is passed by the provided input. So if the input to an algebraic distinguisher A is X1 through XL, and A outputs 1 as its decision bit, it must also output a vector W of integers in Zp. This vector W induces a linear zero test in the exponent, and this zero test should be satisfied by the input elements X1 through XL. That is, the product of the Xi's raised to the corresponding entries Wi of W should be equal to the identity element of the group. On the face of it, this definition already captures the fact that the only information available to a generic algorithm about the input group elements is the equality pattern among them and among the elements which it computes via the group operation. Any such equality naturally corresponds to a linear zero test in the exponent, and we will get back to this point in more detail in a few slides. The first problem that comes to mind when seeing this definition is that it is too weak. The adversary can always output a vector W which is the all-zeros vector. Such a vector will trivially satisfy the condition required by the definition that we just saw, but obviously does not reflect any sort of algebraic knowledge learned by the adversary regarding the input. A naive fix for this issue is to simply require that whenever the adversary accepts, the explanation vector W must also be non-trivial. That is, it must contain at least one non-zero entry. However, this requirement is now too strong, as it is not descriptive of generic algorithms. 
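Such a linear zero test in the exponent can be sketched in a few lines. This is a minimal toy example in the order-11 subgroup of Z_23^* generated by 2; all concrete values are illustrative.

```python
# Minimal sketch of a linear zero test in the exponent, in a toy group:
# the order-11 subgroup of Z_23^*, generated by g = 2.
P, q, g = 23, 11, 2

def passes_zero_test(xs, w):
    """Does prod_i xs[i]^w[i] equal the identity element (1 mod P)?"""
    out = 1
    for x, wi in zip(xs, w):
        out = out * pow(x, wi % q, P) % P
    return out == 1

# Input (g, g^3, g^4, g^7): the exponents satisfy 3 + 4 = 7, so the
# non-trivial vector W = (0, 1, 1, -1) is a zero test the input passes.
xs = [pow(g, e, P) for e in (1, 3, 4, 7)]
assert passes_zero_test(xs, [0, 1, 1, -1])
assert not passes_zero_test(xs, [1, 1, 1, -1])
```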
A generic algorithm can always accept without, so to speak, having knowledge of a non-trivial vector W such that the product of X1 to the W1 through XL to the WL is the identity element. Consider now a more subtle definition, which requires that if an algebraic algorithm A distinguishes between two distributions D0 and D1 with advantage epsilon, then there exists some bit B such that when A is executed on an input drawn from DB, it outputs a non-trivial vector W satisfying the multiplicative condition from before with probability at least epsilon. This definition already suffices for some applications, and as we will see later, this definition is indeed a weakening of the generic group model. However, note that even if W is non-trivial, it can still be a bad explanation for the decision of an algebraic distinguisher. Consider for example the DDH problem, in which the adversary has to distinguish between D0, consisting of G, G to the X, G to the Y, and G to the XY, and D1, consisting of G, G to the X, G to the Y, and G to the Z, for uniformly random X, Y, and Z, and consider a vector W which is not the all-zero vector but has a zero entry in one of its last three entries. In this case, when we project D0 onto the support of W, meaning we ignore the group elements which correspond to zero entries of W, we get exactly the same distribution that we would get if we project D1 onto the support of W. Intuitively, this means that the zero test associated with the vector W cannot be used to distinguish between the two distributions. This situation also makes the definition inapplicable for some of our applications. This leads us to our full-fledged definition of algebraic distinguishers. To this end, we first present the notion of good vectors. Consider an algorithm A which receives as input L group elements and outputs a decision bit along with a vector W of integers in Zp. 
For a distribution D over L-tuples of group elements, we denote by D sub W the distribution obtained from D by restricting it to the support of W, meaning the process of sampling from this distribution is defined by first sampling from D and replacing all group elements whose corresponding entry of W is zero with some unique erasure symbol. We say that a vector W is good for a pair D0, D1 of distributions over L-tuples of group elements if the distribution D0 restricted to the support of W is distinct from the distribution D1 restricted to the support of W. Equipped with the definition of good vectors, we can now define our notion of algebraic distinguishers. As before, consider an algorithm A which receives as input L group elements and outputs a decision bit along with a vector W of integers in Zp. We say that A is algebraic if two requirements hold. First, it should always be the case that the product of X1 raised to the power of W1 through XL raised to the power of WL is equal to the identity. Second, for any two distributions D0 and D1 over L-tuples of group elements, if A distinguishes between them with advantage epsilon and in time at most T, then there must be a bit B for which the following holds. When A is invoked on an input sampled according to DB, the probability that it outputs a vector W which is good for the pair D0 and D1 is at least epsilon over T squared. In other words, with probability at least epsilon over T squared, the vector W is such that if we restrict D0 and D1 to the support of W, we get two different distributions, where the probability is over the sampling of the input from DB and the randomness of A. At the moment it might be unclear where the term T squared comes from, but hopefully this will become clear in the coming slides. In the paper, we consider extended versions of this definition. 
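The restriction to the support of W can be sketched as follows, using the DDH tuples from later in the talk as the toy distributions. All concrete values (the group, the vector, the erasure symbol "_") are illustrative.

```python
# Sketch of restricting a tuple to the support of W: entries whose
# W-coordinate is zero are replaced by an erasure symbol. Toy group:
# the order-11 subgroup of Z_23^*, generated by g = 2.
import random

P, q, g = 23, 11, 2

def restrict(tup, w):
    """Keep entries on the support of W; erase the rest."""
    return tuple(x if wi % q != 0 else "_" for x, wi in zip(tup, w))

def sample_ddh(b):
    """Sample (g, g^x, g^y, g^(xy + b*z)) for random x, y, z."""
    x, y, z = (random.randrange(1, q) for _ in range(3))
    return (g, pow(g, x, P), pow(g, y, P), pow(g, (x * y + b * z) % q, P))

# W = (5, 0, 0, 0) touches only the fixed generator, so restricting D0
# and D1 to its support yields the very same distribution: W is NOT
# good for this pair, even though it is non-trivial.
assert restrict(sample_ddh(0), [5, 0, 0, 0]) == restrict(sample_ddh(1), [5, 0, 0, 0])
```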
So first of all, we define the DAGM for general interactive games, in which adversaries can receive multiple messages from the challenger, and moreover, these messages may contain additional information other than just group elements. Secondly, we also consider possible strengthenings of the requirement that W is good with probability at least epsilon over T squared. These strengthenings are justifiable in our view, since they still significantly weaken the generic group model, but we do not need them for the applications that we consider in this work. Let's now see why generic distinguishers are also algebraic according to our definition. So here's the claim: for every generic algorithm A there is an algebraic algorithm B, where algebraic is according to our definition from the previous slides, such that the advantage of B in distinguishing between any two distributions is the same as the advantage of A, and B runs in essentially the same time as A. At a high level, this claim is based on two very well-established observations regarding the generic group model. First is the observation that if A, which is a generic algorithm, receives L group elements X1 through XL as input and computes a group element Y, then anyone observing A's oracle queries can extract a vector W such that Y is equal to the product of the Xi's raised to the power of the corresponding entries of the vector W. This observation is the one underlying the original algebraic group model of Fuchsbauer et al. The second observation is that the only information at the disposal of A regarding the input group elements comes from the equality pattern among these group elements and the group elements which A computes. Hence, informally, in order to distinguish between two distributions with advantage epsilon, with probability at least epsilon there has to be some equality that arises when the input is drawn from one of the distributions, and this equality arises with different probabilities under the two distributions. 
Using these two observations, we can define the algebraic algorithm B guaranteed by the claim. B first simulates the generic group oracle to A and outputs the same output as A. This step depends on the exact generic group model used, but is fairly standard and straightforward. B also keeps track of all the pairs of equal group elements that arise throughout the computation of A. In order to make sure that this list is non-empty, we include in it all pairs which contain the same group element in both coordinates. Note that if A runs in time T, then there are at most T squared such pairs. Finally, in order to compute the vector W, B samples one pair Yj, Yk of equal group elements uniformly from the list of all pairs of equal group elements. By the first observation from the previous slide, B can compute vectors U and V such that Yj is the product of Xi raised to the power of Ui, for i running from 1 to L, and Yk is the product of Xi to the Vi. The vector W is then the difference between U and V. So this again is how B decides on W, and since Yj is equal to Yk, it indeed holds that the product of the Xi's raised to the power of the corresponding entries of W is equal to the identity element. Now fix any two distributions D0 and D1, and let epsilon be the advantage of A in distinguishing between them. Very informally, as we observed before, with probability epsilon there has to be some equality that arises when the input is drawn from one of the distributions, and this equality occurs with different probabilities under the two distributions. Hence, for at least one of the two distributions, when the input is sampled from this distribution, with probability at least epsilon there is a pair Yj and Yk such that the vector W which they induce is good for D0 and D1. Recall that this means that D0 and D1 remain distinct even when restricted to the support of W. Since there are at most T squared pairs of equal group elements, B chooses this particular pair with probability at least one over T squared. 
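The step in which B turns a collision into an explanation vector can be sketched as follows. The exponent vectors U and V below are hypothetical values for the example, and q = 11 stands in for the group order.

```python
# Hedged sketch of how the wrapper B derives its vector W: every group
# element A computes carries a known exponent vector over the inputs,
# and a collision Yj == Yk between elements with vectors U and V
# yields the zero-test vector W = U - V (mod q).
q = 11

def vec_sub(u, v):
    """Entry-wise difference of exponent vectors, modulo q."""
    return [(ui - vi) % q for ui, vi in zip(u, v)]

# Say A's input is (X1, X2), and during its run it computed
#   Yj = X1^2 * X2^1  (vector U)   and   Yk = X1^0 * X2^5  (vector V),
# and B observed the equality Yj == Yk. Then W = U - V passes the
# zero test, since X1^(2-0) * X2^(1-5) = Yj / Yk = identity.
u, v = [2, 1], [0, 5]
w = vec_sub(u, v)
assert w == [2, 7]     # (2 - 0, 1 - 5) mod 11
```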
Now let's see a simple example of how to use the DAGM. Consider the decisional Diffie-Hellman problem, or DDH for short. We will show how to reduce the task of algebraically breaking the DDH problem within our model to the task of computing discrete logarithms in the group. Assume the existence of an algebraic distinguisher A which receives as input a DDH instance: a generator G; group elements capital X and capital Y, which are G to the X and G to the Y for randomly chosen integers X and Y in Zp; and a group element capital Z, which is G to the power of XY plus B times little z, where little z is a randomly chosen integer in Zp. That is, if B is 0 then Z is G to the XY, and if B is 1 then Z is a random group element. A then outputs a bit B', trying to guess the bit B, and since A is an algebraic algorithm, according to our definition it also outputs a vector W of 4 integers in Zp. This vector W has to satisfy the condition that the product G to the W1, times X to the W2, times Y to the W3, times Z to the W4 is equal to the identity element. Moreover, assume that A has advantage epsilon in guessing B correctly and that it runs in time T. Since A is algebraic, this means that there exists a bit B such that when Z is sampled to be G raised to the power of XY plus B times little z, the probability that the vector W is good for the pair of DDH distributions is at least epsilon over T squared. Recall that according to our definition, being good for the pair of DDH distributions means that when restricting both of them to the support of W, we still have two distinct distributions. Our goal now is to use A in order to construct an algorithm B which successfully computes discrete logarithms in the group. The discrete log algorithm B receives as input the generator G and a random group element capital S, and its goal is to compute little s, which is the discrete log of capital S with respect to the generator G. 
As a first idea, consider what happens if we plant little s instead of the exponent X in a random DDH instance and then invoke A on it. Concretely, B samples a bit beta in order to decide which of the two DDH distributions to simulate to A. The reason for choosing beta uniformly at random will become apparent in the following slides. Then, B samples exponents Y and Z and invokes A on the input G, S, capital Y which is G to the little y, and capital Z which is G raised to the power of sy plus beta times z. Note that computing capital Z does not require knowledge of the secret exponent little s. As discussed, when A outputs a bit B prime, it also outputs a vector W satisfying this equation. But since G is a generator of the group, this equation implies that W1, plus W2 times s, plus W3 times y, plus W4 times the quantity sy plus beta z, is equal to zero modulo P. It would seem that since B knows y and z, it can solve this modular equation for s and output the solution. The problem is that this equation is useless for finding s whenever the coefficient of s, which is W2 plus W4 times y, is zero modulo P. There are two cases to consider. In the first case, W2 and W4 are both zero. But since A is algebraic, there exists a bit B such that when beta is equal to B, the probability that W is good for the pair of DDH distributions is at least epsilon over T squared. And in order for the two DDH distributions to remain distinct even when projected onto the support of W, it must be the case that W2, W3 and W4 are all non-zero. The probability that beta is indeed equal to this good bit B is one half, and hence the probability that W2 and W4 are both non-zero is at least epsilon over 2 T squared. So the second case to consider is the one in which W2 and W4 are both non-zero, but still W2 plus W4 times y is zero modulo P. Note that in this case, B can plant S instead of Y and output minus W2 over W4 modulo P. 
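The modular equation that B solves in the good case can be sketched with a toy instance. All concrete numbers below are illustrative, and q = 11 stands in for the group order; w1 is forced by the zero-test condition once the other entries are fixed.

```python
# Toy instance of the equation B solves after planting s as the
# exponent x:
#   w1 + w2*s + w3*y + w4*(s*y + beta*z) = 0  (mod q),
# which gives  s = -(w1 + w3*y + w4*beta*z) / (w2 + w4*y)
# whenever the coefficient of s is non-zero modulo q.
q = 11
s, y, z, beta = 7, 4, 5, 0         # s is unknown; y, z, beta chosen by B

# Build a consistent explanation vector: pick w2, w3, w4 freely and
# let w1 be forced by the zero-test equation.
w2, w3, w4 = 3, 2, 6
w1 = -(w2 * s + w3 * y + w4 * (s * y + beta * z)) % q
assert (w1 + w2 * s + w3 * y + w4 * (s * y + beta * z)) % q == 0

den = (w2 + w4 * y) % q            # coefficient of s
assert den != 0                    # the solvable case
recovered = -(w1 + w3 * y + w4 * beta * z) * pow(den, -1, q) % q
assert recovered == s
```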
Since we do not know beforehand whether this event, in which W2 plus W4 times y is zero modulo P, will occur, the final reduction simply plants S instead of X with probability one half and instead of Y with probability one half. A simple argument yields that the overall success probability of B in computing little s is at least epsilon over 4 T squared. So let's conclude. We presented the decisional algebraic group model, or the DAGM for short. This is a generalization of the algebraic group model that enables reasoning about decisional problems, while still being a weakening of the generic group model and allowing adversaries to use the representation of group elements. Within this model, we presented several equivalence results between strong decisional problems and variants of the discrete logarithm problem, and we saw one of them in some more detail. To interpret our model and results, note that on the one hand, they can be seen as bolstering our confidence in these strong decisional assumptions, while on the other hand, if one wishes to refute any of these assumptions, then their efforts should either be directed towards extracting discrete logarithms or should deviate from all algebraic techniques that are captured within our framework. Okay, so with that I will conclude, and thank you for listening.