All right, so thanks, Ron. I want to start by just putting up the statement of the decisional Diffie-Hellman (DDH) assumption. We fix some cyclic group G of order q, and little g is a generator of the group. The assumption states that for uniformly random x, y, and z from Z_q, no computationally bounded adversary can distinguish between seeing (g^x, g^y, g^xy) and seeing (g^x, g^y, g^z). So it seems fairly straightforward, but there's a fairly subtle point here that's a little bit ambiguous: when exactly is this generator g chosen? If you hold the opinion that the generator g is just a fixed part of the group description, chosen independently of the DDH instance, then you're in good company. This is the definition of DDH in a number of foundational cryptographic works. But you might hold a different opinion: that in DDH the generator should be chosen as part of the instance, a random generator sampled along with the random x, y, and z. And if you think this, you're also in good company, because this is the definition of DDH in a different set of foundational works. I'm going to call the first thing fixed DDH, meaning DDH with a fixed generator, and the second thing random DDH, meaning DDH with a random generator. A natural question: both of these assumptions are called "the DDH assumption," so are they equivalent? You can probably guess, from the fact that the talk is not over, that they're not known to be equivalent. This was observed by Shoup in '99 and also discussed in some other works: the fixed and random DDH assumptions are not known to be equivalent. He didn't give a black-box separation or anything, but he pointed out that there's no trivial equivalence between the two assumptions. So this is a slightly concerning state of affairs.
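To make the two formulations concrete, here's a minimal sketch of both samplers over a toy prime-order subgroup. The parameters and function names are mine, purely for illustration; real instantiations use cryptographically large groups.

```python
import secrets

# Toy parameters for illustration only: a subgroup of prime order Q = 509
# inside Z_1019^*. Real groups are astronomically larger.
P = 1019   # prime modulus
Q = 509    # prime order of the subgroup, Q = (P - 1) // 2
G = 4      # fixed generator of the order-Q subgroup mod P

def fixed_ddh_instance(real: bool):
    """Fixed-generator DDH: the generator G is a public constant,
    chosen once and for all, independently of the instance."""
    x, y = secrets.randbelow(Q), secrets.randbelow(Q)
    z = (x * y) % Q if real else secrets.randbelow(Q)
    return (pow(G, x, P), pow(G, y, P), pow(G, z, P))

def random_ddh_instance(real: bool):
    """Random-generator DDH: a fresh generator h = G^r is sampled as
    part of the instance and handed to the adversary along with it."""
    r = 1 + secrets.randbelow(Q - 1)   # nonzero, so h generates the subgroup
    h = pow(G, r, P)
    x, y = secrets.randbelow(Q), secrets.randbelow(Q)
    z = (x * y) % Q if real else secrets.randbelow(Q)
    return (h, pow(h, x, P), pow(h, y, P), pow(h, z, P))
```

The only difference between the two games is that one extra line sampling h, which is exactly the ambiguity the talk is about.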
And you can also ask: do we have similar issues for discrete log or CDH? If you think about it for a little bit, you'll realize that we don't really have these issues there; this seems to be uniquely a DDH problem. So we can ask when the fixed- and random-generator versions of these assumptions are equivalent. For discrete log and CDH, there are folklore reductions showing that the fixed- and random-generator versions are equivalent, but for DDH we knew neither an equivalence nor a separation. Before I go on, I just want to note that an adversary for the random-generator version of a problem always implies an adversary for the fixed-generator version. This is simply because, given a fixed-generator instance, you can raise every one of your group elements to a random power and feed the result to your random-generator adversary. So the random-generator problem is always at least as hard as the fixed-generator problem. As a warm-up, here's the equivalence between random-generator and fixed-generator discrete log; it's very easy. In the random-generator instance, g^r is my random generator and I'm given g^rx; my task is to find x. If I have a fixed-generator discrete log adversary, I run it on g^r to get r, and run it again on g^rx to get rx. Once I have r and rx, it's trivial to find x. For CDH, there's an equivalence that's a little harder than the discrete log one. I'm not going to go through it, but I will point out that the folklore CDH equivalence actually requires knowing the totient of the group order. If it's a prime-order group, then just knowing the prime is good enough, but if it's a composite-order group, you need to know the factorization to make the CDH equivalence go through.
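The warm-up reduction above can be written out in a few lines. The brute-force solver here just stands in for the assumed fixed-generator adversary, so the reduction itself is runnable; parameters are the same illustrative toy subgroup.

```python
import secrets

# Toy subgroup (illustrative parameters, not from the talk).
P, Q, G = 1019, 509, 4

def fixed_dlog_adversary(h: int) -> int:
    """Stand-in for a fixed-generator discrete-log solver: returns x with
    G^x = h (mod P). Brute force, just to make the reduction runnable."""
    acc = 1
    for x in range(Q):
        if acc == h:
            return x
        acc = (acc * G) % P
    raise ValueError("element not in the subgroup")

def random_dlog_via_fixed(gen: int, target: int) -> int:
    """Folklore reduction from the talk: solve a random-generator instance
    (gen, gen^x) with two calls to the fixed-generator solver."""
    r = fixed_dlog_adversary(gen)       # gen = G^r
    rx = fixed_dlog_adversary(target)   # target = G^(r*x)
    return (rx * pow(r, -1, Q)) % Q     # x = rx / r mod Q (Q prime, r != 0)

# Usage: sample a random-generator instance and recover x.
r = 1 + secrets.randbelow(Q - 1)
x = secrets.randbelow(Q)
gen = pow(G, r, P)
assert random_dlog_via_fixed(gen, pow(gen, x, P)) == x
```

Note the two oracle calls; that factor of two is exactly what will matter later in the preprocessing bounds.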
So this motivates looking at the question in several different settings, depending on what you know about the group order or its totient. You can consider three settings. The first is known prime order, which is the most common setting. You can also consider unknown prime order. And finally, you can consider known composite order with unknown factorization. The simple discrete log equivalence goes through in all cases, but for CDH the equivalence only holds in the known-prime-order setting. In all the remaining boxes, it was unclear whether there's an equivalence or a separation between the fixed- and random-generator versions of these problems. So that's maybe the state of the art before this work. As a first result, maybe as a form of cryptographic housekeeping, we give black-box separations between the fixed- and random-generator versions in all the remaining boxes where we didn't know equivalences. I'm not going to go into detail about how we actually do these black-box separations, but they go through the way you would expect. Basically, you prove that in the generic group model, even if you're equipped with an oracle that solves the fixed-generator version of your problem, it's still hard for you to solve the random-generator version. In some cases, for the CDH separations, we need to make further computational assumptions for the black-box separation to go through, but for DDH our separations are unconditional. So this raises a question: given these black-box separations, it might be possible that there are concrete groups that realize them. What would happen if we had such groups?
So as an observation, and I'm calling it an observation because it's not actually terribly hard to figure out, if you have a group where fixed-generator CDH is easy but random-generator CDH is hard, it turns out this implies something called a self-bilinear map. This is a group equipped with a pairing e where the source groups and the target group are all the same group. Self-bilinear maps are actually incredibly powerful cryptographic objects, and they're known to imply, for example, multi-party non-interactive key agreement, which is otherwise only known from multilinear maps or from obfuscation. So self-bilinear maps are a really powerful object, and it's an open question to construct them. And it turns out that if you have a group where CDH is "split," so that fixed CDH is easy but random CDH is hard, you can build a self-bilinear map from it. How do we interpret this result? I don't want to say that the way to look for self-bilinear maps is to look for a group where CDH is split. The more plausible interpretation is that on any group where we're comfortable making a CDH assumption, it would be really, really surprising if there were actually such a split: while there is a black-box separation, it seems quite unlikely that fixed CDH is easy but random CDH is hard on any of the groups where we're comfortable assuming CDH. Before I go on, I should mention that we have analogous results for the DDH setting: we show that if a group has fixed DDH easy but random DDH hard, this actually implies a simple black-box construction of identity-based encryption. For the rest of this talk, I'm going to change gears and look at a few other settings in which this fixed-versus-random distinction is particularly important.
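The structure of that observation fits in a few lines: a fixed-generator CDH solver directly computes e(g^x, g^y) = g^xy. In this sketch the "easy fixed CDH" is faked by brute force over a tiny subgroup, so only the shape of the construction is meant to be faithful to the talk.

```python
# Toy illustration: a fixed-generator CDH solver yields a self-bilinear map.
P, Q, G = 1019, 509, 4

def fixed_cdh_oracle(gx: int, gy: int) -> int:
    """Hypothetical fixed-generator CDH oracle: given G^x and G^y,
    return G^(x*y). Brute-forced here purely for demonstration."""
    acc, x = 1, 0
    while acc != gx:        # recover x by exhaustive search
        acc = (acc * G) % P
        x += 1
    return pow(gy, x, P)

def pairing(a: int, b: int) -> int:
    """Self-bilinear map e: G x G -> G, one fixed-CDH call per evaluation.
    Source and target group are the same group, as required."""
    return fixed_cdh_oracle(a, b)

# Bilinearity check: e(G^(2x), G^y) = e(G^x, G^y)^2.
e1 = pairing(pow(G, 12, P), pow(G, 34, P))
assert e1 == pow(G, 12 * 34, P)
assert pairing(pow(G, 24, P), pow(G, 34, P)) == pow(e1, 2, P)
```

The security requirement, that random-generator CDH stays hard even though this map is efficiently computable, is of course exactly the "split" the talk is describing and cannot be demonstrated in a toy group.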
So, the first thing we're going to look at is the fixed-versus-random distinction in the setting of generic preprocessing adversaries against group-based assumptions. Preprocessing attacks, in particular on discrete log, work in two phases. First there's an offline phase, where a computationally unbounded attacker is given complete access to some group with a known fixed generator, little g. In this offline phase, the attacker tries to compute the most useful advice that will help it solve discrete log instances in the future, and it comes up with an S-bit advice string. In the online phase, the attacker is now computationally bounded, with some time bound T. It's given fresh discrete log samples (g, g^x) with respect to that same fixed generator g, and its goal is to find x. This is a very popular setting in which to analyze generic attacks on discrete log, because it mirrors the situation we encounter in real life, where we only have a handful of cryptographic groups and we're often using the same generator over and over again. It's known that preprocessing attacks on discrete log can do fairly well: in a group of order N, with S bits of advice and online time T, you can solve the fixed-generator discrete log problem with probability roughly ST^2/N. For comparison, if you don't have any advice, the best you can do is T^2/N. So that's your probability of solving discrete log given the resources S and T. And it turns out this result is actually tight: Corrigan-Gibbs and Kogan showed last year at Eurocrypt that a generic adversary succeeds with probability at most ST^2/N, so the bound is tight up to polylogarithmic factors.
But our observation in this work is that this doesn't fully resolve the complexity of solving discrete log with preprocessing. In particular, the bound is only tight for fixed-generator discrete log; in the random-generator setting, the success probabilities change slightly. Why is that? Given a random-generator discrete log instance, you have two ways of solving it. One is to use your fixed-generator preprocessing attack, but then you have to run it twice. Remember that to solve a random-generator discrete log problem given a fixed-generator oracle, you run it once on the generator and once on the group element, and solving two fixed-generator instances squares your success probability. The other option is to ignore the preprocessing advice entirely and just use the best algorithm without preprocessing, baby-step giant-step, which succeeds with probability T^2/N. So in the fixed-generator setting your success probability is ST^2/N, but in the random-generator setting this drops to T^2/N + S^2 T^4/N^2. And when this is your advantage, the ST^2/N bound of Corrigan-Gibbs and Kogan is no longer tight. In this work, we close this gap by giving a matching bound, showing that a generic adversary solves random-generator discrete log with probability at most T^2/N + S^2 T^4/N^2. I should mention that the techniques we use to prove this bound come from the presampling techniques of Coretti, Dodis, and Guo that have been developed over the past two years. And in the paper, we have analogous tight bounds for the computational Diffie-Hellman problem.
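Plugging in some illustrative numbers (mine, not from the paper, and with constants suppressed) makes the gap between the two bounds easy to see:

```python
# Illustrative resource parameters for the preprocessing bounds.
n = 2 ** 64   # group order N
s = 2 ** 20   # bits of preprocessed advice S
t = 2 ** 16   # online running time T

fixed_bound = s * t ** 2 / n                              # ST^2 / N
random_bound = t ** 2 / n + (s ** 2 * t ** 4) / n ** 2    # T^2/N + S^2 T^4 / N^2

# Two oracle calls square the fixed-generator advantage, which is exactly
# where the second term comes from: (ST^2/N)^2 = S^2 T^4 / N^2.
assert fixed_bound ** 2 == (s ** 2 * t ** 4) / n ** 2

# For these parameters the random-generator adversary does strictly worse.
assert random_bound < fixed_bound
```

With these numbers, the fixed-generator advantage is 2^-12, while the random-generator advantage is about 2^-24, matching the talk's point that preprocessing helps much less once the generator is randomized.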
And the takeaway from this is that, everything else being equal, preprocessing attacks don't do quite as well in the random-generator setting. Maybe this is intuitive: preprocessing is really taking advantage of the fact that you're always going to solve these discrete log problems with respect to the same generator, and once you make that generator random, your success probability drops a little. So this is maybe spiritually similar to salting. For the last part of this talk, I'm going to shift gears again and look at a totally different setting where the fixed-versus-random-generator distinction matters, and this is probably the setting where it matters the most: assumptions over non-uniform exponents. I should say what I mean, since this is not really a common phrase. If you think about CDH and DDH, all the secret exponents in these assumptions are uniformly random in Z_q. But sometimes we encounter assumptions in crypto that don't have this property. One in particular, due to Canetti, is DDH-II. It has exactly the same form as standard DDH, except that the exponent x can now be chosen from an arbitrary well-spread distribution. So y and z are still chosen uniformly at random, but the assumption has to hold for x drawn from any distribution with super-logarithmic entropy. It's a very strong assumption: it says that for any distribution you can come up with, this computational indistinguishability needs to hold. So why formulate this sort of nasty-looking assumption? Well, it turns out that DDH-II implies obfuscation for point functions. Before I go on, let me define what that is. Point functions are functions that accept on exactly one point.
So C_y is a point function with a hard-coded string y: it accepts if its input equals y, and otherwise it rejects. In point function obfuscation, the goal is to give out an implementation of this function, but the implementation shouldn't leak y. You should be able to evaluate the point function on any x you want and learn whether or not it matched y, but the only way you should actually be able to learn y is if you happen to guess it yourself. You can think of point function obfuscation as a secure way to do password checking, for example, and it has a lot of interesting cryptographic applications. It turns out that to achieve the strongest possible definitions of point function obfuscation (for those of you familiar, this is worst-case virtual black-box obfuscation), you really do need strong assumptions that quantify over all well-spread distributions. So, for better or for worse, these assumptions are here to stay. Next I want to talk about non-malleable point function obfuscation. The idea of non-malleable obfuscation was first proposed by Canetti and Varia in 2008, but recently it has gotten renewed attention: at last year's Eurocrypt, Komargodski and Yogev observed that Canetti's original construction of point function obfuscation from DDH-II suffers from a malleability problem. If I'm an adversary and I see an obfuscation of C_y, it's possible to maul it into an obfuscation that accepts on a related point. That is, C_y accepts if your input equals y, and I can change it into an obfuscation that accepts on some related f(y). I can't learn y myself, because of the security of the obfuscator, but I can maul the construction.
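To pin down the interface being discussed, here is a toy point-function obfuscator built from a salted hash. To be clear, this is a common heuristic sketch of the functionality and NOT the DDH-II-based construction from the talk; it only illustrates what "evaluate without leaking y" means.

```python
import hashlib
import secrets

def obfuscate_point(y: bytes):
    """Toy point-function obfuscation via a salted hash (heuristic sketch,
    not the group-based construction discussed in the talk)."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + y).digest()

    def check(x: bytes) -> bool:
        # The closure stores only (salt, digest), never y itself; learning y
        # from these values would require inverting the hash (or guessing y).
        return hashlib.sha256(salt + x).digest() == digest

    return check

# Usage: password-check style evaluation.
check = obfuscate_point(b"hunter2")
assert check(b"hunter2")
assert not check(b"password")
```

This hash-based version also previews the malleability issue: a construction can hide y perfectly well and still allow an adversary to transform the obfuscation into one for a related point, which is exactly the gap non-malleability closes.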
And so the goal in Komargodski and Yogev is to change the construction so that it's non-malleable; in particular, they focus on the class of polynomial mauling functions. To prove security, they formulate a new assumption called the strong power DDH assumption. It has the standard quantification over all well-spread distributions, which again is necessary to prove security of point function obfuscation. The assumption states that as long as x is drawn from a well-spread distribution, the group elements g^x, g^(x^2), g^(x^3), all the way up to g^(x^k), are computationally indistinguishable from random group elements. So as long as x has enough entropy, you can't distinguish between these powers and random group elements. This assumption, it turns out, implies non-malleable point function obfuscation. In this work, we take a closer look at this construction, and in particular at the strong power DDH assumption. We make the following observation: this assumption is really, really sensitive to whether the generator is fixed or random. If the generator is fixed, the assumption is actually false, and for a sort of silly reason. With a fixed generator, when we quantify over all well-spread distributions, we have to consider distributions that condition on the choice of g. And if a distribution can condition on g, you can simply sample a random x conditioned on g^x beginning with a zero bit. That's trivially distinguishable from a random group element, and you don't even need to use the higher powers. So this assumption is false when the generator is fixed; when the generator is random, it's not obviously false.
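The counterexample is simple enough to run. In this sketch I use the parity of the integer representation of g^x as the "planted" bit, standing in for "the bit string of g^x begins with a zero"; the toy parameters are the same illustrative subgroup as before.

```python
import secrets

# Toy group; G plays the role of the FIXED public generator.
P, Q, G = 1019, 509, 4

def bad_distribution_sample() -> int:
    """A well-spread distribution that conditions on the fixed generator:
    rejection-sample x until G^x mod P is even. This throws away only about
    one bit of entropy, so the distribution stays well-spread."""
    while True:
        x = secrets.randbelow(Q)
        if pow(G, x, P) % 2 == 0:
            return x

def distinguisher(h: int) -> bool:
    """Guess 'real' exactly when the planted structure is present."""
    return h % 2 == 0

# G^x always passes under the bad distribution, while a uniform subgroup
# element passes only about half the time: constant distinguishing advantage,
# and the higher powers g^(x^2), ..., g^(x^k) were never even needed.
assert distinguisher(pow(G, bad_distribution_sample(), P))
evens = sum(distinguisher(pow(G, x, P)) for x in range(Q))
assert 0 < evens < Q   # both parities occur among uniform subgroup elements
```

With a random generator the distribution must be fixed before the generator is sampled, so this conditioning trick is unavailable, which is exactly the fixed-versus-random sensitivity the talk points out.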
But the problem is that if you look closely at the KY18 construction, you'll realize that you actually have to use a fixed generator. If you try to switch to a random generator, an adversary can actually maul it, and if you want to put the random generator in a CRS, well, in the CRS setting this result was already known. The goal here was to give a standard-model construction of non-malleable point function obfuscation, but the underlying assumption can't hold if the generator is fixed. We communicated this issue to the authors of KY18, and in response they updated their assumption to a qualitatively different one, from which they can still get a construction of non-malleable point obfuscation. In this work, we took a slightly different approach: we gave our own assumption that suffices for non-malleable point obfuscation, and I'll give a toy version of it on this slide. The assumption says that for any x drawn from a well-spread distribution, and for a and r randomly sampled from Z_q, you can't distinguish between (a, g^(ax + x^2)), with the exponent a shown in the clear, and a together with a random group element. The point of designing an assumption of this form is that by injecting this random a, you defeat the ability to design a distribution on x that makes g^(ax + x^2) distinguishable from random. You can try to strip off the a, but if you divide it out, the x^2 term picks up a 1/a factor, and this a was chosen randomly, independently of your choice of the well-spread distribution. It turns out this is sufficient for non-malleable point obfuscation. And when you introduce new assumptions, it's very important to justify them in the generic group model, right?
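Here is the shape of the two distributions in the toy version of that assumption, over the same illustrative subgroup. The function names are mine; only the form of the tuples follows the slide.

```python
import secrets

# Same toy subgroup (illustrative parameters only).
P, Q, G = 1019, 509, 4

def assumption_sample(x: int):
    """Sample the 'real' side of the toy assumption: (a, g^(a*x + x^2)).
    The uniform a is drawn fresh, after any adversarial distribution on x
    has been fixed, which is what blocks conditioning attacks like the one
    against fixed-generator strong power DDH."""
    a = secrets.randbelow(Q)
    return a, pow(G, (a * x + x * x) % Q, P)

def random_side():
    """Sample the 'ideal' side: the same public a, alongside an
    independent uniform group element."""
    a = secrets.randbelow(Q)
    return a, pow(G, secrets.randbelow(Q), P)
```

The design point is visible in the code: since a is sampled inside the experiment, a distribution on x cannot depend on it, so the adversary cannot plant structure in g^(a*x + x^2) the way it could in g^x under a fixed generator.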
So it's not clear whether we should trust these new assumptions, and when you pose new assumptions that are very ad hoc, about the only way to inspire confidence is to show that no generic adversary can break them. In particular, we prove that our assumption holds in the generic group model, but we have to be really, really careful about how we prove security, because we don't want to fall into the same trap that makes the previous version of this assumption false. Specifically, in the generic group model proof, you have to allow the choice of the well-spread distribution to be fixed after the generic group labeling is fixed. In the generic group model, you have a representation, some bit string, for every single group element, and you have to allow the distribution itself to see the truth table of the labeling function before the distribution is specified. If you don't allow this, you can actually prove that blatantly false assumptions hold in the generic group model, such as "g to an entropic x is indistinguishable from a random group element." Along the way of proving that our assumption holds in the generic group model, we looked at a whole bunch of other proofs in the literature that had justified these non-uniform assumptions in the generic group model, and it turns out that all of the existing proofs of DDH-II, Canetti's strong DDH assumption, have the same issue: the generic group labeling function is sampled independently of the well-spread distribution. And so while the conclusion of all these proofs is not false, since DDH-II is indeed hard in the generic group model, the same proof strategies could be used to prove blatantly false assumptions.
So one of the contributions of our work is pointing out this issue in generic group proofs whenever you have to deal with assumptions that quantify over all well-spread distributions. And toward remedying this, we give a new generic group model proof that DDH-II holds against generic adversaries, making sure that the well-spread distribution is picked after the labeling is fixed. So that's it for my talk. Thanks for your attention.

Question: Are you aware of any cryptanalysis of these assumptions on concrete groups, either DDH-II or the assumption you came up with in your work? Has either been shown false for some concrete group?

Answer: I'm not aware of any. As far as I know, there's no concrete group where DDH holds but this assumption was shown to be false, but maybe other people know.

Question: You said you prove a bound in the generic group model; I wasn't sure whether it was for entropic DDH or entropic discrete log.

Answer: We prove two different assumptions hold in the generic group model: one is DDH-II, and one is the new assumption we had to formulate.

Question: For DDH-II, what kind of concrete bound do you get, and do you know whether it's tight?

Answer: I don't know off the top of my head; we can talk about it afterward.

Question: Is q prime?

Answer: Yes. For this whole talk, assume all the group orders are prime, except on the one slide where they weren't.

Okay, let's thank the speaker again.