Our next talk is "Towards a Classification of Non-Interactive Computational Assumptions in Cyclic Groups" by Essam Ghadafi and Jens Groth, and Jens will give the talk. Please start.

Thank you. Okay, so the setting is that we have a cyclic group of prime order. People have worked a lot with cyclic groups, and the question we're interested in is: what kind of advice should we give to cryptographers about which assumptions to use, and what advice should we give to cryptanalysts about which assumptions they should try to break? Let me just briefly mention that we have the generic group model, where we restrict the adversary to using only the generic group operations. So you can multiply elements, and you can test whether elements are equal. Okay, so there are a bunch of assumptions you could make in a cyclic group, and here are some examples. But we've seen a proliferation of assumptions, so these are not the only ones; especially with pairing-based protocols we have seen many new assumptions. So that's the kind of assumption we want to study. We focus on one special class of assumptions, the non-interactive computational assumptions, where there is some target group element that the adversary is trying to compute. In the computational Diffie-Hellman assumption, for instance, the adversary is given these three elements g, g to the a, and g to the b, and is trying to compute g to the a times b. In this talk I'm mainly focusing just on the cyclic group case, but our results do also translate to pairing groups; I'll return to that at the very end of the talk. Okay, so let's see, what can we say about these non-interactive computational assumptions in cyclic groups? Certainly we want the discrete logarithm problem to be hard to solve, otherwise everything is broken. And specifically for these assumptions that we're looking at, where the adversary is trying to compute some target group element, we also need to assume the computational Diffie-Hellman problem to be hard.
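To fix ideas, here is a minimal sketch of the computational Diffie-Hellman game just described, in a toy prime-order group (the order-11 subgroup of Z_23^*). All the concrete numbers and names are illustrative assumptions of mine, not from the talk.

```python
import secrets

# Toy group: p = 2q + 1 is a safe prime and 2 generates the
# order-q subgroup of Z_p^*, so every power of g has order dividing q.
p, q, g = 23, 11, 2

def cdh_instance():
    """Sample secret exponents a, b and publish (g^a, g^b); the CDH
    target g^(a*b) is what the adversary must compute."""
    a = secrets.randbelow(q)
    b = secrets.randbelow(q)
    public = (pow(g, a, p), pow(g, b, p))
    target = pow(g, a * b, p)
    return public, target

public, target = cdh_instance()
# The assumption: given only g and `public`, no polynomial-time
# adversary outputs `target` except with negligible probability.
```

In a real instantiation p would be a cryptographically large prime; the structure of the game is the point here.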
If you can solve the computational Diffie-Hellman problem, then you can break all of the assumptions that we consider here. Okay, we could also go in the other direction and look at the top. Typically when we introduce a new assumption, the first thing we do is to prove that it is secure in the generic group model. In other words, an adversary that only uses generic group operations cannot break the assumption. And that's sort of the minimal thing we need to prove about an assumption: if it can be broken by a generic group adversary, then you can just use those generic group operations to break the assumption. But this is not necessarily sufficient to guarantee the security of an assumption. So in the middle, we have all these assumptions we've made. Some may be stronger or weaker than others, and some of them may be possible to break, and some may not. And this is the class of assumptions that we're trying to study here, to see if we can find some sort of order in this zoo, I guess. And one particular question we asked ourselves is: is there some sort of standard-model assumption we could make here, an uber assumption, that would imply the security of everything here? Certainly if we trust the generic group model, and assume everything that is secure in the generic group model is secure, then that would imply that everything here is secure. But we also have an example by Dent that shows that at least some assumptions that are secure in the generic group model are actually insecure in the standard model. So instead of just relying on the generic group model, we would like to find some standard assumption that would imply everything else is secure. And what we get is not quite that, but almost. We identify two families of assumptions: the q-GDHE assumptions, that is, the generalized Diffie-Hellman exponent assumptions, and a new class of assumptions that we call simple fractional assumptions. And it turns out that if both of these assumptions hold, then everything here is actually secure.
And we can furthermore try to break these into classes: some assumptions will be implied by the q-GDHE assumption, and some will be implied by the q-simple fractional assumption. It turns out the two sides are a little different in shape and form and size, because the first is a very clean assumption and the second is not quite as clean an assumption. Okay, so I'll now try to be more precise about what I mean and what exactly it is that we achieve. First I want to define the kind of non-interactive computational assumption in a cyclic group that we are looking at. Let's start very generally, just with non-interactive computational assumptions. These are assumptions where we have some instance generator that takes a security parameter as input and outputs some public information and some private information, some state. And then we have a solution verifier that takes this and also a solution, and tries to decide whether this is a good solution or not. And this instance generator here interacts with an attacker: the instance generator generates the public information and the private state, and gives the public information to the adversary. Now the adversary tries to break the assumption and comes up with some solution, and then we use the solution verifier to check whether we accept that solution or not. And we say that this non-interactive computational assumption holds if all polynomial-time adversaries have a negligible chance of finding a solution. Okay, so what does this look like in a cyclic group? The instance generator will generate a cyclic group, and then it will generate some polynomials, actually fractions of polynomials, sample some random inputs to these polynomials, and use those in the exponents to generate a bunch of group elements that we give to the adversary. And we may also give some extra information to the adversary.
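The game structure just described can be sketched in a few lines. The function names and the toy instantiation below are my own placeholders, not notation from the paper; the point is only the information flow: the adversary sees the public output but never the private state.

```python
from typing import Any, Callable, Tuple

def run_game(gen: Callable[[int], Tuple[Any, Any]],
             adversary: Callable[[Any], Any],
             verify: Callable[[Any, Any, Any], bool],
             k: int) -> bool:
    public, state = gen(k)        # instance generator on security parameter k
    solution = adversary(public)  # the adversary never sees the private state
    return verify(public, state, solution)

# Toy (completely insecure) instantiation: "guess the secret".
def toy_gen(k: int) -> Tuple[int, int]:
    return k, k % 7               # public = k, private state = k mod 7

def toy_verify(public: int, state: int, solution: int) -> bool:
    return solution == state
```

An adversary that can recompute the state from the public information, such as `lambda pub: pub % 7` here, breaks this toy "assumption"; the assumption holds if every polynomial-time adversary wins only with negligible probability.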
We also assume that the adversary knows what these fractions of polynomials are. Okay, we allow n-variate polynomials of degree d here. And the solution verifier takes all this as input and then makes some basic checks. So let's see what happens here. A purported solution is another fractional polynomial, created by the adversary, and that defines the target element that the adversary is trying to compute. Then we also make some checks on this target polynomial that the adversary produces, namely that it should not be in the span of the original polynomials, and that is just to protect against generic operations: if it was in the span, then the adversary could just use generic operations to break the assumption. So this means that we're only looking at assumptions that are plausible in the generic group model. Okay, so let me give a couple of examples. In the computational Diffie-Hellman assumption, we get these three polynomials, and then the solution verifier will verify that the polynomial output by the attacker is actually x1 times x2. In other words, the attacker will be given g to the 1, g to the x1, and g to the x2, and will have to try to compute g to the x1 times x2. Okay, we can also model the q-SDH assumption this way. Here the attacker is given g to the 1, g to the x, up to g to the x to the q, and then it tries to come up with this fraction here, where the fraction is 1 divided by x plus c for some c chosen by the attacker. And it wins if it can compute g to the 1 over x plus c for this known c chosen by the attacker. And in general, I think almost any assumption I've seen in the literature falls into this framework, so it is quite general and captures the assumptions we know from protocols. Okay, so let me make some simplifications here. We say an assumption is simple if these denominator polynomials are 1.
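The verifier's span check mentioned above can be sketched concretely. Here polynomials are dicts mapping a monomial (an exponent tuple) to a coefficient; this representation and the function name are my own, not the paper's.

```python
from fractions import Fraction

def in_span(target, polys):
    """Decide by Gaussian elimination whether `target` is a linear
    combination of `polys` over the rationals."""
    monos = sorted({m for poly in polys + [target] for m in poly})
    # Augmented matrix: one row per monomial, one column per instance
    # polynomial, and the target as the last column.
    A = [[Fraction(poly.get(m, 0)) for poly in polys] +
         [Fraction(target.get(m, 0))] for m in monos]
    r = 0
    for c in range(len(polys)):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        pivval = A[r][c]
        A[r] = [v / pivval for v in A[r]]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    # A leftover nonzero entry in the target column means "not in span".
    return all(row[-1] == 0 for row in A[r:])

# CDH as a target assumption: instance polynomials 1, x1, x2;
# the target x1*x2 lies outside their span, so the check passes.
ONE, X1, X2, X1X2 = {(0, 0): 1}, {(1, 0): 1}, {(0, 1): 1}, {(1, 1): 1}
```

A target like 2·x1 + 3·x2 would be rejected by the verifier, since generic operations on g^x1 and g^x2 already compute it.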
In other words, for a simple assumption we have polynomials instead of fractions of polynomials in the public output. We say the assumption is univariate if there is just one x here instead of multiple variables. We say the assumption is polynomial if this s(x) here is 1; in other words, the adversary has a polynomial target. And otherwise we say it's a fractional assumption, if s(x) does not divide r(x). Okay, so those are the target assumptions that we're interested in and study. And what we get is this hierarchy of assumptions. This here represents all possible target assumptions you could imagine. Then we can restrict ourselves to the simple target assumptions, and it turns out that the simple target assumptions imply all target assumptions. So if you can break a target assumption, then there's also a simple target assumption that you can break. And the simple target assumptions in turn are implied by the univariate simple target assumptions. And these in turn are implied by the polynomial and the fractional assumptions. And these in turn are implied by the generalized Diffie-Hellman exponent assumption and the simple fractional assumption. So that's where we get our uber assumption family that implies the security of all the target assumptions. Okay, and then we can also ask ourselves what is implied by just the generalized Diffie-Hellman exponent assumption. And well, that's a significant part of the assumptions, the polynomial assumptions, and everything that's implied by those is implied by the generalized Diffie-Hellman exponent assumption by itself. And the same goes for the simple fractional assumption, which also implies a lot of assumptions just by itself. Okay, so let me just write down what these assumptions are. This was the generalized Diffie-Hellman exponent assumption, where you are given g to various powers of x, with one power missing in the middle, and you have to compute g to that missing power of x.
In the simple fractional assumption, you're given g, g to the x, up to g to the x to the q. And then the adversary has to output r(x) divided by s(x), with the degree of s larger than the degree of r, so this is a non-trivial fraction, and it has to compute this group element. So in other words, in the generalized Diffie-Hellman exponent assumption, the target the adversary has to compute is completely defined just by the instance itself, whereas in the simple fractional assumption, the attacker has some influence on what kind of target it wants to try to compute. A special case of the simple fractional assumptions is the q-SDH problem, where the attacker tries to compute g to the 1 over x plus c for some known c. And as far as we can tell from the literature, you could imagine all these fractional assumptions, but in reality what is used in practice is just the q-SDH assumption; we have not seen use of any other fractional assumptions. So here's another way to look at all these target assumptions. We could look at a sort of hierarchy: certainly, if you want to trust any target assumption, then you need to trust that the CDH problem is hard. And then you could say, okay, there are some assumptions that are implied by the 1-GDHE and 1-simple fractional assumptions, a larger class of assumptions implied by the 2-GDHE and 2-simple fractional assumptions, and so forth. So we get some sort of hierarchy of assumptions depending on how large we make this q here. Okay, so this is interesting. Now we know that all of the target assumptions are implied by some relatively clean uber assumptions. Then we can ask how these uber assumptions relate to one another. And in general, I mean, it's trivial to see this implication down here. It seems that these are strict implications, at least in the generic group model: given an oracle of this form, you still cannot break the assumptions above it.
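The q-SDH special case is concrete enough to sketch. Below is a toy check in the same small order-q subgroup of Z_23^* used earlier; the parameter names are my own. Note that the verifier never inverts anything in the exponent itself: it simply checks y^(x+c) == g using the private x and the attacker's chosen c.

```python
# Toy group: 2 generates the order-11 subgroup of Z_23^*.
p, q, g = 23, 11, 2

def sdh_solution(x: int, c: int) -> int:
    """What a successful attacker would output: g^(1/(x+c)), computed
    by inverting x+c modulo the group order q (needs x+c != 0 mod q)."""
    return pow(g, pow((x + c) % q, -1, q), p)

def verify_sdh(x: int, c: int, y: int) -> bool:
    """Solution verifier: accept y iff y^(x+c) == g."""
    return pow(y, (x + c) % q, p) == g
```

For example, `verify_sdh(3, 5, sdh_solution(3, 5))` holds, while a bogus answer such as `y = g` is rejected. (Three-argument `pow` with exponent `-1` computes a modular inverse in Python 3.8+.)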
If we go down to the bottom here, then the 1-GDHE assumption, the generalized Diffie-Hellman exponent assumption for q equal to 1, is actually equivalent to the computational Diffie-Hellman assumption. We do not know anything about this relationship over here. And then you could also ask, could we imagine that one of these assumptions by itself was an uber assumption? So could it be that the generalized Diffie-Hellman exponent assumption was an uber assumption that implied all the simple fractional assumptions, or the other way around? And it seems that that's not the case. At least we can prove that the simple fractional assumptions do not by themselves imply the generalized Diffie-Hellman exponent assumptions. It's an interesting open problem going the other way around, and we conjecture that you would also have this kind of non-implication in the other direction, but this is not something that we were able to prove. Okay, so what happens in an asymmetric bilinear group setting? Now we have a bilinear group generator that generates three different groups, and we have a bilinear map that maps the source groups into the target group. And the same pattern emerges. We again get two uber assumptions: in the source groups we have a variant of the generalized Diffie-Hellman exponent assumption and a variant of the simple fractional assumption, and that again gets us this hierarchy that implies all the non-interactive computational assumptions you could formulate in the source groups. And in the target group, we get a similar picture, except that we have to replace this bilinear generalized Diffie-Hellman exponent assumption with what we call a bilinear gap assumption, which is a bit more complicated, so not quite as clean as the GDHE assumption. Okay, so there are still a lot of open problems here. This is a first step, and we hope that we can analyze many more assumptions using these techniques.
As I said, we know that the simple fractional assumptions do not imply the generalized Diffie-Hellman exponent assumptions, but we don't know anything in the other direction. We don't even know, for any large q, whether the q-GDHE assumption does or does not imply the strong Diffie-Hellman assumption. These simple fractional assumptions are not quite as simple as one could wish, because the adversary can output any choice of polynomials r and s, so it would be interesting if there's more structure to discover in those assumptions. As I said, the bilinear gap assumptions are not quite as clean, so maybe there's some work one can do to simplify those assumptions. I should also mention here that the reductions that we have found are not tight, so you do get non-tight reductions: the security parameters do change when you go to the generalized Diffie-Hellman exponent assumptions and the simple fractional assumptions and use them as uber assumptions for the target assumptions. And then of course there's the question of how far we can go in the hierarchy of computational assumptions. What we've studied are the non-interactive assumptions, and I guess you can classify all the assumptions you can make in cyclic groups in various layers. At the bottom layer, I'll put assumptions where the attacker has to compute some specific group element which is defined by the public instance. Above that I'll put a layer where the attacker has to compute some group element which is defined by the public instance and the attacker's output, so the attacker collaborates with the instance generator to define the specific group element that the attacker has to compute. And those are the two types of assumptions that are incorporated in target assumptions and that we cover with this framework. But then you could also ask: what if the attacker instead has to output several group elements that have some sort of relationship to each other? And that's not covered by our framework.
And it's sort of not covered for a provable reason, because among those classes of assumptions would be knowledge-of-exponent assumptions, which are not falsifiable, and we only work with falsifiable assumptions here: the simple fractional assumptions and the generalized Diffie-Hellman exponent assumptions are falsifiable. So we would not expect to be able to cover those types of assumptions here. And you could go beyond that and ask about interactive assumptions, where the attacker and the instance generator actually communicate back and forth to generate the challenge that the attacker has to compute. And for those we don't know anything, so it would be interesting to see if there's a similar structure in those assumptions. Okay, so let me now return to the conclusions. I said that we wanted to give some advice to cryptographers and cryptanalysts, so what is that advice? Well, what we see is that in the cyclic group setting the generalized Diffie-Hellman exponent assumptions and the simple fractional assumptions imply the security of all the target assumptions. So we view these as sort of canary-in-the-coal-mine assumptions, right? If you make any target assumption, then you're fine as long as these assumptions are not broken. And if you want extra safety, maybe you want to avoid the simple fractional assumption case; in that case maybe you should just work with assumptions that are of a polynomial nature, in other words implied just by the generalized Diffie-Hellman exponent assumption, which is already a pretty rich class of assumptions. Okay, what if you're a cryptanalyst? If you do cryptanalysis, we would recommend these specific assumptions. We wanted to identify some specific targets that would be of interest to attack, and we think these are good targets. Certainly the generalized Diffie-Hellman exponent assumption is pretty clean and easy to formulate, and hopefully also easy to sit down and start trying to break.
And somehow we feel that in a lot of the cryptanalysis work going on, it's natural that, you know, you get more famous if you solve the discrete logarithm problem. But that's also a harder problem to break, and maybe you would want to tackle an easier problem first. And somehow I also think it would be morally correct to tackle the easier problems first: let's first try to attack these canary-in-the-coal-mine assumptions, and if they fall, then people have a warning. Maybe some of their assumptions still hold even though these ones are broken, but at least they have some warning that something untoward is going on and we had better start moving away. And with that I'll conclude my talk. Thank you.

Are there any questions?

So do you have any sort of size bound on the q in these assumptions? Does it need to be polynomial size, or can it grow beyond polynomial size?

Yeah, sure. So let me just see if I can go back to the definition of the target assumptions. So basically what happens is that it depends on the d and m and n that are used to define the target assumption, and the loss is some polynomial in d, m and n. I don't remember exactly what the polynomial is, some low-degree polynomial in those, but there's definitely a loss of tightness in the reduction.

The reason I ask is because we know, for example, there's this Cheon attack that reduces the complexity when you're given powers of x, right, g to the x to the i. There are some attacks that are non-generic, and they require a special form of input.

Right.

So what about interactive assumptions? Can you make any sort of statement about those?

I have no clue whatsoever about interactive assumptions. It's a very interesting question, but not something that we've looked carefully at, and I really don't know what kind of pattern or lack of pattern will emerge.

Okay. So you briefly mentioned knowledge assumptions; do you have no hope there, or are you going to have some general framework for those?
So I definitely think there's hope of finding some general framework for knowledge-of-exponent assumptions. But it's very clear that they're not going to be implied by the generalized Diffie-Hellman exponent assumption or the simple fractional assumption, because those are efficiently falsifiable assumptions and the knowledge-of-exponent assumptions are not falsifiable. So you would have to have some different and stronger uber assumption, if an uber assumption exists for the knowledge-of-exponent assumptions. But it's definitely an interesting question whether that is the case or not.

Any last question? We have three minutes. Okay, so that's the end.