So our next talk is Déjà Q All Over Again: Tighter and Broader Reductions of q-Type Assumptions. The speaker is Mary Maller. Mary, please.

Can people hear me okay? Okay. This is a joint piece of work with Melissa Chase and Sarah Meiklejohn, and our general aim was to analyze the underlying assumptions that cryptographers use to prove things secure. In security reductions, what you are generally trying to do is to say that a scheme holds because, if it does not, then the underlying security assumption breaks. Traditionally the underlying assumption tended to be either RSA or discrete log. However, as cryptographers have become more adventurous with the sorts of schemes we want to cover, we have also become more adventurous with the sorts of assumptions we are willing to use, and so in order to remain confident in a security proof it is quite important to analyze these underlying assumptions.

In our work we show that certain types of q-type assumptions are implied by subgroup hiding. q-type assumptions are dynamic assumptions, in that their security depends not only on the security parameter but also on some value q, which in the schemes that use them is typically related to the number of oracle queries. Subgroup hiding, on the other hand, is a static assumption: security depends solely on the security parameter. The fact that these two things are related is therefore quite a nice result.

One motivational example that I am going to use is broadcast encryption. The BGW broadcast encryption scheme is based on an assumption called the q-BDHE assumption, which is something our framework can cover. It is essentially a fill-in-the-gap assumption: given g, g^α, g^(α²), and all of the other powers of α with the (q+1)-st missing, find the element corresponding to α^(q+1) in the target group.

Our work was essentially inspired by the original Déjà Q paper from a couple of years ago, by Melissa Chase and Sarah Meiklejohn. They managed to show that certain classes of q-type assumptions are implied by subgroup hiding and parameter hiding when instantiated in composite-order asymmetric bilinear groups. The way they did this, they obtained a bound: they showed that the probability that an adversary breaks a q-type assumption is less than order q times the probability that it breaks subgroup hiding. Note that this is a loose upper bound. Roughly speaking, these are the types of q-type assumptions that they can cover. Note that they cannot cover the situation where the adversary is given information in both source groups and asked to either compute or decide something in the target group, and this is specifically the setting we are trying to look at.

We can cover, for example, the q-BDHE assumption. We cannot cover all of these assumptions: we require there to be some input given to the adversary which appears in the target-group element it is challenged on, and about which the adversary has no extra information. We also manage to get a tighter upper bound. When we instantiate the q-type assumptions in composite-order groups with an extra subgroup, we can show that the probability that an adversary breaks the q-type assumption is less than order log q, rather than order q, times the probability that the adversary breaks subgroup hiding. As far as we know, we are the first to get any kind of tightness in this sort of setting. In symbols, the two headline statements look roughly as follows.
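To pin this down, here is one standard formulation of the q-BDHE problem together with the shape of the old and new reduction bounds. The precise statements in the paper may differ in minor details, and the advantage notation Adv is my own shorthand, so treat this as a sketch.

```latex
% One common formulation of the q-BDHE problem (fill in the gap at q+1):
\[
\text{Given } g,\; h,\; g^{\alpha},\, g^{\alpha^2},\, \dots,\, g^{\alpha^{q}},\;
g^{\alpha^{q+2}},\, \dots,\, g^{\alpha^{2q}},
\quad\text{compute } e(g, h)^{\alpha^{q+1}} .
\]
% The shape of the two reduction bounds discussed in the talk:
\[
\mathrm{Adv}^{q\text{-type}}(\mathcal{A}) \;\le\; O(q) \cdot \mathrm{Adv}^{\mathrm{SGH}}(\mathcal{B})
\;\;\text{(Chase--Meiklejohn)},
\qquad
\mathrm{Adv}^{q\text{-type}}(\mathcal{A}) \;\le\; O(\log q) \cdot \mathrm{Adv}^{\mathrm{SGH}}(\mathcal{B})
\;\;\text{(this work)}.
\]
```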
This is my plan. Most of the novel content will be in the tight reduction section; the next bit is just definitions and assumptions, really.

Okay, bilinear groups. These are where you have three elliptic-curve groups: two source groups and one target group. The asymmetric setting is the case where the two source groups G and H are different; the symmetric setting is where they are the same. We also need some kind of bilinear map, meaning that e(g^a, h^b) = e(g, h)^(ab).

Subgroup hiding we can model as a game that an adversary plays. The adversary is either given an element that is generated using both subgroup generators, or it is given an element that is generated using just one subgroup generator. Just by looking at this element, the adversary should have no idea which subgroup it is in; it cannot figure it out with better than negligible probability over guessing. If it does, then it wins the game. A toy sketch of the shape of this game follows.
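As a rough illustration of the shape of this game, not of a secure instantiation, here is a toy sketch in Python. All the concrete numbers and helper names here are made up for illustration: real instantiations use composite-order elliptic-curve groups whose order has large, secret prime factors.

```python
# Toy sketch of the subgroup-hiding game.  Assumed toy group: the
# order-143 subgroup of Z_859^*, where 143 = 11 * 13 and 859 is prime.
# This is NOT secure: p1 and p2 must be large and the factorization of
# N = p1 * p2 must stay hidden for subgroup hiding to be plausible.
import random

P = 859            # prime modulus; P - 1 = 6 * 11 * 13
p1, p2 = 11, 13    # the two subgroup orders; N = p1 * p2 = 143

def generator_of_order(q):
    """Sample a generator of the order-q subgroup of Z_P^* (q prime)."""
    while True:
        x = random.randrange(2, P)
        g = pow(x, (P - 1) // q, P)
        if g != 1:
            return g

def challenge(b):
    """b = 0: element with components in both subgroups; b = 1: G1 only."""
    g1, g2 = generator_of_order(p1), generator_of_order(p2)
    a1 = random.randrange(1, p1)
    if b == 0:
        a2 = random.randrange(1, p2)
        return (pow(g1, a1, P) * pow(g2, a2, P)) % P
    return pow(g1, a1, P)

# The adversary sees only the challenge element and must guess b;
# subgroup hiding says it cannot do noticeably better than 1/2.
# Here the game is trivially breakable, precisely because p1 is public
# and tiny: raising the challenge to the power p1 kills the G1
# component, so the result is 1 exactly when there is no G2 component.
T = challenge(random.randrange(2))
print("component in G2?", pow(T, p1, P) != 1)
```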
The other thing we use is parameter hiding, and this is not an assumption but a statistical property of the group. In asymmetric composite-order, or even just composite-order, bilinear groups it is implied by the Chinese remainder theorem. In this setting the adversary is given a component in which the exponents of the subgroup generators are either correlated or they are not, and we can prove, using the Chinese remainder theorem, that there is no adversary that can win this game: all it can do is guess. This is the notation we will use later to denote when the elements are correlated and when they are not.

Okay, so the sorts of q-type assumptions we are looking into are used in the literature quite frequently when people want to prove things secure against lunchtime attacks. The adversary has access to some kind of decryption oracle, but it cannot query that oracle on the element it is going to be challenged on later. This corresponds exactly to the sorts of assumptions we can cover: if there is some component, in the H group in our case, about which the adversary has no extra information, then if the adversary is given a target-group element and asked to either decide or compute it, it should not be able to do so, provided that subgroup hiding holds.

The aim of our reduction is that we start by modelling the q-type assumption as a game, and then we use subgroup hiding and parameter hiding over and over again until we get to a situation where we can say that the game is impossible: there is no adversary that can solve it. We start with a game, the q-type assumption game, where the adversary is given a lot of inputs, and then it is given either a target-group element which is a meaningful function of these inputs or a target-group element which is chosen completely at random. It wins if it has better than negligible advantage at deciding whether the element is meaningful or random. In the game we transition to, the adversary is given a component that is distributed uniformly at random, and we can prove that this is actually the case using techniques from Chase and Meiklejohn's paper from a couple of years ago. In the other case it is given an element which is genuinely random. So the adversary has to decide whether the component is random or random, which we can say it cannot do, because both are random.

In Chase and Meiklejohn's paper, the loose reduction, they have two subgroups, and they first transition to a game where the two subgroup generators are uncorrelated. They use subgroup hiding to make a shadow copy of the elements from the first subgroup in the second subgroup. They use parameter hiding to change the randomness in this extra element, and this is then equivalent to there being two layers of randomness in the second subgroup. When they do this again there will be three layers of randomness, and eventually, once they have done it q + 2 times, they reach a situation where they can argue that the components the adversary is being given are distributed at random.

In our paper we take advantage of the fact that there is an extra subgroup in order to do this in a logarithmic number of rounds. The way we do this is we first transition to a situation where the adversary's element has something in G1 and two layers of randomness in G2. We use subgroup hiding to make a shadow copy of the second subgroup in a third subgroup. We then use parameter hiding to independently change both of these layers of randomness, and then we fold the two layers back into the second subgroup, meaning that we now have four layers of randomness in the second subgroup. When we do this again we will have eight layers of randomness in the second subgroup, which means we reach the point where we have sufficient randomness to prove that the component is distributed uniformly at random after log₂(q) + 2 rounds, as opposed to q + 2 rounds. The back-of-the-envelope count below illustrates the difference.
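As a rough illustration of where the speed-up comes from, here is a small sketch comparing the hop counts of the two strategies. The additive constants are my guess from the talk (q + 2 versus roughly log₂(q) + 2), so treat the exact numbers as indicative only.

```python
# Hypothetical hop-count comparison: one new layer of randomness per
# hop (original Deja Q) versus doubling the layers per hop (this
# work's shadow-copy / parameter-hide / fold step in a third subgroup).

def hops_linear(q):
    # each hop adds one layer of randomness until enough are reached
    return q + 2

def hops_folding(q):
    # each hop doubles the layers: 1 -> 2 -> 4 -> ... until >= q
    hops, layers = 0, 1
    while layers < q:
        layers *= 2
        hops += 1
    return hops + 2

for q in (8, 64, 1024):
    print(q, hops_linear(q), hops_folding(q))   # e.g. q=1024: 1026 vs 12
```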
In our final result, the adversary is given some inputs in G and some inputs in H, and a challenge in the target group. Note that the functions in the exponents of the G elements need to be linearly independent from the function the adversary is challenged on. The advantage the adversary has at distinguishing the challenge element from random is then less than some function of log q times the advantage it has at breaking subgroup hiding. And this is our slide from earlier.

So far we have been looking at asymmetric schemes. What about the symmetric case? Our motivational example, broadcast encryption, was originally given in the symmetric setting, but the assumption that we prove secure is the asymmetric version of the q-BDHE assumption. So why am I talking about BGW? Well, it is because we also show how to cover symmetric assumptions in the case where you have an extra subgroup. We need the extra subgroup in order to add some randomness into the components we are giving to the adversary, so that the adversary cannot trivially break subgroup hiding. There is a symmetry in composite-order groups such that, if we gave the adversary an element in G1 and an element in H2 and it paired them together, it would always get one. So when we are trying to get the reduction to work, we have to be very careful to avoid these situations; the one-line calculation below shows where this orthogonality comes from.
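The orthogonality that enables this trivial distinguishing is a one-line consequence of bilinearity in a composite-order group. This is my reconstruction of the calculation, assuming a generator g of order N = p₁p₂ so that the target group also has order N:

```latex
% Subgroups of coprime order pair to the identity: with
% g_1 = g^{a p_2} of order p_1 and h_2 = g^{b p_1} of order p_2,
\[
e(g_1, h_2) \;=\; e\!\left(g^{a p_2},\, g^{b p_1}\right)
\;=\; e(g, g)^{\,a b\, p_1 p_2}
\;=\; e(g, g)^{\,a b\, N} \;=\; 1 .
\]
```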
Alternatively, we also demonstrate how you can use techniques from Abe et al. to move your symmetric scheme into the asymmetric setting, so that you can directly use our result with the three subgroups. Essentially, this technique requires dependency graphs, which map out, for each of the elements in the scheme and the reduction, which side of the pairing it appears on. So, for instance, if you had an element in the public key that only ever appeared on one side of the pairing, you could put it in group G; but if it appeared on both sides of the pairing at some point, either in the scheme or in the reduction, then you would have to make the public key include the element in both G and in H. For the schemes that we cover, we still need this challenge component, which is denoted g_C here, and we need this component to appear on only one side of the pairing, which it does in the four schemes that we translated so that our framework could cover them. Another example of a scheme we can cover is an identity-based key encapsulation mechanism. We also cover an attribute-based encryption scheme, a selectively secure one rather than a fully secure one, and a hierarchical identity-based encryption scheme.

I am pretty much ready to wrap up. I should say that we have, of course, not solved assumptions in cryptography, and we have not even solved q-type assumptions in cryptography. There are still some open problems, and I will mention three of them. The first thing, which I am sure you all noticed during the talk, is that all of the examples we covered are in composite-order groups. It would be nice to get them in prime-order groups, but we rely on parameter hiding, and we need it not just for linear functions but for more general polynomials, and currently this has not been done in prime-order groups. If someone could figure out how to do this, then our results would directly translate over. Another type of assumption is non-falsifiable assumptions, which are often used to build succinct zero-knowledge arguments. So far these have only ever been proven secure in the generic group model, so a more formal analysis of them in a more standard model would be extremely helpful. And lastly, there was still a red box in the table from earlier: we have not managed to say anything about the situation where the adversary is given elements in the source groups and asked to decide about an element in the source group. So that is still open. Thank you very much for listening, everyone.

We have time for questions and comments.

Question: You nested your reduction to reduce the security loss from q to log q. Is there any hope of reducing it further, from log q to log log q?

Answer: We have something which is almost tight, but we still need the log q. Can you get down to log log q? Good question; I don't know. We could try, but this is not what we do: we only get to log q.

Any other questions? Comments?

Question: One of your open problems asks how hard the q-type assumptions would be in prime-order groups. On the one hand, we have ways to analyze that using the generic group model. For instance, in a 2004 paper we showed that the q-SDH assumption has a certain cube-root type security, which was then actually shown to be tight by generic attacks against those assumptions, which achieved exactly the bound that we predicted. So my question is: what else are you asking in terms of the hardness of those prime-order q-type assumptions, without actually getting into breaking concrete curves?

Answer: We are trying to get something which is more general than just covering one q-type assumption; we want to cover lots of q-type assumptions.

Question: But the generic group model analysis works very well for a very broad class of assumptions.

Answer: We are not analyzing things in the generic group model here; we are analyzing them in the standard model. The generic group model is another way to do it, and it might be the preferable way to do it at the moment in terms of prime-order groups, because we do need parameter hiding.

Any other questions, comments? If not, let's thank Mary. Thank you.