Welcome to the last session of EuroCrypt. The first talk is "Automated Unbounded Analysis of Cryptographic Constructions in the Generic Group Model" by Miguel Ambrona, Gilles Barthe, and Benedikt Schmidt, and Miguel will give the talk.

Hi. Thank you for the introduction. OK, I would like to start by talking about our goal. We are interested in building a tool that, on input a cryptographic construction and a security notion, automatically performs an analysis and outputs either that the scheme is secure, in which case we also want a proof of security, or that the scheme is insecure. Of course, we cannot hope to solve cryptography and build a perfect tool that works for every scheme, so we will focus on a particular framework that I will describe in a moment.

But first, let's talk about the motivation. In my opinion, there are two reasons to build automatic tools. The first is that pen-and-paper proofs are tedious: they are really hard to write, difficult to review, and in fact they can contain errors. In practice, there are examples of published papers with invalid proofs; here I give a reference to this paper because in it you can find an invalid proof in the generic group model. The second reason is what we call synthesis. With automatic tools, we can generate hundreds of schemes and analyze them automatically, and then, among those that are secure, select the best ones in terms of simplicity or efficiency.

In this work, we focus on computational experiments instead of decisional experiments. In particular, there is an adversary that has to compute some value, and for that he has access to an oracle that performs some hard computation for him. The main contribution of this work is that we can prove security even if this interaction with the oracle is repeated an unbounded number of times. We also focus on constructions defined over bilinear groups, and the proofs will be in the generic group model.
If we look at related work, there are tools that do a similar analysis, like the Generic Group Analyzer, which allows you to prove the security of assumptions, and an extension of it that can also deal with interactive assumptions and constructions; in fact, the latter was used for synthesis of structure-preserving signatures. But these tools have limitations. The first cannot deal with adaptive adversaries. That is not a problem in the second tool, but the second tool can only produce proofs for a fixed number of queries, so the interaction with the oracle has to be bounded. And in practice, if you want the algorithm to terminate, this number has to be small; depending on the scheme, it can be between two and five queries. Of course, this is far from real security: in practice, we want to reach unbounded security. So our contributions are methods that can both deal with adaptive adversaries and produce proofs for an arbitrary number of queries. We developed methods to do this automatically, and we also implemented them; you can find the implementation on GitHub, it is open source.

Now let's talk a bit about bilinear groups. A bilinear group consists of three groups of prime order. The first two are called the source groups, and the third is called the target group. We also have a bilinear map with the property e(g1^a, g2^b) = gT^(a*b), which, roughly speaking, allows us to compute one multiplication in the exponent, with the limitation that the result is an element of GT. This is useful, for example, to build signature schemes where the scheme is defined over G1 and G2, and the map is used to check the verification equation, which is done in GT.

I also want to talk a bit about the generic group model. This is an idealized model where the adversary is assumed not to be able to exploit the representation of the group. This is enforced by giving him access to handles instead of group elements. I'm going to show you an example.
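Before the example, the bilinearity property just mentioned can be illustrated with a toy sketch. Here each group element is represented directly by its exponent modulo a small prime, so the "pairing" simply multiplies exponents. This is purely illustrative, not a real or secure pairing; the prime and all names are our own choices.

```python
# Toy model of a bilinear group: an element g1^a is represented by the
# integer a modulo the group order P. Not a real pairing; for intuition only.

P = 101  # small prime group order, chosen arbitrarily for the demo

def pair(a: int, b: int) -> int:
    """e(g1^a, g2^b) = gT^(a*b): the pairing multiplies exponents."""
    return (a * b) % P

# Bilinearity: e(g1^(x*a), g2^b) equals e(g1^a, g2^b)^x, i.e. gT^(x*a*b).
x, a, b = 7, 13, 29
assert pair(x * a % P, b) == (x * pair(a, b)) % P
```

The key limitation described in the talk is visible here: `pair` moves you into the target group, so the multiplication in the exponent can be used only once.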
On the left side, we have the adversary; on the right side, an oracle that implements a group. The adversary may ask questions like, "give me a generator." The oracle will pick a generator of the group, create a handle pointing to this group element, and the adversary will only be given the handle. Now he can ask further questions like, "give me handle one times itself." The oracle will see that handle one points to g, compute g squared, create a new handle to this element, and return that handle. In this way, the adversary can keep computing, but only through handles. We also allow equality checking: if two handles point to the same element, the equality check succeeds.

OK, so this is not the standard model, but there are still good reasons to work in the generic group model. One of them is that it gives minimal requirements for security: everything should be secure in the generic group model, because otherwise there exists an algebraic attack and you should not use that construction. Another reason is that for some groups, like elliptic curves, the best known algorithms are generic. And also, in the generic group model we can prove lower bounds and build optimal constructions, which is not that easy to do in the standard model.

I'm going to say only a little about this. The idea in automated proofs in the generic group model is that the security experiment is translated into algebraic conditions, and these conditions can be checked automatically. The important thing is that we can really derive security by analyzing these conditions; we don't need to rely on additional assumptions as in the standard model. In fact, in the generic group model, you can directly prove security by analyzing these conditions. As I was saying, the adversary's strategy can be seen as the choice of some parameters.
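The handle-based interaction just described can be sketched as follows. This is a minimal illustration assuming the group is modeled additively by exponents modulo a prime; the class and method names are our own invention, not the tool's implementation.

```python
# Minimal sketch of a generic-group oracle: the adversary only ever sees
# opaque handles (here, plain indices), never the group elements themselves.

class GenericGroupOracle:
    def __init__(self, p: int):
        self.p = p
        self.elems = []  # handle -> hidden group element (exponent mod p)

    def _handle(self, x: int) -> int:
        self.elems.append(x % self.p)
        return len(self.elems) - 1  # the handle is just an index

    def generator(self) -> int:
        return self._handle(1)  # handle to the generator g

    def mul(self, h1: int, h2: int) -> int:
        # group operation on the hidden elements; returns a fresh handle
        return self._handle(self.elems[h1] + self.elems[h2])

    def eq(self, h1: int, h2: int) -> bool:
        # equality checking: do two handles point to the same element?
        return self.elems[h1] == self.elems[h2]

oracle = GenericGroupOracle(p=101)
g = oracle.generator()
g2 = oracle.mul(g, g)      # "give me handle one times itself" -> g squared
g4 = oracle.mul(g2, g2)
assert not oracle.eq(g, g2)
assert oracle.eq(g4, oracle.mul(oracle.mul(g, g2), g))
```

Because the adversary only manipulates handles, any winning strategy must be algebraic, which is exactly what lets the security experiment be translated into the algebraic conditions discussed next.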
And if these parameters satisfy a system of polynomial equations, then the scheme is insecure: there exists an attack. On the other hand, if no choice of parameters can satisfy all the equations, the scheme is secure.

Just to compare previous work with ours: in previous work, the number of queries was fixed, so they had a fixed-size system of equations, but with an exponential blow-up in the size of the system, and they could analyze the system using SMT solvers and computer algebra systems. With our approach, we want to reach unbounded security, and for this we need a different approach: we have a grammar that includes these "big ops", like summations, universal quantification, and existential quantification, and to deal with them we need to define our own rules to simplify the system.

I want to give an example of a scheme that can be analyzed with our tool, and I'm going to show the security experiment as well. So let's assume we have a bilinear map. We sample two elements; these are going to be the secret key, and we feed the adversary with the public key. Now the adversary has access to a signing oracle that, on input a message, computes a signature. This is done as follows: the oracle samples some randomness and returns three group elements corresponding to the signature. This step can be repeated several times, in fact an unbounded number of times. Then the adversary has to come up with a forgery, which is a message and a signature consisting of three group elements. The adversary wins if this forgery satisfies the verification equations, and we also require this condition here: the forgery message has to be different from all the messages sent to the oracle.

So this is the security experiment, and as I said, we can translate it into algebraic conditions. I just want to give the intuition of how this can be done.
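The experiment just described has a generic shape that can be sketched in code: unbounded signing queries, then a forgery on a fresh message. The scheme below is a deliberately insecure dummy placeholder (the real scheme in the talk returns three group elements and is not reproduced here); only the experiment skeleton is the point.

```python
import secrets

def experiment(adversary, keygen, sign, verify):
    """Generic unforgeability experiment: the adversary may query the
    signing oracle any number of times, then must forge on a message
    it never queried."""
    sk, pk = keygen()
    queried = set()

    def sign_oracle(msg):
        queried.add(msg)
        return sign(sk, msg)

    msg, sig = adversary(pk, sign_oracle)
    return msg not in queried and verify(pk, msg, sig)

# Dummy, completely insecure placeholder scheme, just to run the skeleton:
def keygen():
    sk = secrets.randbelow(101)
    return sk, sk  # pk reveals sk; only for this toy demo

def sign(sk, msg):
    return (hash(msg) + sk) % 101

def verify(pk, msg, sig):
    return sig == (hash(msg) + pk) % 101

# An adversary that trivially wins against the toy scheme, since pk = sk:
def adversary(pk, sign_oracle):
    return "fresh", (hash("fresh") + pk) % 101

assert experiment(adversary, keygen, sign, verify)
```

The tool's job is precisely to decide whether any adversary can make this experiment return true for the real scheme, for any number of oracle queries.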
So first, we look at the elements that the adversary knows in G1, those in yellow, and the elements he knows in G2, in green. He can use elements in G1 to produce, for example, this element of the forgery, because this one has to be in G1, and he can use the green elements to produce the others. This is done as a linear combination of the known elements, where he can choose these alpha parameters here; and the same for the other elements of the signature, this time only using elements in G2.

Now, as I said, for this to be a forgery, it needs to satisfy the verification equations, and in particular this one here. This is just the same condition expressed in terms of the discrete logarithms, and it can be translated into this equation here. I want to point out that we have parameters that the adversary can choose, we have variables, and this equality must hold for every value of the variables. Then we have a special term here which is neither a parameter nor a variable: because the adversary is adaptive, it is in fact a polynomial, but the shape of this polynomial depends on the adversary's strategy. This will be a problem later, as I'll show you.

So the idea is that from the security experiment we go to winning constraints, and we have a theorem that says that if the winning constraints cannot be satisfied, then the security experiment is hard and the scheme is secure. Now, what we do is build methods to automatically analyze these winning constraints. This is the grammar that we are using; I show it only to point out that we need the big ops: summation, universal quantification, and existential quantification.

I'm going to talk a bit about the constraint-solving rules. The idea is that we start from a system of equations and apply a rule that splits our system into smaller systems, and we want the rule to be sound.
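The requirement above, that the equality must hold for every value of the variables, is a polynomial identity. As a quick aside, such identities can be tested probabilistically by random evaluation, in the spirit of the Schwartz-Zippel lemma; this is only an illustration of the requirement, not the symbolic analysis the tool actually performs.

```python
import random

P = 2**31 - 1  # a large prime, so random evaluation rarely gives false positives

def polys_equal(f, g, nvars: int, trials: int = 20) -> bool:
    """Probabilistic identity test: f and g are callables taking a list
    of field elements; equal polynomials agree at every random point."""
    for _ in range(trials):
        pt = [random.randrange(P) for _ in range(nvars)]
        if f(pt) % P != g(pt) % P:
            return False
    return True

# (v + w)^2 == v^2 + 2*v*w + w^2 holds identically...
assert polys_equal(lambda x: (x[0] + x[1]) ** 2,
                   lambda x: x[0]**2 + 2*x[0]*x[1] + x[1]**2, 2)
# ...but (v + w)^2 == v^2 + w^2 does not.
assert not polys_equal(lambda x: (x[0] + x[1]) ** 2,
                       lambda x: x[0]**2 + x[1]**2, 2)
```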
Soundness means that if this system here has a solution, then at least one of the resulting systems has a solution. I also want to point out that these rules need not split into several cases; they may go from one system to another, and we also consider the case where they go to the empty set, which for us is a contradiction. That is our goal: we want to start from a system of equations and reach the empty set. So the proof will start from a system of constraints and consist of a proof tree where we split every system into smaller ones until we find contradictions. Contradictions for us are equations of one of the two forms shown here, such as 0 = 1. The idea is that if we can find a contradiction at every leaf node, we know that the original system has no solution, and this is a proof of security.

Now I will explain the most important rules in our work. We have equivalences between equations, which serve only to bring every system into our normal form. Then we have the Gröbner-basis simplification, which is really important: it allows us to simplify systems of equations using the relations between the equations. But as you can see, we cannot apply Gröbner bases directly to this thing here, because of the big ops. To handle this, we make an abstraction, in several steps; I'm going to show you just the intuition. The first step is to extract a common universal quantification and to permute some of the indices; for example, this equation here has been permuted, and we also have this one here. Then we create a translation into new variables, and after this translation we have a system of equations that can be analyzed with our Gröbner-basis algorithm and reduced to a simpler system. Finally, we undo all these steps, going back to a system that is equivalent to the original one, but simpler, because we have simplified things along the way.
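The proof-search skeleton just described can be sketched as follows: a sound rule maps one constraint system to a list of subsystems, and the search succeeds when every branch bottoms out in a contradiction such as 0 = 1. The rule below is a trivial placeholder over linear equations a*x + b = 0 in one unknown; the tool's real rules (Gröbner-basis simplification, case distinctions, and so on) are far richer.

```python
from fractions import Fraction

def is_contradiction(system):
    # an equation 0*x + b = 0 with b != 0 can never hold
    return any(a == 0 and b != 0 for (a, b) in system)

def substitute_rule(system):
    """If some equation pins down x, substitute it everywhere.
    Sound: any solution of the input also solves the output branch."""
    for (a, b) in system:
        if a != 0:
            x = Fraction(-b, a)
            rest = [(c, d) for (c, d) in system if (c, d) != (a, b)]
            return [frozenset((0, c * x + d) for (c, d) in rest)]
    return None  # rule does not apply

def prove_unsat(system, rules, depth=10):
    """True if every branch of the proof tree ends in a contradiction,
    i.e. the system provably has no solution."""
    if is_contradiction(system):
        return True
    if depth == 0:
        return False  # give up: the proof search is incomplete
    for rule in rules:
        branches = rule(system)
        if branches is not None:
            # refuting all branches of a sound rule refutes the original
            return all(prove_unsat(b, rules, depth - 1) for b in branches)
    return False

# x = 2 and x = 3 together are unsatisfiable: x - 2 = 0, x - 3 = 0
assert prove_unsat(frozenset({(1, -2), (1, -3)}), [substitute_rule])
# x = 2 alone is satisfiable, so no contradiction is found
assert not prove_unsat(frozenset({(1, -2)}), [substitute_rule])
```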
In practice, this Gröbner-basis simplification is more complicated, because we also have to simplify inside some equations, but this gives the intuition of how it works.

We also have coefficient abstraction. This rule is really useful: it allows us to derive new equations. For example, if we have an equation like this one, which is a polynomial in the pink variables, we know that the coefficient of every monomial on the left-hand side has to be equal to the coefficient of the same monomial on the right-hand side. So, for example, the coefficient of 1 on the left-hand side, which is alpha 0, has to be equal to the coefficient of 1 on the right-hand side, which is beta 0. Thanks to this rule, we can add new equations, and we can do the same for more coefficients.

But I'm lying a bit here, because there is this term that, as I said before, is neither a variable nor a parameter: it is a polynomial, and in fact its shape is unknown. If one of the monomials of this polynomial is 1 over v, then we will have a cancellation here, and the right-hand side will have an additional term for the monomial 1. So the nice equation that we had before is not that nice anymore, because we have this ugly term that we don't know how to quantify. But there is something we can do, which is to prove that the monomial 1 over v cannot appear in the polynomial Mi. This can be done by solving an integer linear system, and this step is done with SMT solvers. If we succeed, this term disappears, and we again have the nice equation, which can be combined with the Gröbner-basis simplification to simplify the system further.

The last rule I want to talk about is the case distinction. This is done on parameters. Assume we have a constraint system depending on this parameter rho i. We split it into two cases, where each case has a new equation.
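Before continuing with the case distinction, the coefficient-abstraction rule just described can be sketched concretely. We represent each side of the equation as a map from monomials (tuples of exponents in the variables) to symbolic coefficients, here just strings naming parameter expressions; the representation is our simplification, not the tool's data structures.

```python
# Coefficient abstraction: if two polynomials in the variables are equal
# for all variable values, the coefficients of each monomial must match.

def coefficient_abstraction(lhs, rhs):
    """Derive one parameter equation per monomial occurring on either side."""
    equations = []
    for mono in sorted(set(lhs) | set(rhs)):
        equations.append((lhs.get(mono, "0"), rhs.get(mono, "0")))
    return equations

# left side:  alpha0 + alpha1 * v      right side:  beta0 + beta2 * v^2
lhs = {(0,): "alpha0", (1,): "alpha1"}
rhs = {(0,): "beta0", (2,): "beta2"}

eqs = coefficient_abstraction(lhs, rhs)
# monomial 1:   alpha0 = beta0
# monomial v:   alpha1 = 0
# monomial v^2: 0      = beta2
assert eqs == [("alpha0", "beta0"), ("alpha1", "0"), ("0", "beta2")]
```

This is exactly the step that breaks down when the unknown adversary polynomial can cancel a denominator, which is why the SMT check described above is needed first.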
In this one here, the new equation says that for all indices i, this parameter equals 0, and in the other case we add the negation, because we want to be exhaustive. By doing this, we get two simpler systems: the first is simpler because the parameter disappears everywhere, and the second is simpler because now we can divide by rho i star, since it is not 0.

To conclude, I want to give some details about the implementation. Our algorithm takes an input file that looks like this; I just want to show that with only a few lines we can describe the security experiment and the scheme that I showed you before. Our algorithm then performs a proof search, applying the rules in a smart way. There is a lot of freedom in how to apply the rules, so we define a heuristic algorithm that does things like, for example, whenever we need a case distinction, performing it on the parameter that appears most often in the system, in the hope of simplifying the system the most. The output of our algorithm looks like this, and it has three parts: an initial goal that, after being simplified a bit, was split into two cases with a case distinction on this parameter, and the two new cases could each be simplified to a contradiction. So this is a proof of security for the scheme I showed you before.

We evaluated our tool on real examples. Here you can see the scheme being analyzed, the security notion, and the time to prove security. The first set of schemes are assumptions, which could already be analyzed with previous tools. But for message authentication codes and signature schemes, our tool is the first one that can analyze them automatically and prove unbounded security. We also did synthesis, based on the previous synthesis in the Generic Group Analyzer.
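The case-distinction heuristic just mentioned (split on the parameter that occurs most often in the system) can be sketched in a few lines. Representing a system as a list of equations, each a list of the parameter names it mentions, is our simplification for illustration.

```python
from collections import Counter

def pick_split_parameter(system_equations):
    """Choose the parameter occurring most often across all equations,
    hoping a case distinction on it simplifies the system the most."""
    counts = Counter(p for eq in system_equations for p in eq)
    return counts.most_common(1)[0][0]

# rho1 appears in three equations, the others in fewer, so split on rho1:
system = [["rho1", "alpha0"], ["rho1", "beta2"], ["alpha0", "rho1"]]
assert pick_split_parameter(system) == "rho1"
```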
And we compare our algorithm, which is in this column, with the previous algorithm proving 2-time security. So this is the number of schemes that are proven 2-time secure with the previous tool, and this is the number of schemes that are proven secure with our tool. Of course, our notion of security is stronger, but as you can see, we cannot prove security for all of them. We don't know the reason: it may be that some of them are 2-time secure but not 3-time secure or more, and it may also be that we are solving a computationally hard problem; in fact, we have a timeout, so our algorithm had to stop and could not prove security.

To summarize, we developed methods to automatically prove the security of constructions in the generic group model. These methods are open source, and you can find them at this link. We also evaluated the performance on real constructions and on synthesized schemes. Thank you very much.

Any questions? Is it the case that if the solver terminates and says a scheme is insecure, it actually reveals an attack as well? Sorry? So sometimes you couldn't prove things because your tool didn't terminate. But if it terminated saying a scheme is insecure, does that also reveal an attack, because it is a solution to the equations? It depends on the scheme. For some schemes where we cannot apply any of our rules, that can indicate an attack. But for the schemes where the timeout stopped the algorithm, we don't know. So yes, you can derive attacks, but this has to be done manually; we don't have support for deriving attacks. I mean, if the algorithm does not terminate, you can check it manually and look for an attack, but our tool is not going to give you the explicit attack. Any other questions? So your tool currently handles computational problems. Do you have any idea whether you can extend it to decisional problems as well? Sorry? It handles computational problems, yes.
But are you planning to, do you have any idea whether you can extend this to decisional problems, so that you can handle things like encryption schemes? That's a nice question. I don't know; it seems that we would need a different approach, because I don't know how you would do it. You would need to find a way of translating a decisional assumption into a winning-constraints problem, and otherwise you would need a completely different approach. So I don't know; that could be the next step. Thank you. Any other questions? No questions, so let's thank the speaker again. Thank you.