As I already mentioned, this is joint work with my PhD supervisor, Alexander May. It is called Maximizing Small Root Bounds by Linearization, which is itself quite surprising, because we can use such a simple thing as linearization to obtain quite good results. And the subtitle, Applications to Small Secret Exponent RSA, is what I'm going to be talking about during this talk, so let's start. Small secret exponent RSA. Well, the RSA cryptosystem is as usual defined by public parameters N and e, and a secret parameter d. Encryption works by taking a message m and raising it to the e-th power to obtain a ciphertext, and decryption similarly works by taking a ciphertext and raising it to the d-th power to reveal the message again. So if you think about something like low-cost devices, you might be tempted to speed up the process of decryption. What options do we have for that? Well, one simple method to speed up decryption is to use a small parameter d, because then we don't have to do that much exponentiation to compute the message from the ciphertext. But it was already shown by Michael Wiener in 1990 that if we choose the parameter d to be smaller than N to the power 0.25, then this is insecure, which means we can reconstruct the parameter d just from the public parameters. This result was further improved by lattice-based techniques from Boneh and Durfee in 1999, who showed that it is insecure if d is smaller than N to the power 0.292. And what other options for speeding up the decryption process do we have? We might as well use the Chinese Remainder Theorem, because the party who knows the secret parameter d also knows the factorization of the modulus N. So what we can do is compute c to the power dp modulo p, where dp is just d modulo p minus one.
And a second value, c to the power dq modulo q, and combine those values by the Chinese Remainder Theorem to obtain c to the power d modulo N. And if we use this CRT variant of RSA, we might be tempted to additionally use small parameters dp and dq, so that this is even faster. But Jochemsz and May showed in 2007 that if we choose those parameters dp and dq to be smaller than N to the power 0.073, then this is also insecure, which means again we can reconstruct those parameters. So what is the contribution of this talk? Well, in this talk we want to investigate the result of Boneh and Durfee on the one hand, and on the other hand the result of Jochemsz and May. For Boneh and Durfee, we will present a simplified proof technique, because for those of you who know the proof of this result, it's not that easy; but in a minute you will see that we can formulate it very simply. And in the second part of this talk, we will investigate an open question posed by Jochemsz and May about the possibility of improving their result by doing something similar to Boneh and Durfee. Okay, so let's start with the attack of Boneh and Durfee and look at how it works. We start with the RSA key equation, e times d congruent to 1 modulo phi(N). This can also be written as e*d = 1 + k*phi(N) for some integer k, where I've expanded phi(N) as N + 1 - p - q. Now let's introduce two variables: a variable x for the value k, and a variable y for the value -p - q. And for the term N + 1, let's write capital A. Then if we take this equation and reduce it modulo e, we obtain a polynomial equation: 1 + Ax + xy = 0 modulo e.
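The CRT decryption and the key equation just described can be sanity-checked on toy numbers. A minimal sketch, with tiny primes I picked purely for illustration (real parameters would be far larger):

```python
# Toy RSA-CRT decryption and the Boneh-Durfee key equation.
# All numbers here are illustrative, not cryptographic-size.
from math import gcd

p, q = 1009, 1013
N = p * q
phi = (p - 1) * (q - 1)
d = 5                                   # small (insecure) secret exponent
assert gcd(d, phi) == 1
e = pow(d, -1, phi)                     # e*d ≡ 1 (mod phi(N))

# CRT decryption: two half-size exponentiations, then recombination.
dp, dq = d % (p - 1), d % (q - 1)
m = 42
c = pow(m, e, N)
mp, mq = pow(c, dp, p), pow(c, dq, q)
m_rec = (mp * q * pow(q, -1, p) + mq * p * pow(p, -1, q)) % N
assert m_rec == m

# Key equation: e*d = 1 + k*phi(N), with phi(N) = N + 1 - p - q.
k = (e * d - 1) // phi
A, x0, y0 = N + 1, k, -(p + q)
assert e * d == 1 + k * phi
# (x0, y0) is a root of f(x, y) = 1 + A*x + x*y modulo e:
assert (1 + A * x0 + x0 * y0) % e == 0
```

Substituting x0 = k and y0 = -(p+q) turns 1 + A·x0 + x0·y0 into 1 + k·phi(N) = e·d, which is why the polynomial vanishes modulo e.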
From the point of view of an attacker — maybe I'll say this first — we know that the polynomial on the left-hand side, 1 + Ax + xy, has a root modulo e, namely (x0, y0) = (k, -p - q). And if an attacker is able to find this root, then using -p - q we are able to factor the modulus, so we can say that RSA is broken. Boneh and Durfee employ the technique presented by Coppersmith to find small solutions to polynomial equations; using this method, we are able to find roots of a polynomial equation that are upper bounded by some values capital X and capital Y. And because it is an integral part of this talk, I would like to give you a short idea of how Coppersmith's algorithm works. First of all, we start by defining some collection of polynomials which all share the common root that we're looking for, modulo some value e to the power m, for some integer m. The first thing to notice here is that if we take a linear combination of a bunch of polynomials from this collection, then this linear combination also has the same root. Now our goal is to find a special linear combination g, namely one such that for all values x smaller than capital X and y smaller than capital Y, g evaluated at those points is smaller than e to the power m, the modulus that we defined above. Because what do we know then? On the one hand, because that's how we chose the polynomials, g(x0, y0) evaluates to zero modulo e to the m. On the other hand, because we chose the linear combination to be, let me say, small, we know that g evaluated at that point is smaller in absolute value than e to the power m. Those two facts combined give that g(x0, y0) is zero not only modulo e to the power m, but over the integers.
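The observation that linear combinations preserve the common root can be checked directly. A small sketch on the same kind of toy parameters (the collection and coefficients below are arbitrary examples of mine, chosen only to illustrate the closure property):

```python
# Linear combinations of polynomials sharing a root mod e keep that root.
# Toy parameters, same shape as the Boneh-Durfee polynomial f = 1 + A*x + x*y.
p, q = 1009, 1013
N = p * q
phi = (p - 1) * (q - 1)
d = 5
e = pow(d, -1, phi)
k = (e * d - 1) // phi
A, x0, y0 = N + 1, k, -(p + q)

collection = [
    lambda x, y: e,                     # the constant polynomial e
    lambda x, y: x * e,                 # x*e
    lambda x, y: 1 + A * x + x * y,     # f itself
]
for g in collection:
    assert g(x0, y0) % e == 0           # shared root modulo e (here m = 1)

coeffs = [7, -2, 3]                     # arbitrary integer coefficients
combo = sum(c * g(x0, y0) for c, g in zip(coeffs, collection))
assert combo % e == 0                   # the root survives the combination
```

The final step of the argument is then pure arithmetic: an integer that is divisible by e^m and has absolute value below e^m can only be zero.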
When we apply Coppersmith's algorithm to find such a root, the ultimate goal is to maximize these upper bounds on the variables, and this maximization is usually done by tweaking the collection of polynomials that we take in the beginning. So let me give you an example. This example refers to the Boneh-Durfee attack. On the right-hand side, you see again the polynomial f that we wish to find solutions for. We take a collection which consists of the polynomials e (the constant polynomial), x times e, f, y times e, and y times f. Notice that all these polynomials share the common root that we're looking for modulo e, so we took the integer m to be one here. Okay, and to find the linear combination that we're looking for, we employ lattice techniques. We take the polynomials as basis vectors of a lattice — actually, to be more precise, we take the coefficient vectors of the polynomials as basis vectors of the lattice. This looks something like this: in the rows, you see the coefficient vectors of the polynomials in our collection, and each entry in such a vector corresponds to one monomial, so the columns of this matrix correspond to the monomials that appear in the collection. These columns are further scaled by capital X and capital Y, but that's not so important here. What is important: in order to find a polynomial that fulfills our requirement to be, as I said, small, we have to derive a condition on the determinant of this lattice, so we need to be able to compute the determinant. Well, in this case, the determinant of the lattice equals the determinant of the basis matrix, and this is just the product of the entries on the diagonal, because we have a triangular lattice basis. If we run the algorithm, it puts out bounds on capital X and capital Y up to which we can find the solution of the initial polynomial.
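The basis matrix for this collection can be written down explicitly. A sketch with placeholder values for e, A, X and Y (the actual sizes don't matter for the structural point, which is that the matrix is triangular and its determinant is the diagonal product):

```python
# Coefficient matrix of the collection {e, x*e, f, y*e, y*f} with
# f = 1 + A*x + x*y, columns ordered by the monomials (1, x, xy, y, xy^2)
# and scaled by the bounds X and Y. Values are illustrative placeholders.
from math import prod

e, A = 65537, 1_000_000
X, Y = 2**10, 2**12

B = [
    [e,  0,     0,         0,     0       ],   # e
    [0,  e * X, 0,         0,     0       ],   # x*e
    [1,  A * X, X * Y,     0,     0       ],   # f = 1 + A*x + x*y
    [0,  0,     0,         e * Y, 0       ],   # y*e
    [0,  0,     A * X * Y, Y,     X * Y**2],   # y*f = y + A*xy + xy^2
]

def det(M):
    # textbook cofactor expansion; fine for a 5x5 integer matrix
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

# Lower triangular, so the determinant is the product of the diagonal.
diagonal = prod(B[i][i] for i in range(5))
assert det(B) == diagonal == e**3 * X**3 * Y**4
```

Removing a row (as in the improved Boneh-Durfee collection) destroys exactly this triangular structure, which is the difficulty discussed next.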
Well, Boneh and Durfee noticed that if we take just a subset of this collection, we actually improve the bound. For this very example, if we remove the polynomial y times e, we get a superior bound on capital X and capital Y. But the problem that occurs is that the basis matrix is now no longer triangular, and it's not that simple to compute the determinant of the corresponding lattice. To give you an intuition of what is actually done: you have to multiply the contributions from the Gram-Schmidt orthogonalization, and here, for the last basis vector, you can somehow see that the contribution of the xy² term is much larger and dominates the contribution of the y term. But showing this rigorously is quite involved, especially if you think about taking a collection that is not fixed as here, but a parameterized collection where the parameter m is variable; then this gets very complicated. And this is exactly where our new technique comes into play. This technique was presented at last year's Asiacrypt, and it's called unravelled linearization. It basically works in three steps, the first of which is performing a linearization on the initial polynomial. So if we take the polynomial from the Boneh-Durfee attack, 1 + xy + Ax, we perform a linearization, which is shown here by gluing together the terms 1 and xy. Let's call the resulting polynomial f-bar. The second step is to build a lattice basis just like before, but notice that we take here the collection C', which is exactly the collection that was suggested by Boneh and Durfee for the improved bound — we already removed the one polynomial y times e. The last step of unravelled linearization, as the name suggests, is to unravel the linearization. What does that mean? Remember that in the first step we introduced a new variable u for the term 1 + xy.
Well, we can solve this for xy, and this gives xy = u - 1. And now the trick is: when building the lattice basis in the second step, every time you encounter a monomial xy, you simply replace it by u - 1. Okay, how does this look? Let's look at the example. Here's the collection C' that we were just talking about, and we start by plugging in the coefficient vectors of the polynomials. So far, no problem. But in Boneh and Durfee's approach, the problem occurred for the last polynomial, where we no longer had a triangular basis matrix. Let's see what happens here when we take the polynomial y times f-bar. If we plug in f-bar = u + Ax, we get y times (u + Ax), which is uy + A times xy. And now remember the replacement that we suggested: if we plug in xy = u - 1 and multiply out, we get uy + A times u minus A. If we write this now into the lattice basis, you see that the only new monomial that occurs is u times y, so we have a perfectly triangular lattice basis. Once again, let's compare this with the approach of Boneh and Durfee. For the first three polynomials, the contribution to the determinant is pretty much the same in each case. Notice also that the upper bound U on the left-hand side is, in terms of size, pretty much equivalent to X times Y. And, as our intuition suggested, in the case of Boneh and Durfee on the right-hand side, the dominating term xy² is just transferred to uy, and the, let's say, error term y does not appear anymore in our new lattice basis. So let's summarize this Boneh-Durfee attack in short. The original analysis was very complicated because we had the non-triangular basis matrix, and there were several approaches to make this simpler, but none of them fully succeeded.
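The unravelling step is a polynomial identity, so it can be verified mechanically. A quick sketch checking, at random points, that y·f-bar rewritten via xy = u - 1 agrees with the direct product (a spot check, not a proof):

```python
# Unravelling the linearization u = 1 + x*y: since x*y = u - 1, the product
# y * fbar = u*y + A*x*y rewrites to u*y + A*u - A, so the only new
# monomial is u*y. Identity check at random points; A is a placeholder.
import random

A = 123456789
for _ in range(100):
    x = random.randrange(-10**6, 10**6)
    y = random.randrange(-10**6, 10**6)
    u = 1 + x * y                   # the glued term
    fbar = u + A * x                # f = 1 + A*x + x*y written in u
    assert y * fbar == u * y + A * u - A
```

Since the identity y·f-bar = u·y + A·u - A holds for all x and y, the replacement is exact, not an approximation, and that is what keeps the basis triangular.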
The new analysis, using unravelled linearization, results in a perfectly triangular lattice basis, and so we may say this is a very simple and natural analysis. Now, for the second part of the talk, I would like to look at another application of unravelled linearization, namely the case of RSA with small CRT exponents. In that case, we start with the two key equations: e times dp congruent to 1 modulo p minus 1, and e times dq congruent to 1 modulo q minus 1. Similarly, we can resolve the modular reduction by introducing two new variables, k and l, giving e*dp = 1 + k(p - 1) and e*dq = 1 + l(q - 1). We transform this system on the right-hand side into a single equation by doing the following: we take the RSA equation, N = p times q, solve it for p, and plug that into the first equation. Finally, from the resulting system of two equations, we eliminate the variable q. This results in one quite large equation in four unknowns, and the problem is now to find the solution of this equation. Jochemsz and May did so in their Crypto paper from 2007, and they showed that we can find the solution if dp and dq are both smaller than N to the power 0.073. But they observed a strange thing, namely that their experimental results were better than the theoretical analysis suggested. So their conjecture was: maybe we can do something like Boneh and Durfee, taking just a subset of the collection of polynomials to improve this bound. But unfortunately — or fortunately for me — they were not able to do so. So let's see what happens if we apply unravelled linearization. Okay, we start with exactly the same polynomial they did. And now we introduce new variables — not one for each monomial that occurs here; we glue together terms as can be seen here. This gives us five equations defining the newly introduced variables.
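The two CRT key equations and the elimination of p and q can be checked numerically. One compact way to write the eliminated equation is (e·dp - 1 + k)(e·dq - 1 + l) = N·k·l, which follows from e·dp - 1 + k = k·p and e·dq - 1 + l = l·q; the exact grouping of terms in the Jochemsz-May polynomial differs, but the underlying relation is the same. A toy sketch:

```python
# CRT key equations with toy primes, and the single relation obtained
# after eliminating p and q via N = p*q. Illustrative sizes only.
from math import gcd

p, q = 1009, 1013
N = p * q
e = 5                                  # toy public exponent
assert gcd(e, (p - 1) * (q - 1)) == 1
dp = pow(e, -1, p - 1)                 # e*dp ≡ 1 (mod p-1)
dq = pow(e, -1, q - 1)                 # e*dq ≡ 1 (mod q-1)

k = (e * dp - 1) // (p - 1)            # e*dp = 1 + k*(p-1)
l = (e * dq - 1) // (q - 1)            # e*dq = 1 + l*(q-1)
assert e * dp == 1 + k * (p - 1)
assert e * dq == 1 + l * (q - 1)

# Rearranging: e*dp - 1 + k = k*p and e*dq - 1 + l = l*q.
# Multiplying both and using N = p*q eliminates p and q:
assert (e * dp - 1 + k) * (e * dq - 1 + l) == N * k * l
```

The unknowns in the resulting equation are exactly dp, dq, k and l, which is the four-unknown equation mentioned above.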
And if you look at those five equations and count the unknowns, we get nine unknowns. So we try to eliminate the, let's say, old variables dp, dq, k and l, and if we do so, we get a relation between the new variables. You see, this is slightly more complex than in the case of Boneh and Durfee, but what we do is pretty much the same: each time we encounter the monomial vwx when building up the lattice basis, we replace it by the term on the right-hand side. And as it turns out, in this case, the theoretical analysis perfectly matches the experimental data. To show you this in a small graphic: the dashed line is the result from Jochemsz and May, where I plotted the bound for dp and dq that we can achieve for a specific lattice dimension, and our new approach is the blue line, which is drawn above it. You see, for all lattice dimensions this is a little bit better. And this also has practical relevance, because this range covers pretty much the lattice dimensions that are achievable in practice, and here, for each lattice dimension, we are quite a bit better. So we might say that unravelled linearization captures the sublattice structure — the sublattice being just what results from removing some of the polynomials in the collection. But unfortunately, if you extend those two lines out, letting the lattice dimension go to infinity, the asymptotic bound is not improved. So, differently from the case of Boneh and Durfee, we were not able to achieve a better asymptotic result, but as I just said, for practical applications this might really be useful. All right, let's quickly summarize. I showed you some applications of unravelled linearization to small secret exponent RSA. We started with the Boneh-Durfee attack for small parameter d, and we may say that it used to have a very complicated analysis, and now, using unravelled linearization, it became very simple and very natural.
In the second part, we looked at the Jochemsz-May attack on CRT-RSA, and we can say that we closed the gap between the theoretical and experimental results; unfortunately, we answered the question about improving the asymptotic bound in the negative. Well, what can we say about unravelled linearization in general? I think it's a very simple technique, and it is probably useful in many other lattice-based attacks. So, thank you.