Thank you for the introduction. I'll indeed be talking about short generators without quantum computers, specifically the case of multiquadratics. This is joint work with Jens Bauch, Daniel J. Bernstein, Henry de Valence, and Tanja Lange. First, some motivation for this work. In almost all lattice-based crypto papers you find some variation of this statement: lattice-based crypto is secure because lattice problems are hard, or very hard, or something along those lines. But this is a very broad statement, so naturally many researchers, among them us, ask how hard these problems really are, and which cryptosystems are exactly how hard. This is not an easy question to answer, because multiple attack avenues are still showing progress. First of all there is sieving, where the first algorithm, by Nguyen and Vidick in 2008, runs in time 2^((c + o(1))n) with c = 0.415. In the years that followed the running time stayed exponential, but by now the exponent is down to c = 0.292. It has been quiet for two years now; maybe this is the limit, but maybe not. Quantumly you can improve this exponent even more. Then we move to a more specific case, namely cyclotomic ideal-lattice problems, where a lot of progress is happening even in the pre-quantum attacks against these ideal lattices. I expect the next talk will give more details on this, but even this year the complexity of these attacks kept going down. Quantum attacks against this specific type of ideal-lattice problem have shown even more progress recently. In 2015, Biasse and Song, building on work of Campbell, Groves, and Shepherd, showed that there is actually a polynomial-time quantum algorithm against these cyclotomic ideals if short generators are used.
In 2016, Cramer, Ducas, Peikert, and Regev gave a general analysis showing that this works for arbitrary principal ideals, and in later work Cramer, Ducas, and Wesolowski showed that you can generalize this to any ideal. So again we see a lot of progress in attacking cyclotomic ideal-based lattice problems. There is some probability that the last slide convinced you that using cyclotomics in your crypto is actually scary, because so much progress is being made. That is not necessarily true, but there are alternatives. First, you can use LWE instead of Ring-LWE: just remove the ideal structure from your crypto, which removes these problems, but the key size does go up for the same security. Another option is to build your crypto over a different ring. In NTRU Prime, Bernstein, Chuengsatiansup, Lange, and myself propose to use the field defined by x^p − x − 1, which doesn't have all these subfields and ring homomorphisms that could be scary: it has prime degree and a large Galois group. So that is a different field to try. In this talk, instead, we investigate a specific case: switching from cyclotomic fields to multiquadratic number fields. This is a specific type of field seen a lot in textbooks because it is very easy to analyze; a lot is known about it, and it always looks like Q adjoined with some square roots of primes. As a case study we found a reasonable cryptographic system that can use multiquadratics, proposed in 2010 by Smart and Vercauteren and based on a work by Craig Gentry. It works as follows. As a parameter it has the ring R = Z[α] for some algebraic integer α. The secret key is a very short element g of this ring, where "short" is measured in terms of norms. The public key is a basis matrix for the principal ideal gR, usually given in Hermite normal form.
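As a toy illustration of such a public key, here is a sketch, not from the paper, of the ideal-basis idea in the tiny ring Z[√2]: a secret generator g = 3 + √2 gives the ideal gZ[√2] with Z-basis {g, √2·g}, and the Hermite normal form of that basis is what would be published. The function name hnf_2x2 and the 2×2 setting are my own simplification.

```python
def hnf_2x2(M):
    """Row-style Hermite normal form of a nonsingular 2x2 integer matrix:
    upper triangular, positive pivots, reduced off-diagonal entry."""
    M = [list(M[0]), list(M[1])]
    # Euclidean elimination on the first column.
    while M[1][0] != 0:
        q = M[0][0] // M[1][0]
        M[0] = [M[0][i] - q * M[1][i] for i in range(2)]
        M[0], M[1] = M[1], M[0]
    if M[0][0] < 0:
        M[0] = [-x for x in M[0]]
    if M[1][1] < 0:
        M[1] = [-x for x in M[1]]
    # Reduce the entry above the second pivot.
    M[0][1] %= M[1][1]
    return M

# Rows are coefficients with respect to the basis (1, sqrt(2)).
pk  = hnf_2x2([[3, 1], [2, 3]])   # basis {g, sqrt(2)*g} for g = 3 + sqrt(2)
pk2 = hnf_2x2([[5, 4], [8, 5]])   # same ideal, generator (1+sqrt(2))*g = 5 + 4*sqrt(2)
```

Note that the different generator (1+√2)·g of the same ideal yields the same HNF, which is the point: the public key reveals the ideal, but not which generator was used.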
Smart and Vercauteren already showed that if you take n on the order of λ², then you get a target security level of 2^λ against standard lattice attacks; so an attack like BKZ should not find the short generator. This looks like a reasonable multiquadratic cryptosystem: it holds up for exponential time against such attacks. I will now dedicate the rest of this talk to telling you why it is nevertheless a massively bad idea to do this. First we need some math, which I will go through rather quickly. We need a number field L, and we call its degree the dimension of L viewed as a finite-dimensional vector space over Q. The ring of integers of L, the set of algebraic integers in L, we denote by O_L, and the invertible elements of this ring form the unit group, denoted O_L^×. In this setting the security of the cryptosystem rests on the following problem: given the public basis for g·O_L, recover the small generator g in the ring of integers. We do this specifically in a multiquadratic field, which for this talk is Q adjoined with n square roots of distinct primes, so that the degree of the field is 2^n. This is not a new problem; many people have looked at it, and the recovery strategy has roughly two and a half steps. Step 0 is to compute the unit group O_L^×, which is easy for cyclotomics, but for general number fields this is actually quite hard. Step 1 is to find some generator u·g of the principal ideal. An algorithm for this was given by Buchmann already in 1990, running in subexponential time; it was improved in 2014 by Biasse and Fieker, but it is still subexponential. There is also a quantum polynomial-time algorithm for this, which I already mentioned before.
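To give a feel for step 0 in the quadratic base case, here is a tiny sketch, assuming nothing from the paper's actual algorithm: a brute-force search for the fundamental solution of the Pell equation a² − d·b² = ±1, i.e. the smallest unit a + b√d with b ≥ 1 in the order Z[√d]. The name fundamental_unit is mine, and this naive search is only feasible for very small d; the size of the fundamental unit is exactly why unit-group computation is hard in general.

```python
import math

def fundamental_unit(d):
    """Smallest unit a + b*sqrt(d) with b >= 1 in Z[sqrt(d)],
    found by brute-force search over the Pell equation a^2 - d*b^2 = +-1."""
    b = 1
    while True:
        for n in (d * b * b - 1, d * b * b + 1):
            a = math.isqrt(n)
            if a * a == n:          # n is a perfect square: found a solution
                return a, b
        b += 1
```

For example, fundamental_unit(2) returns (1, 1), i.e. the unit 1 + √2 of norm −1.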
Then the final step, step 2, is to actually recover g from u·g, and you do this with a special lattice, which I'll say more about on the next slide. The idea is that you can solve a bounded-distance-decoding problem: you place u·g in this lattice picture and you can find u back. In 2014, Campbell, Groves, and Shepherd pointed out that for cyclotomics this is actually quite easy if the plus part of the class number is rather small. In 2015, Schank confirmed this experimentally, and Cramer, Ducas, Peikert, and Regev actually proved it. So for cyclotomics this step is easy. Now I'll go a bit more into this step, and for the rest of the talk we will ignore steps 0 and 1, because in our paper we replace them by a whole different algorithm that makes everything pre-quantum quasi-polynomial. So what is this Log map? Given a number field L with complex embeddings σ1, …, σn, the Log map takes an element x of the field to the vector (log|σ1(x)|, …, log|σn(x)|). So this is a vector in R^n, and why is this particular map important? Because of the Dirichlet unit theorem: if you take the units of L and map them like this, the image Λ is actually a lattice, specifically a lattice of rank r + c − 1, where r is the number of real embeddings and c the number of pairs of complex embeddings. We can use that to recover g from h. Given h = u·g for some unit u, we have Log h = Log u + Log g, so Log g lies in the shifted lattice Log h − Λ. This means we can use bounded-distance decoding on Log h to find Log g. I'll use the remaining minutes to explain what we replaced steps 0 and 1 with. Our algorithm is based on four ideas, and the first of these ideas is actually the reason we chose multiquadratic fields: they have the property of having a huge number of subfields.
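A minimal numeric sketch of the Log map and the decoding idea, for the real quadratic field Q(√2) (so r = 2, c = 0, and the log-unit lattice has rank 1); the helper names and toy elements are my own, not the paper's:

```python
import math

SQRT2 = math.sqrt(2)

def log_vec(a, b):
    """Log map on Q(sqrt(2)): x = a + b*sqrt(2) maps to
    (log|sigma1(x)|, log|sigma2(x)|) for the two real embeddings
    sigma1(sqrt(2)) = sqrt(2), sigma2(sqrt(2)) = -sqrt(2)."""
    return (math.log(abs(a + b * SQRT2)), math.log(abs(a - b * SQRT2)))

u = (1, 1)   # the unit 1 + sqrt(2)
g = (3, 1)   # a "secret" generator g = 3 + sqrt(2)
h = (5, 4)   # h = u*g, since (1+sqrt(2))(3+sqrt(2)) = 5 + 4*sqrt(2)

lu, lg, lh = log_vec(*u), log_vec(*g), log_vec(*h)
# Log is additive on products: Log h = Log u + Log g.
# Log u lies in the log-unit lattice; its coordinates sum to 0,
# since the norm of a unit is +-1.
```

Because Log h = Log u + Log g, decoding Log h against the log-unit lattice Λ = Z·Log(1+√2) recovers Log u and hence g, provided g is short enough for bounded-distance decoding to succeed.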
So even for three adjoined square roots you get this huge graph of subfields. I actually only need three of them, plus recursion, and for this small example that doesn't look like it reduces the space a lot, but it grows bigger pretty quickly. The second idea is the subfield relation. Take an automorphism σ of L, specifically the one that negates the last adjoined square root, and define K_σ as the field fixed by that automorphism; in the previous example this is the leftmost field in the second layer. It has the property that the relative norm of any x down to this field, defined as N_σ(x) = x·σ(x), is indeed an element of that field. Similarly, define τ as the automorphism that negates the second-to-last square root and fixes the others, so the field K_τ it fixes would be the middle one, and the field fixed by στ is the rightmost one. Now we can write down some equations: the norm down to K_σ, the norm down to K_τ, and σ applied to the norm down to K_στ, which is σ(x·στ(x)) = σ(x)·τ(x). Rewriting these, we get an equation for x²: x² = N_σ(x)·N_τ(x)/σ(N_στ(x)). So the square of any x in the field L can be written as a combination of norms of x into the fields down below: a relation between elements of the big field and its subfields. We can actually use this to compute the unit group, because this subfield relation also holds for all units. If we solve this recursively, assuming we have the unit groups of the three subfields and forming from them a group U_L, then we can prove that U_L is contained in the unit group, but more importantly, that it contains the squares of the unit group. So if we can find a basis for that, we can take square roots and find the whole unit group.
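The relation above can be checked exactly with a few lines of integer arithmetic in Q(√2, √3); the tuple representation and helper names here are my own:

```python
def mul(x, y):
    """Multiply in Z[sqrt(2), sqrt(3)] with basis (1, sqrt2, sqrt3, sqrt6)."""
    a, b, c, e = x
    A, B, C, E = y
    return (a*A + 2*b*B + 3*c*C + 6*e*E,       # coefficient of 1
            a*B + b*A + 3*(c*E + e*C),          # coefficient of sqrt2
            a*C + c*A + 2*(b*E + e*B),          # coefficient of sqrt3
            a*E + e*A + b*C + c*B)              # coefficient of sqrt6

sigma = lambda x: (x[0], -x[1], x[2], -x[3])   # negates sqrt2 (and so sqrt6)
tau   = lambda x: (x[0], x[1], -x[2], -x[3])   # negates sqrt3 (and so sqrt6)

def n_sigma(x):    return mul(x, sigma(x))      # relative norm to K_sigma
def n_tau(x):      return mul(x, tau(x))        # relative norm to K_tau
def n_sigmatau(x): return mul(x, sigma(tau(x))) # relative norm to K_sigmatau

# Verify x^2 * sigma(N_sigmatau(x)) == N_sigma(x) * N_tau(x), i.e.
# x^2 = N_sigma(x) * N_tau(x) / sigma(N_sigmatau(x)), without dividing.
x = (1, 2, 3, 4)
lhs = mul(mul(x, x), sigma(n_sigmatau(x)))
rhs = mul(n_sigma(x), n_tau(x))
```

Working with the multiplied-out form avoids division and keeps everything in exact integer arithmetic.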
In 1991 already, Adleman used an idea for the number field sieve that does exactly this. You can define many quadratic characters, and the intersection of their kernels, intersected with this group U_L, will almost certainly, if you have enough quadratic characters, give you exactly the squares of the unit group. Then you can compute all of this by linear algebra and get the unit group. The last idea is to actually recover the generator. For this you first need the fact that, given an ideal in the big field L, you can quickly compute its relative norm ideals down to the three subfields. Then we apply our algorithm recursively, because each subfield is also just a multiquadratic field, to find a generator h_σ of the ideal pushed down to K_σ, which is some unit times the relative norm of the generator we are looking for. You can do the same for the other subfields, computing h_τ and h_στ, and you combine them as in the equation on the screen: h = h_σ·h_τ/σ(h_στ). By the subfield relation, this h is actually some unit times g². So you have recursively computed a generator of the ideal, not the small one yet, but a generator. You could almost just take a square root, but the unit is in the way. Luckily we can again use these quadratic characters to find a small unit v such that v·h actually is a square, so you can take its square root. The result is some unit u′ times g, and you can use bounded-distance decoding to find your g. This all results in the algorithm on the slide, which I won't go into; I would just like to point out that all the steps are a polynomial function of n and b, where b is a bound on the size of the coefficients occurring in the algorithm, and this b is quasi-polynomial in the cases we studied.
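A hedged sketch of the character idea in the simplest setting, Z[√2], as my own illustration rather than the paper's construction: reducing modulo primes p for which 2 is a square mod p gives ring homomorphisms to F_p (one per square root of 2 mod p), and composing with the Legendre symbol gives quadratic characters that are trivial on squares. A genuine nonsquare is then caught by some character with high probability.

```python
def legendre(x, p):
    """Legendre symbol of x modulo an odd prime p (Euler's criterion)."""
    x %= p
    if x == 0:
        return 0
    return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

def chars(a, b):
    """Quadratic characters of a + b*sqrt(2) in Z[sqrt(2)]: for each prime p
    with 2 a square mod p, map sqrt(2) to each square root r of 2 mod p
    and take the Legendre symbol of the image a + b*r."""
    primes_and_roots = [(7, (3, 4)), (17, (6, 11)), (23, (5, 18)), (31, (8, 23))]
    return [legendre(a + b * r, p) for p, roots in primes_and_roots for r in roots]

u   = (1, 1)   # 1 + sqrt(2): a unit, but not a square in Z[sqrt(2)]
usq = (3, 2)   # (1 + sqrt(2))^2 = 3 + 2*sqrt(2): a square
```

A square such as 3 + 2√2 passes every character with value +1, while some character evaluates to −1 on the nonsquare 1 + √2; with enough characters, this lets linear algebra over F_2 cut a group like U_L down to its squares.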
So this whole algorithm is actually quasi-polynomial, where for these fields the best pre-quantum algorithms were subexponential. That is nice, but we also implemented it to show that it works. In the first two columns you see unit-group computations in Sage, for a towered number field and for an absolute number field, and in the third column our unit-group computation for the same field. Where Sage stopped working at dimension 64, ours is still quite feasible at that dimension and much higher. In the next column we took a slightly larger multiquadratic field, adjoining the first n primes above n², and the times go up, but it is still much better than what Sage could do. In the last two columns you see that after the unit-group computation, the actual attack, recovering the generator g, is comparatively easy. That it is fast is nice, but does it actually work? That is also important. As it turns out, if you take the multiquadratic fields built from the very first primes, the probability of success actually decreases as n grows; but if you take your primes slightly larger, for instance the first n primes after n, or the first n primes after n², then this probability converges to one and the attack always works. Are there any questions? Yes, there's a question back there. [Question:] What are the discriminants of the fields for which you calculate the unit group? I mean the size, not the value. [Answer:] To keep the lowest recursion level small, you have to keep the discriminants of the quadratic fields small, and you can compute that everything remains quasi-polynomial if you take the adjoined square roots below n^5 or so, or even somewhat beyond that. So the discriminants are quite big, but it remains quasi-polynomial: computing a unit in a quadratic field, which is like the discriminant to the power one half or something, is still quasi-polynomial. Another question.
Sorry, I'm giving up the microphone. [Question:] When you say the probability of success, which step fails? [Answer:] The g we recover is actually not your g; it is g multiplied by some unit. [Question:] Of the first two rows, the first seems to be monotonically decreasing most of the time, then it increases and then decreases, while the last row is increasing. Can you explain this behavior? [Answer:] Well, not theoretically, but it seems that there are some quadratic fields that make the probability of failure a lot larger; one of these is adjoining √2 or √3, which makes the success probability much smaller, and that would explain the decrease in the first row. Then, as the primes you use go up, the probability of getting such an evil prime in the composition becomes smaller, so that would probably be the reason, but I don't have a proof for this.