Hi, I'm Koen de Boer. This talk is about random self-reducibility of ideal SVP via Arakelov random walks. This is joint work with Léo Ducas, Alice Pellet-Mary and Benjamin Wesolowski. This talk will be about a worst-case to average-case reduction for the shortest vector problem in ideal lattices. We do this by considering the space of all ideal lattices, which is called the Arakelov class group. In our main result we consider the Hermite shortest vector problem instead of the ordinary shortest vector problem. In the Hermite shortest vector problem we ask for a vector that is short relative to the volume of the lattice, which is a more absolute notion; and to avoid having the volume of the lattice in the equation, we just assume that the lattice has volume 1, which is also particularly nice in the context of Arakelov theory. So in our context the Hermite SVP problem reads as follows: given a lattice of volume 1, find a vector shorter than gamma, where gamma is a positive real number. Our main result, specialized to a cyclotomic ring of rank n, reads as follows. If you can solve gamma-Hermite SVP on random ideal lattices, you can also solve gamma-prime-Hermite SVP on any ideal lattice, at the cost of a small blow-up factor. Under the extended Riemann hypothesis and mild assumptions on the class number of cyclotomic fields, this blow-up factor can be shown to be within a big O of the square root of n. This theorem essentially states that worst-case and average-case ideal lattices in cyclotomic rings are approximately equally hard to solve Hermite SVP on. So there is not a big variation in the hardness of solving Hermite SVP on ideal lattices. How does our result relate to the literature? Well, in 2010 Gentry also derived a worst-case to average-case reduction for ideal lattices. We improve on this result in three ways. 1. Our reduction is tighter: the blow-up factor is smaller. 2. Our description of the average-case distribution is mathematically more pleasant.
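As a concrete illustration of this problem statement, here is a minimal numpy sketch, not from the paper and with helper names of my own choosing, of normalizing a lattice basis to volume 1 and checking the gamma-Hermite-SVP success condition:

```python
import numpy as np

def normalize_to_volume_one(B):
    """Rescale a lattice basis B (rows = basis vectors) so the lattice
    has covolume 1, matching the normalization used in the talk."""
    n = B.shape[0]
    return B / abs(np.linalg.det(B)) ** (1.0 / n)

def solves_hermite_svp(v, gamma):
    """A gamma-Hermite-SVP solution for a volume-1 lattice is any
    nonzero lattice vector of length at most gamma."""
    return 0 < np.linalg.norm(v) <= gamma

# Toy example: rescaling [[4, 0], [0, 1]] (volume 4) to volume 1
# gives [[2, 0], [0, 0.5]]; its second basis vector has length 0.5.
B = normalize_to_volume_one(np.array([[4.0, 0.0], [0.0, 1.0]]))
print(solves_hermite_svp(B[1], 0.6))  # True: 0.5 <= 0.6
```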
We use the uniform distribution over the Arakelov class group. And 3. We do not require a factoring oracle, so our reduction can also be run in the classical world. The machinery of Arakelov class groups is actually already used in recent cryptanalysis, though not explicitly stated so. In those works it is viewed more in an algorithmic sense, using a matrix of relations in a combined class-group-and-unit-torus object. I would like to stress that the strategy of the proof of the self-reduction in this work is exactly the same as the one for the discrete logarithm problem on elliptic curves. In those works they show that the discrete logarithm problem is essentially equally hard within an isogeny class. This is proven by a random walk over the isogeny graph. To define ideal lattices we need orders; those are sort of discrete rings. To get started, we are going to consider real quadratic orders with the principal ideal property first. Real quadratic essentially means that the order consists of just the integers with some extra number, a square root of a positive integer: think of the square root of 2. The principal ideal property means that all ideals are principal; that means that they can be generated by one element. Equivalently, this property means that the class number is 1. Orders being real quadratic means that they can be embedded into R2, and by R2 we mean the ring R2 with component-wise addition and component-wise multiplication. As the order must contain Z, the ordinary integers, we have to put that in first, and the only possible choice is along the diagonal: 1 is sent to the pair (1, 1); it acts as the unit on both axes. There is also the square root in this order, which is sent somewhere not on the diagonal, in a very specific way, and this fixes the entire embedding. Note that this embedding makes the order into a lattice and a ring at the same time. Having the order properly embedded, how do we construct an ideal lattice?
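To make the embedding concrete, here is a small sketch for the example order Z[sqrt(2)] (my own illustration, not code from the talk): the map a + b*sqrt(2) -> (a + b*sqrt(2), a - b*sqrt(2)) sends 1 to (1, 1) on the diagonal and respects component-wise multiplication.

```python
import math

SQRT2 = math.sqrt(2)

def embed(a, b):
    """Embed a + b*sqrt(2) into R^2 via the two real embeddings,
    sqrt(2) -> +sqrt(2) and sqrt(2) -> -sqrt(2); 1 goes to (1, 1)."""
    return (a + b * SQRT2, a - b * SQRT2)

def mul(x, y):
    """Component-wise multiplication in the ring R^2."""
    return (x[0] * y[0], x[1] * y[1])

# Ring check: (1 + sqrt(2)) * (3 - sqrt(2)) = 1 + 2*sqrt(2).
lhs = mul(embed(1, 1), embed(3, -1))
rhs = embed(1, 2)
print(all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs)))  # True
```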
To obtain an ideal lattice we just multiply each point in the order by some alpha in this embedding space, the ring R2. Because we have the principal ideal property, we can reach any ideal of the order in this way. By varying alpha, we obtain many different ideal lattices, like this. Though at the beginning of this talk we mentioned that we only wanted to consider ideal lattices with a fixed volume. But as this lattice is more sparse, the volume, or rather covolume, has increased. To resolve this, we scale the lattice back. So if we only want to consider lattices with a fixed volume, we must constrain ourselves to alphas on the hyperbola, where the negative branches are omitted, because we can just multiply by minus 1. So up to scaling we can get to any ideal lattice in this way. But we do not only want to consider lattices up to scaling; we also want to identify lattices that just look alike, that is, isometric lattices. They have the same lengths and the same angles, so they are not substantially different in a geometric sense. So we can forget about the left part of this hyperbola as well. So to define an ideal lattice up to isometry, we just have to choose an alpha on the right part of the hyperbola. Using a parametrization of the hyperbola, we can actually parametrize the space of ideal lattices with a single real variable t. But there is more: not all of these t define different ideal lattices. In fact, after a while, two different t's define the same ideal lattice; it is a periodic thing. So to parametrize the space of ideal lattices, we do not even need the full real line. We just need a segment with the endpoints joined, and that is a circle. So the space of all ideal lattices up to isometry is just a circle, for real quadratic orders with the principal ideal property. The space of ideal lattices up to isometry is called the Arakelov class group, denoted here with Picard zero.
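The periodicity can be made concrete for Z[sqrt(2)], a detail this sketch assumes rather than something stated in the talk: the unit 3 + 2*sqrt(2) = (1 + sqrt(2))^2 has norm (3 + 2*sqrt(2))*(3 - 2*sqrt(2)) = 9 - 8 = 1, so its embedding lies on the hyperbola x*y = 1, and multiplying the order by it permutes the lattice points. Hence alpha and alpha shifted by t = log(3 + 2*sqrt(2)) give the same ideal lattice.

```python
import math

SQRT2 = math.sqrt(2)

def embed(a, b):
    """a + b*sqrt(2) -> (a + b*sqrt(2), a - b*sqrt(2))."""
    return (a + b * SQRT2, a - b * SQRT2)

# eps = 3 + 2*sqrt(2) has norm 1, so its embedding is on the hyperbola.
eps = embed(3, 2)
print(abs(eps[0] * eps[1] - 1.0) < 1e-9)  # True

# Multiplying by eps maps Z[sqrt(2)] onto itself:
# (3 + 2*sqrt(2)) * (a + b*sqrt(2)) = (3a + 4b) + (2a + 3b)*sqrt(2).
a, b = 5, -7
prod = (eps[0] * embed(a, b)[0], eps[1] * embed(a, b)[1])
image = embed(3 * a + 4 * b, 2 * a + 3 * b)
print(max(abs(p - q) for p, q in zip(prod, image)) < 1e-9)  # True
```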
For these specific real quadratic orders with the principal ideal property, the Arakelov class group is isomorphic to the one-dimensional torus, the circle. If we remove the quadratic assumption, that is, if we allow orders of higher degree, the embedding space will have a higher dimension as well, say r. Then the Arakelov class group will be a torus of dimension r minus one. We lose one dimension due to the rescaling, the renormalization of the ideal lattices we consider. If we remove the real assumption, the order needs complex components to be embedded in; the c here denotes the number of conjugate pairs of complex embeddings. The Arakelov class group is still a torus, but slightly bigger in dimension. By the way, note the loss in dimension again: the ambient space has dimension r plus 2c as a vector space over the reals, but the torus only has dimension r plus c minus one. This loss is because of modding out the isometries: on each complex dimension there is a unit circle that does not really change the lengths of the respective components of the lattice. If we remove the principal ideal property, that is, if we do have a non-trivial ideal class group, things get slightly more involved. In this case we get many copies of the torus, as many as there are class group elements. I want to stress here that this direct product of the torus and the class group is only meant in a topological way; the underlying group structure is not a direct product. Why Arakelov class groups? Well, we want to think of ideal lattices as points in this group that are related by the group structure. So it will be of importance how to move, or how to walk, from one lattice to another. And this random walking is how we get from a worst-case lattice to an average-case lattice. So our reduction algorithm randomly walks over the Arakelov class group, like this. When it reaches the average case, it asks for a shortest vector, v.
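The dimension count above is simple enough to state as a one-liner; the example signatures below are illustrative:

```python
def torus_dimension(r, c):
    """Dimension of the torus part of the Arakelov class group of an
    order with r real embeddings and c conjugate pairs of complex
    embeddings: r + c - 1 (one dimension is lost to rescaling)."""
    return r + c - 1

print(torus_dimension(2, 0))  # real quadratic order: a circle, dimension 1
print(torus_dimension(0, 4))  # a totally complex degree-8 field: dimension 3
```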
Then it moves back along the path, but the vector v becomes bigger with each step back. So it is important that the walk isn't too long: the longer the walk, the larger the increase in the length of the vector v. But how do we move, or walk, in the Arakelov class group? There are two ways of walking. The first way is very continuous, and we have already sort of seen it. Parametrizing the set of ideal lattices as a circle, moving along the circle continuously transforms one ideal lattice into another. If we tried to do a worst-case to average-case reduction with only this continuous walking procedure, it wouldn't work. This is because this way of walking distorts the SVP problem very much; the torus is just too big to reach with only continuous walks. So we need another way of walking. This one is more discrete: the walking happens by means of sparsification. This consists of removing points from the ideal lattice in such a way that the remaining points also form an ideal lattice. In this way we get a different but related ideal lattice. Because of the sparsification, though, we increased the volume, so we need to scale the lattice back to get back into the Arakelov class group, which only consists of ideal lattices of volume 1. Calling the SVP oracle gives a solution in the sparsified lattice. To get a solution in the original lattice, we need to undo the scaling and put the points of the original lattice back to undo the sparsification. Note that, again, we increased the size of the vector slightly when going back to the original lattice. But you can see that the defect or deformation of the shortest vector problem is far less severe here than in the continuous step. So, to summarize, there is a continuous way of walking and a discrete way of walking by sparsification. By combining these two ways of walking, we can sort of cover the entire Arakelov class group without deforming the SVP problem too much. The intuition behind this is as follows.
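A minimal sketch of one sparsification step, under the simplifying assumption that we sparsify in the direction of the first basis vector (the actual walk picks a random index-p sublattice):

```python
import numpy as np

def sparsify(B, p):
    """One sparsification step: keep only the points whose first
    coordinate with respect to B is divisible by p (an index-p
    sublattice), then rescale so the covolume is 1 again."""
    n = B.shape[0]
    Bp = B.copy()
    Bp[0] *= p                  # covolume grows by a factor p
    return Bp / p ** (1.0 / n)  # back to volume 1

B = np.array([[2.0, 0.0], [0.0, 0.5]])           # a volume-1 lattice
Bs = sparsify(B, 5)
print(abs(abs(np.linalg.det(Bs)) - 1.0) < 1e-9)  # True: still volume 1
# A short vector found in Bs maps back into the original lattice after
# multiplying by p**(1/n): this is the mild length blow-up per step.
```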
The sparsification walk looks like jumping to random points on the torus. So after a few sparsification steps, we have a number of points quasi-randomly distributed over the Arakelov class group. The continuous walk sort of blurs out those points. And the hope is that, having enough dots and blurring enough, one finally reaches the uniform distribution. In the remaining part of the talk I would like to explain the following: how to prove that the combination of discrete and continuous random walks on the Arakelov class group indeed tends to the uniform distribution. Let's consider the one-dimensional torus again, now seeing it as a segment. We will now look at distributions on this torus. We want to show that the distribution that comes from the random walk tends to the uniform distribution. In the current image, we have a distribution where all of the weight is at a single point, a single ideal lattice. This is also known as a Dirac distribution. After applying a continuous random walk in the form of Gaussian noise, the distribution changes into this Gaussian distribution. By the way, it does not really matter whether we start with the continuous walk or with the discrete walk. Remember that sparsification consists of taking an ideal sublattice. On the Arakelov class group, the torus here, sparsification has the effect of shifting the distribution by a specific shift. In our random walk, we will sparsify with primes, and each prime has a specific shift. The primes 2, 3 and 5 have, for example, these shifts. Sparsifying by a randomly chosen prime among those three results in a sort of average over all shifts, like this. And if you apply this again and again, this yields the uniform distribution, or something close to it. Taking a distribution and averaging over different shifts can be considered as an operator on the space of distributions. This operator is what we call here the averaging operator. It acts like this.
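This intuition can be simulated on a discretized circle: alternate a Gaussian blur (the continuous walk) with an average over a few fixed shifts (the discrete walk) and watch a Dirac mass flatten out. The shift values and parameters below are purely illustrative, not the actual shifts of the primes 2, 3 and 5:

```python
import numpy as np

N = 360                        # discretized circle
f = np.zeros(N); f[0] = N      # Dirac density (mean 1)

# Continuous walk: circular convolution with a narrow Gaussian.
x = np.arange(N)
d = np.minimum(x, N - x) / N   # distance to 0 on the circle
gauss = np.exp(-(d / 0.02) ** 2)
gauss /= gauss.sum()

shifts = [108, 172, 252]       # three illustrative "prime" shifts

g = f
for _ in range(50):
    g = np.fft.ifft(np.fft.fft(g) * np.fft.fft(gauss)).real  # blur
    g = sum(np.roll(g, s) for s in shifts) / len(shifts)     # average
# The density is now nearly flat, i.e. close to uniform.
print(g.max() - g.min() < 0.02)
```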
Note here that the uniform distribution, or any constant function, is fixed by this operator. In other words, the uniform distribution is an eigenfunction of this operator with eigenvalue 1. The constant function is not the only eigenfunction of this operator. All imaginary exponential functions with integer frequencies, essentially cosine and sine functions, are also eigenfunctions of the averaging operator. By the manipulations here, you can see that the eigenvalue of such an imaginary exponential function is the average of points on the unit circle, and so it has to be a complex number of norm at most 1. We call these eigenvalues lambda n. But imaginary exponential functions are very special: they are the basis functions of Fourier analysis. We can decompose any function on the circle into an infinite sum of those imaginary exponential functions with specific coefficients. This is called the Fourier decomposition. So we are going to do that with our initial distribution, the Gaussian distribution. We write this Gaussian as a specific sum of imaginary exponential functions with integer frequencies. Because the Gaussian distribution is so smooth, only the low-frequency functions really matter; the high-frequency functions only contribute a negligible part, as you see I stop here at frequency 10. The averaging operator has the effect of multiplying all the imaginary exponential functions by their respective eigenvalues, indeed, because these exponential functions are eigenfunctions. Note that the constant function is fixed, and all other eigenvalues are in absolute value strictly smaller than 1. This means that if we apply the averaging operator many times, the eigenvalues get raised to the power k and tend to 0. So eventually we can remove all non-constant terms, and we are left with only the uniform distribution. So essentially, after enough applications of the averaging operator, the resulting distribution is very close to uniform.
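The eigenfunction claim is a two-line computation: shifting the character e_n(t) = exp(2*pi*i*n*t) by s multiplies it by exp(-2*pi*i*n*s), so averaging over shifts multiplies it by lambda_n, the average of those unit-circle points. A sketch with illustrative shifts:

```python
import cmath

shifts = [0.3, 0.47, 0.7]  # illustrative shifts on [0, 1)

def average_shifts(f, t):
    """(Af)(t): average of f over the three shifted copies."""
    return sum(f(t - s) for s in shifts) / len(shifts)

def character(n):
    """The imaginary exponential e_n(t) = exp(2*pi*i*n*t)."""
    return lambda t: cmath.exp(2j * cmath.pi * n * t)

n, t = 4, 0.123
# lambda_n is the average of the unit-circle points exp(-2*pi*i*n*s),
# so its absolute value is at most 1.
lam = sum(cmath.exp(-2j * cmath.pi * n * s) for s in shifts) / len(shifts)
print(abs(average_shifts(character(n), t) - lam * character(n)(t)) < 1e-12)
print(abs(lam) <= 1)
```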
But we know that every sparsification step, every application of the averaging operator, disturbs and deforms the shortest vector problem in some sense. So we would like to have as few applications of this averaging operator as possible. And for that we need the eigenvalues to be sufficiently bounded away from 1 in absolute value. Let's take a closer look at those eigenvalues. Essentially they are averages of numbers on the complex unit circle, a sort of center of gravity. The specific places of those numbers depend on the frequency n, but also on the specific shifts that are associated with the prime numbers involved. To make this eigenvalue lambda n reasonably small, we need a sort of equidistribution of those prime shifts, a sort of regularity of the prime numbers. And that is why we need the extended Riemann hypothesis: to show that those prime shifts are equidistributed on the torus. So, assuming the extended Riemann hypothesis, we know that the absolute values of the eigenvalues are bounded above by a half whenever the frequency is not too large. Therefore we can conclude that we do not need too many repetitions of the averaging operator, and therefore the SVP problem is not disturbed that much, which is exactly what we wanted. Thanks for watching, and see you at the next Crypto.
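With the eigenvalue bound of one half, the number of averaging steps needed is only logarithmic in the target distance to uniform. A back-of-the-envelope helper, my own illustration rather than anything from the talk:

```python
import math

def rounds_needed(lam_bound, eps):
    """Smallest k with lam_bound**k <= eps: roughly how many averaging
    steps push every non-constant Fourier term below eps."""
    return math.ceil(math.log(1 / eps) / math.log(1 / lam_bound))

# With eigenvalues bounded by 1/2, reaching distance about 1e-12 from
# uniform costs only a few dozen steps.
print(rounds_needed(0.5, 1e-12))  # 40
```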