Hey, my name is Wessel van Woerden, and thank you for watching this video. In this video I will be discussing our recent work on NTRU fatigue: how stretched is overstretched? This is joint work with Léo Ducas. So let's get started with an overview. NTRU is a lattice-based public-key cryptosystem of which many variants exist, one of which, simply known as NTRU, is a NIST post-quantum cryptography finalist. Until recently, lattice reduction attacks were believed to behave similarly on NTRU as on Ring-LWE. However, for large, overstretched moduli q, lattice reduction attacks have been shown to behave even better. The main question of our work is to understand when we go from the standard, understretched regime to this overstretched regime, as that is currently unclear. So our contributions are: we explain precisely how lattice reduction breaks overstretched NTRU, and we predict precisely when lattice reduction attacks break overstretched NTRU. So what exactly is the NTRU problem? First, we have a secret key that consists of small elements f and g in some ring R. Then we have a public key h given by g times f^{-1} modulo q, for some modulus q, assuming that f is invertible. Throughout this presentation you can assume that R equals Z[x] quotiented by x^n - 1, and that f and g are just polynomials with coefficients in {-1, 0, 1}. Now the NTRU problem asks: given the public key h, recover the secret key (f, g), or any rotation of it, as that is essentially equivalent. Alternatively, given the public key, you could just ask to find any short pair (a, b) such that h times a equals b mod q. Note that the secret key satisfies this, but any short pair might already be enough to break the cryptosystem. To apply lattice reduction attacks, we first need to define the so-called NTRU lattice. This is exactly the lattice consisting of all pairs (a, b) such that h times a equals b mod q.
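As a concrete illustration of this lattice, here is a minimal sketch (my own, in Python with NumPy, not from the paper) that builds the standard 2n x 2n basis and checks membership, in the ring Z[x]/(x^n - 1):

```python
import numpy as np

def rot_matrix(poly, n):
    """Circulant matrix of multiplication by poly in Z[x]/(x^n - 1):
    row i holds the coefficients of x^i * poly."""
    M = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j, c in enumerate(poly):
            M[i, (i + j) % n] = c
    return M

def ntru_basis(h, q, n):
    """Row basis of {(a, b) : h*a = b mod q}: rows (x^i, x^i * h),
    plus rows (0, q * x^i) that account for the reduction mod q.
    Block upper-triangular, so the determinant is q^n."""
    H = rot_matrix(h, n)
    I = np.eye(n, dtype=int)
    Z = np.zeros((n, n), dtype=int)
    return np.block([[I, H], [Z, q * I]])

def in_lattice(v, h, q, n):
    """An integer pair (a, b) lies in the lattice iff b - a*h = 0 mod q."""
    a, b = v[:n], v[n:]
    return bool(np.all((b - a @ rot_matrix(h, n)) % q == 0))
```

The secret key (f, g) lies in this lattice since h * f = g mod q, and so do all its rotations (x^i * f, x^i * g).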
So if R has degree n, then this lattice has dimension 2n, and its determinant equals q^n. Now this lattice has two special properties. First, it contains the secret key and all its rotations, and for most moduli q these are unusually short vectors in this lattice. Secondly, these rotations also generate an unusually dense sublattice of rank n inside this 2n-dimensional lattice. The first property gives the best attack for a small modulus q, similar to attacks on unique-SVP, the unusually short vector problem. The second property gives the best attack for a large modulus q, and this is what we call the overstretched regime. Our question is: where is the crossover between these two attacks? We define this as the fatigue point. So, lattice reduction is the process of turning a bad basis, consisting of long and far-from-orthogonal vectors, into a good basis of short and almost orthogonal vectors. Given a basis B that consists of vectors b_0 up to b_{d-1}, we first define the projection pi_i away from the first i basis vectors. This allows us to define the Gram-Schmidt basis, where b_i* is given by the basis vector b_i projected away from the previous basis vectors. Note that we have the invariant that the product of the norms of these Gram-Schmidt basis vectors equals the determinant of the lattice. In the picture, this means that the volume underneath these two log-profile plots must be equal. For a bad basis consisting of large vectors, the profile starts very high and then quickly decreases; for a good basis, the profile is much flatter. To obtain such a good basis, we can use the BKZ algorithm. First we define the so-called projected sublattice between positions l and r: we take the basis vectors b_l up to b_{r-1} and project them away from all previous basis vectors. The BKZ algorithm then iterates over every position kappa.
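The Gram-Schmidt profile and the volume invariant just described can be checked numerically; a small sketch (my own helper, via NumPy's QR decomposition, for a row basis):

```python
import numpy as np

def gs_log_profile(B):
    """Log norms of the Gram-Schmidt vectors b_i* of the row basis B.
    If B^T = Q R with Q orthonormal, then ||b_i*|| = |R[i, i]|."""
    R = np.linalg.qr(np.array(B, dtype=float).T, mode='r')
    return np.log(np.abs(np.diag(R)))

B = np.array([[2, 0], [1, 3]])  # det = 6; b_0* = (2,0), b_1* = (0,3)
profile = gs_log_profile(B)
# Invariant: the profile sums to log det(L), whatever the basis quality.
assert np.isclose(profile.sum(), np.log(6))
```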
At each position kappa, it looks at the block, the projected sublattice between kappa and kappa + beta, tries to find the shortest vector in this projected sublattice, lifts it back to the full lattice, and replaces the basis element there. What this means in practice is that we find this short vector and decrease the profile at that point. And of course, because of the invariant on the total volume underneath the plot, we also get some changes in the rest of the profile. If you repeat this, then for larger block sizes beta you get flatter and flatter profiles. However, the cost of this reduction is exponential in the block size, so we have to be careful with that. For the rest of the presentation, we will account for the complexity of solving the NTRU problem in terms of this block size beta. What's nice is that the behavior of BKZ is pretty well understood for so-called random lattices. We can see that the log profiles almost form a straight line, and the slope can be predicted by the so-called geometric series assumption (GSA). So let us first focus on the understretched regime, where BKZ finds the unusually short secret key. BKZ is expected to find this unusually short vector when its projection onto the last block, at position d - beta, is shorter than the Gram-Schmidt norm there, because then the projection is actually the shortest vector in that block. The blue line here represents the expected length of this projection, and the orange line is the GSA prediction for BKZ with, in this case, a small block size beta. We see that at this position the expected length of the projection is much larger than the Gram-Schmidt norm, so we don't expect BKZ to find it. However, if we increase the amount of reduction, then at some point these lines cross, and at that point we expect to find the projection of this short vector. It is then also very likely that the short vector is lifted all the way back to the full lattice.
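The GSA line just mentioned can be written down explicitly. A sketch, using the standard heuristic formula for the root-Hermite factor delta of BKZ-beta (an assumption, reasonable for beta above roughly 50):

```python
import math

def root_hermite(beta):
    """Heuristic root-Hermite factor delta of BKZ-beta."""
    return ((beta / (2 * math.pi * math.e)) *
            (math.pi * beta) ** (1.0 / beta)) ** (1.0 / (2 * (beta - 1)))

def gsa_profile(d, logdet, beta):
    """GSA: log ||b_i*|| = (d - 1 - 2i) * log(delta) + logdet / d,
    a straight line whose sum equals logdet."""
    ld = math.log(root_hermite(beta))
    return [(d - 1 - 2 * i) * ld + logdet / d for i in range(d)]
```

A larger beta gives a smaller delta, hence a flatter line, matching the flatter profiles described above; the sum of the line always equals the log determinant, as the invariant requires.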
So we recover the unusually short vector, that is, the unusually short secret key. This is better known as the GSA-intersect method, or the 2016 estimate, and BKZ finds the unusually short vector when beta is at least n over log q, up to logarithmic factors. Additionally, you can turn this analysis into an average-case, heuristic analysis, and that gives concrete predictions. If we apply this to our NTRU problem, we can give very precise predictions in the understretched regime, for low q. However, we see that in the overstretched regime, for large q, this prediction does not match, and the fatigue point already lies at quite low values of q. Okay, so let us now consider the second attack. Recall that the NTRU lattice has a dense n-dimensional sublattice, and our question is when BKZ finds this n-dimensional sublattice, in some sense. Kirchner and Fouque tried to answer this using the following Pataki-Tural lemma, which says that for any n-dimensional sublattice we have the following constraint: the determinant of this sublattice is at least the product of the n smallest Gram-Schmidt norms of the full lattice. To turn this into an attack, we also use the fact that we know quite a good basis of our lattice. Using only public information, we can create a basis of which the first n Gram-Schmidt vectors have norm q and the last n Gram-Schmidt vectors have norm 1, which gives a kind of Z-shape. Now, if we apply reduction to this basis, then what we see in practice is that the flat parts stay the same, but in the middle we get a slope, and this slope can again be predicted by the geometric series assumption. So if we upgrade our geometric series assumption, we get a kind of Z-shaped geometric series assumption. So what does the Pataki-Tural lemma say in combination with this Z-shape? Well, first, the determinant of this dense rank-n sublattice can be represented by the area of a rectangle.
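The 2016 estimate described above can be scripted directly: scan beta and return the first block size for which the projected secret is expected to beat the Gram-Schmidt norm at position d - beta under the GSA. This is my own simplified sketch; it ignores the Z-shape and lower-order terms, and the parameter values in the usage note below are illustrative:

```python
import math

def root_hermite(beta):
    """Heuristic root-Hermite factor delta of BKZ-beta."""
    return ((beta / (2 * math.pi * math.e)) *
            (math.pi * beta) ** (1.0 / beta)) ** (1.0 / (2 * (beta - 1)))

def estimate_2016(d, logdet, log_secret_len):
    """Smallest beta with sqrt(beta/d) * ||s|| <= ||b*_{d-beta}||,
    the Gram-Schmidt norm at position d - beta under the GSA."""
    for beta in range(50, d + 1):
        ld = math.log(root_hermite(beta))
        lhs = log_secret_len + 0.5 * math.log(beta / d)
        rhs = (2 * beta - d - 1) * ld + logdet / d  # GSA line at i = d - beta
        if lhs <= rhs:
            return beta
    return d
```

For instance, for a 2n-dimensional NTRU lattice with determinant q^n, one would call estimate_2016(2 * n, n * math.log(q), log of the secret length); a larger modulus q makes the secret comparatively shorter and lowers the required block size.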
On the right-hand side, we have the product of the n smallest Gram-Schmidt norms, which are just the last n Gram-Schmidt norms given the Z-shape, and so that is just the area underneath this part of the plot. But now, if we increase the reduction, then at some point this inequality cannot hold anymore. So something must have happened, and Kirchner and Fouque reason that at such a point we must have found the dense rank-n sublattice in some way; note that this is quite an unclear statement. If you run this analysis, then Kirchner and Fouque obtain that BKZ finds the dense sublattice in some way when the block size beta is at least n over log^2 q. Note that we now have this log^2 q instead of just log q, so for growing q, at some point this attack must become better than the first attack. And indeed, in practice this breaks several FHE schemes that use very large moduli q. If you run the analysis for the fatigue point, then for ternary f and g, the fatigue point is supposed to lie at about n^2.783. But note that this is just an upper bound, as the analysis is a worst-case analysis, and we also have no clue what is hidden inside the little o(1) in the exponent. So there are a few problems with this method. First, we have no clue how BKZ actually finds this dense sublattice vector, only that something must have happened. Secondly, it only gives an upper bound on the fatigue point. And also, because the lemma is quite a worst-case statement, we can give no concrete predictions using it, and it is far from the actual practical behavior that we see. So, given that we want to understand how and when BKZ solves the NTRU problem, let's first see what happens in practice. To run experiments, we first need to define what we are looking at, and for this we define some events. First, we define the secret-key recovery (SKR) event at some position kappa.
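Before moving on to the experiments, here is my own rough rendering of the Kirchner-Fouque argument just described: build a Z-shaped GSA profile for the 2n-dimensional NTRU lattice and test whether the Pataki-Tural inequality can still hold. The Z-shape construction and all parameter values here are simplifying assumptions, not the paper's exact model:

```python
import math

def root_hermite(beta):
    """Heuristic root-Hermite factor delta of BKZ-beta."""
    return ((beta / (2 * math.pi * math.e)) *
            (math.pi * beta) ** (1.0 / beta)) ** (1.0 / (2 * (beta - 1)))

def z_shape_profile(n, logq, beta):
    """Z-shaped GSA profile of the 2n-dim NTRU lattice: a GSA slope
    through the middle, clipped to the flat parts at log q and at 0."""
    d = 2 * n
    slope = 2 * math.log(root_hermite(beta))
    mid = [logq / 2 + ((d - 1) / 2 - i) * slope for i in range(d)]
    return [min(logq, max(0.0, x)) for x in mid]

def pataki_tural_fails(n, logq, logdet_dense, beta):
    """True when the det of the rank-n dense sublattice would undercut
    the product of the n smallest Gram-Schmidt norms of the Z-shape,
    i.e. the profile can no longer look like this, so BKZ must have
    found the dense sublattice in some way."""
    profile = z_shape_profile(n, logq, beta)
    return logdet_dense < sum(profile[n:])  # last n = n smallest
```

Scanning beta until pataki_tural_fails flips to True reproduces the flavor of the Kirchner-Fouque condition: for a heavily overstretched modulus it flips at a much smaller block size than the first attack needs.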
The SKR event triggers when a vector as short as the secret-key vector, so the secret-key vector or one of its rotations, is inserted into the basis at position kappa during BKZ. Secondly, we have the so-called dense sublattice discovery (DSD) event, which triggers if a dense sublattice vector, one from the sublattice generated by the secret key but longer than the secret key, is inserted into the basis at position kappa. We ran BKZ on the NTRU lattice with n equal to 127 for different moduli q, and we looked at the block size at which the instance was first solved, whether by secret-key recovery or by dense sublattice discovery. We see that in the understretched regime we mostly see SKR events, while in the overstretched regime we mostly see these DSD events. In terms of positions, the SKR events mostly happen at around the dimension minus beta, exactly as predicted by the first attack. The DSD events, in contrast, happen at much more central positions, a bit below n. Around the fatigue point, we see both kinds of events: at high positions mostly SKR events, at low positions mostly DSD events. What's important to note is that after such a DSD event, we saw in practice that BKZ quickly recovers the full secret key. But even if this does not happen, we explain in our paper how, given such a dense sublattice vector, you can recover the secret key with much more efficient algorithms. So we have observed in practice that, in the overstretched regime, BKZ solves the NTRU problem by first finding a dense sublattice vector at some position kappa. Let's try to turn this into an estimate. For such a dense sublattice vector inserted at position kappa, note that it must be generated by the first kappa + beta basis vectors.
So why don't we just assume that this vector is indeed the shortest vector that is generated by these first kappa + beta basis vectors and that also lies in the dense sublattice? This is exactly our estimate. Following the same methodology as the 2016 estimate, we then say that BKZ triggers this DSD event at position kappa if the projection of this particular dense sublattice vector is shorter than the Gram-Schmidt norm at position kappa. Note that the norm of this vector, because we assume it is the shortest one, is given by the first minimum of the intersection between the dense sublattice and the lattice spanned by the first kappa + beta basis vectors. To estimate this first minimum, we have the Minkowski bound, which bounds the first minimum in terms of the determinant of the lattice. So in our case, we need to estimate the determinant of this intersection. To achieve this, we made a generalization of the Pataki-Tural lemma, which says that for any n-dimensional sublattice L' we can bound the determinant of the intersection between this dense sublattice and the lattice spanned by the first basis vectors: it is always bounded by the determinant of the dense sublattice multiplied by some quantity that depends on the Gram-Schmidt norms. To see that this generalizes the Pataki-Tural lemma, just take the case k equal to 0; then in the proof this extra factor equals 1, and we recover exactly the old Pataki-Tural lemma. So we can analyze when BKZ inserts such a dense sublattice vector at any position kappa, and, optimizing over this position, we find that the best case lies at position around n - beta/2. What we find is that BKZ finds a dense sublattice vector at position n - beta/2 when beta is at least n over log^2 q. So in conclusion, asymptotically, we still get essentially the same result.
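For the first-minimum step, a one-line helper (my own, using the Gaussian-heuristic form rather than the exact Minkowski constant) that estimates log lambda_1 of a rank-r lattice from its log determinant:

```python
import math

def gh_log_lambda1(rank, logdet):
    """Gaussian-heuristic / Minkowski-style estimate:
    lambda_1 ~ sqrt(rank / (2*pi*e)) * det(L)**(1/rank)."""
    return 0.5 * math.log(rank / (2 * math.pi * math.e)) + logdet / rank
```

In the estimate above, this would be applied to the intersection of the dense sublattice with the span of the first kappa + beta basis vectors, with the determinant of that intersection bounded via the generalized Pataki-Tural lemma.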
However, if we zoom in closer and take q equal to n^l, the size of our secret-key entries about n^s, and beta linear in n, then we do actually see some improvement: with the old Kirchner-Fouque estimate we got that beta must be at least 8s over l^2 (as a fraction of n), and we improve this with a plus one in the denominator. What this means is that the fatigue point, instead of lying at n^2.783, now lies at n^2.484. But more importantly, because our analysis explains precisely what happens, we can turn it into a heuristic average-case analysis and use this to make concrete predictions. We see that now, instead of only giving good predictions in the understretched regime, we can also give very concrete and precise predictions in the overstretched regime. Given that we now understand both the understretched and the overstretched regime, we can also easily make predictions for the fatigue point. For ternary secrets f and g, the concrete fatigue point seems to lie at around 0.004 * n^2.484, and note that this fully explains why the fatigue point already lies very low in the experiments that we did. However, because the exponent is still rather large, this fatigue point lies much higher than the NIST parameters being used, so all the schemes in use today indeed lie in the understretched regime. What's important to note is that in the overstretched regime, the security really depends on the volume of this dense sublattice. If you have a normal or random ternary key, then the volume of this dense sublattice can vary a lot; even the security in terms of the block size can vary from slightly below 30 all the way up to 50. So if you truly want to understand the security in this regime, then just an average-case analysis might not be enough. So, while our analysis captures most of the events that we see in practice, and also allows us to give very concrete predictions that match experiments, there are some events that we cannot fully explain.
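Before turning to those unexplained events: plugging numbers into the fatigue-point fit just mentioned makes the takeaway concrete (the parameter sizes below are illustrative):

```python
def fatigue_q(n):
    """Concrete fatigue-point fit for ternary NTRU keys from the talk:
    q_fatigue ~ 0.004 * n**2.484."""
    return 0.004 * n ** 2.484

# At the experiment size n = 127 the fatigue point is only in the
# hundreds, consistent with the experiments, while a deployed-size
# parameter set such as n = 509 with q = 2048 sits far below its
# fatigue point, i.e. firmly in the understretched regime.
```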
So we see that most events indeed happen around the predicted position and block size, and those are captured by our model. However, some of these DSD events happen at higher positions, and these are not captured by our model. Additionally, we note that the vectors inserted there are much longer than the ones we find at the lower positions; however, their projection onto this block is much, much smaller than what you would expect. The probability of this happening for any single vector is very small; however, there might be a lot of these vectors, and that might account for this probability. And then there is also a very small probability that such a vector is lifted back to a correct long dense sublattice vector, and that is why we call them lucky lifts. In our work we give some starting points on how to run the analysis for these events, but at the same time, in our experiments these events do not happen at any block sizes other than the ones we already predict, so so far they don't seem to matter that much. So let's get to the key takeaways. We can now give concrete predictions for all values of the modulus q, both in the understretched and the overstretched regime. We now fully understand the fatigue point, and while it lies much lower than expected, it still lies well above the NIST parameters. And you have to be wary of the large variance in the volume of this dense sublattice, as this has a large influence on the block size that is required. The code for all experiments is available at this address, and thank you for watching this video. Here you can see the bibliography. Thanks, and I hope to see you soon at the physical conference.