Hello, I am Mélissa Rossi, and I am Huijing Gong. Together we are going to present a joint work with Dana Dachman-Soled and Léo Ducas. We will show that side information can be integrated into the so-called primal lattice reduction attack. This work has a theoretical part, to build and validate a new framework, and it has many applications in cryptanalysis. In this talk, we will mention side-channel attacks, decryption failure attacks, and even structural attacks.

Let us start with the motivation. Consider a learning-with-errors-based cryptographic scheme. To estimate its security, one needs to estimate the cost of the lattice reduction primal attack. This cost is represented by one parameter, the block size of the BKZ algorithm, denoted beta. Once this beta is found, there are many cost models to derive a bit security. We will not go into details here, but most models suggest a multiplicative factor of around 0.3. At the end, we obtain a bit security which represents the amount of work needed for a key recovery.

Let us now consider that same scheme, but from a side-channel perspective. The measurement of some physical parameters gives extra information; typically, we can measure some power consumption traces. Here, one can quantify the side information necessary to recover the full secret key. For example, a key recovery needs at least, say, 100 traces.

Let us now imagine that we are in the middle. We have some side information, for example one or two traces, but it does not allow any key recovery on its own. If we can include this little information given by the trace in the lattice reduction, we can hope to decrease the bit security, and we end up with a hybrid attack. The purpose of our work is precisely to include this little information and compute the new block size beta. This side information will be called a hint. Such integration has been done in ad hoc ways in the literature.
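As a toy illustration of the cost models mentioned above, here is a minimal sketch of a "core-SVP"-style estimate. The constant 0.292 is the usual classical sieving exponent; the exact factor depends on the chosen model, and this is not any specific scheme's estimate.

```python
# Minimal sketch of a core-SVP-style cost model: bit security is roughly a
# multiplicative factor times the BKZ block size beta. The constant 0.292 is
# the common classical value; other models use slightly different factors.
def bit_security(beta, c=0.292):
    return c * beta

print(bit_security(400))   # about 117 bits of work for block size 400
```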
Here, we aim at systematizing it and predicting the security drop for any instance. Our main contribution is twofold. We first propose a framework to integrate side information, along with its Sage implementation. Secondly, we give various examples of applications. We apply it to a side-channel attack where the amount of side information is large, as represented in red here. Conversely, in another example, we derive a small hint by exploiting real-world scheme specifications. Our framework can also be applied to decryption failure attacks: the more failures, the less work.

In the first part, let us present the framework. If you are more interested in practice, you can go directly to the application part at the end of this presentation. In this video, all geometric intuitions will be given in two dimensions.

The bounded distance decoding problem works as follows: given a lattice, here in white, find the lattice point that lies in the green circular area, the red dot. For the purpose of this work, we had to slightly distort this problem. Instead of considering a circular area, we consider an ellipsoid. More precisely, we are given a lattice, represented by the white dots here, a center denoted mu, and a covariance matrix Sigma. The eigenvectors of the covariance matrix give the symmetries of the ellipsoidal distribution. In this problem, we still need to find the red dot. Each instance is defined by three parameters: the lattice, the center, and the covariance. There are some particular cases. For example, when the covariance is the identity, we fall back to bounded distance decoding. When the center is zero and the covariance is the identity, we fall back to the shortest vector problem.

To attack a learning-with-errors (LWE) instance, one usually transforms it into a shortest vector problem (SVP) instance, which can then be attacked with lattice reduction. Let us instead use our distorted bounded distance decoding (DBDD) definition and integrate the hints.
Instead of embedding directly into an SVP instance, we embed the LWE instance into a DBDD instance. This gives us more flexibility to progressively integrate the hints. Finally, we transform the resulting DBDD instance into an SVP instance.

Let me present the first step. The technique is standard. Assume that (e, s) is the solution of an LWE instance. We can construct the following lattice in which (e, s, 1) is a short vector. The mean and covariance are defined from the mean and covariance of the error and the secret, except for the last coefficient, which is fixed to 1 with variance 0. We finally have our DBDD instance.

Let us see the last step geometrically. Assume that we have our DBDD instance and we want to apply lattice reduction. First, we homogenize the instance with the following equations. Then, we isotropize the instance by distorting the lattice to end up with a spherical distribution. Now the issue boils down to the middle part, which is the heart of our work. My colleague Huijing will present the geometric intuitions of this part.

Next, I am going to show the four types of hints formalized in our framework. For a perfect hint, the attacker is given a vector v and knows the inner product of the secret vector with v. Modular hints have a similar form: here, the attacker knows the inner product only modulo some integer, and this integer is known. For approximate hints, the attacker learns a noisy version of the inner product. The last type of hint is a bit special: here, the attacker does not learn a linear relation on the secret vector, but some other information about the lattice. During this talk, I will focus on illustrating the first three types of hints.

Now let us see how these hints get embedded into the distorted bounded distance decoding instance. Recall that in the DBDD problem, we want to find a non-trivial lattice point within the ellipsoid. Let us start with the perfect hint.
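The first step, the standard embedding of an LWE instance into a lattice containing the short vector (e, s, 1), can be checked numerically. This is a minimal sketch with toy parameters, not the framework's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 4, 6, 97                       # toy LWE parameters, for illustration only

A = rng.integers(0, q, size=(m, n))
s = rng.integers(-1, 2, size=n)          # small ternary secret
e = rng.integers(-1, 2, size=m)          # small error
b = (A @ s + e) % q

# Embedding basis (rows generate a lattice containing the short vector (e, s, 1)):
#   [ q*I_m    0     0 ]
#   [ -A^T    I_n    0 ]
#   [  b^T     0     1 ]
B = np.block([
    [q * np.eye(m, dtype=int), np.zeros((m, n), dtype=int), np.zeros((m, 1), dtype=int)],
    [-A.T,                     np.eye(n, dtype=int),        np.zeros((n, 1), dtype=int)],
    [b.reshape(1, m),          np.zeros((1, n), dtype=int), np.ones((1, 1), dtype=int)],
])

# e = b - A s + q*k, where the integer vector k absorbs the mod-q reduction
k = (e - (b - A @ s)) // q
coeffs = np.concatenate([k, s, [1]])     # integer combination of the basis rows
target = np.concatenate([e, s, [1]])
assert np.array_equal(coeffs @ B, target)
print("(e, s, 1) lies in the embedding lattice")
```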
For a simple example, in a two-dimensional lattice, the attacker learns that the sum of all the secret coordinates equals zero, which can be viewed as the inner product of the secret with the vector (1, 1) being equal to zero. Some extra information can be extracted from this inner product relation: the value l is the projected length of the secret vector on v, assuming v is normalized to be a unit vector. We now know that the secret vector s must be contained in a hyperplane orthogonal to v. By intersecting the lattice with this hyperplane, we get a new lattice. We also change the mean and the covariance accordingly, based on the conditional information, and finally we get a new DBDD instance. For the new instance, the dimension of the lattice decreases by 1 and its volume increases by a factor of the norm of v. As a result, by integrating a perfect hint, we get an easier DBDD instance to solve.

Now let us look at the modular hint. Recall that a modular hint has a similar form to the perfect hint: it is an inner product of the secret vector s with v, modulo an integer. For a simple two-dimensional example, say the attacker knows that the difference between the two coordinates of the secret is divisible by 3, which can be formalized as a modular hint. The geometric intuition behind modular hints is very similar to that of perfect hints. Indeed, the modular hint means that the inner product equals l, or l plus or minus k, or l plus or minus 2k, and so on. Each of these possible inner product values contributes a hyperplane on which the secret lattice point may lie. This forms infinitely many hyperplanes that are all orthogonal to v and equally spaced. We can then sparsify the lattice by intersecting it with the union of these hyperplanes. We modify the covariance a little and keep the center mu the same. By integrating the modular hint, we derive a new DBDD instance. This new instance is actually an easier problem.
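The geometric claims can be checked on the talk's two-dimensional toy examples; this sketch is illustrative only:

```python
import numpy as np

# Perfect hint <s, v> = 0 with v = (1, 1) on the lattice Z^2: the points of Z^2
# lying on the hyperplane x + y = 0 are exactly the integer multiples of (1, -1).
v = np.array([1, 1])
basis_new = np.array([[1, -1]])
for t in range(-3, 4):
    assert (t * basis_new[0]) @ v == 0           # every new lattice point obeys the hint

# Dimension drops from 2 to 1; the volume grows from 1 to ||(1, -1)|| = sqrt(2) = ||v||.
vol_new = np.linalg.norm(basis_new[0])
assert np.isclose(vol_new, np.linalg.norm(v))

# Modular hint s_0 - s_1 = 0 (mod 3): only one point out of three in Z^2 survives,
# so the sparsified lattice has index 3 and its volume grows by a factor k = 3.
kept = [(x, y) for x in range(6) for y in range(6) if (x - y) % 3 == 0]
assert len(kept) == 36 // 3
print("perfect hint: volume x sqrt(2); modular hint: volume x 3")
```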
Indeed, the volume of the lattice is increased by a factor of k.

The third type of hint is the approximate hint. The attacker learns a noisy inner product of the secret vector with a known vector; this noise has standard deviation sigma. For a simple example, if the attacker learns that the first coordinate of the secret is around 2, then this is a good case for an approximate hint. For the geometric intuition, given a vector v, we draw the same hyperplane as we did for perfect hints. However, since we only know the inner product value up to some error, the secret may not be on the hyperplane, but it is close to it. To integrate the approximate hint, we keep the same lattice but change the center and covariance of the ellipsoid according to this conditional noisy information. As a result, the new DBDD instance has a smaller ellipsoid and is easier to solve.

That is all for the hints. You can see that each hint affects the dimension and the volume of the lattice and the covariance of the ellipsoid in a predictable way. So let us check how good our predictions are, and how much security is lost by integrating those hints. We implemented our framework in Sage, with three implementations under the same API. The full-fledged version keeps track of the lattice basis and of the center and covariance of the ellipsoid, and can launch a full attack at the end in feasible dimensions; it also provides predictions. The other two are lighter versions: they do not track as much information, but predict the security loss based on the changes of the volume and dimension of the lattice when integrating each hint. The last version is even faster, but with more restrictions. Mélissa will give a demo of this framework implementation at the end.

Now, let us look at our predictions versus experiments. From the figures, you can see that for each type of hint, our prediction is very close to the experimental results.
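The center and covariance update for an approximate hint is the standard conditioning of a Gaussian on a noisy linear measurement. Here is a minimal sketch with toy numbers (not the framework's code), showing that the center moves toward the hint hyperplane and the ellipsoid shrinks:

```python
import numpy as np

# Condition a Gaussian prior N(mu, Sigma) on <s, v> = l + noise, noise ~ N(0, sigma^2).
mu = np.array([0.0, 0.0])
Sigma = np.array([[2.0, 0.0], [0.0, 2.0]])
v = np.array([1.0, 0.0])       # hint on the first coordinate
l, sigma2 = 2.0, 0.5           # "s_0 is around 2", with noise variance 0.5

# Standard conditional-Gaussian update:
#   mu'    = mu + (l - <v, mu>) / (v^T Sigma v + sigma^2) * Sigma v
#   Sigma' = Sigma - (Sigma v)(Sigma v)^T / (v^T Sigma v + sigma^2)
denom = v @ Sigma @ v + sigma2
mu_new = mu + ((l - v @ mu) / denom) * (Sigma @ v)
Sigma_new = Sigma - np.outer(Sigma @ v, Sigma @ v) / denom

assert np.isclose(mu_new[0], 1.6)                                # center moved toward l = 2
assert np.linalg.det(Sigma_new) < np.linalg.det(Sigma)           # ellipsoid got smaller
print("new center:", mu_new, "- covariance determinant:", np.linalg.det(Sigma_new))
```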
For each type of hint, we start from two different dimensions. As the number of hints integrated into the instance increases, the blue curves stay very close to the red curves. Next, my colleague Mélissa will talk about the real-world applications.

Here comes the application part. The types of hints presented by Huijing may seem a little useless for real-world side-channel attacks: it is true that leakages are never linear in the secret. However, with some transformation, one can use them for concrete cases. Let us see how it works on a simple example. Consider a secret coefficient s_i between minus 5 and 5. After a power analysis, an attacker can learn the Hamming weight of s_0, for example. Say the Hamming weight of s_0 equals 2. Then there are only two choices for s_0: either 3 or 5. This is quite significant information, and we can include it with hints. A first modular hint can reduce the possibilities modulo 2, and a second approximate hint can focus the distribution around 4.

This can be generalized to other side-channel attacks. We, for example, use the data of the first attack presented by Bos, Friedberger, Martinoli, Oswald, and Stam at SAC 2018. In this paper, the authors present two attacks. While the second attack was successful, the first attack did not work because the leaked information was too weak for a full key recovery. So it is a perfect use case for our tool. The side channel is a single-trace template attack: the attacker obtains an a posteriori distribution of the secret. For example, he recovers s_0 = 2 with 80% probability, s_1 = 0 with 100% probability, and so on for the other coefficients. Even if the probabilities are very high, it is very expensive to recover the secret key by brute force. However, this information can be transformed into approximate and perfect hints. For the approximate hint, we derive a center and a standard deviation from the a posteriori distribution.
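Deriving the hint parameters from an a posteriori distribution can be sketched as follows; the probabilities below are made up for illustration (only the "s_0 = 2 with 80%" value comes from the example above):

```python
import numpy as np

# An a posteriori distribution on one secret coefficient, as produced by a
# single-trace template attack. Probabilities are illustrative, not real data.
values = np.array([-2, -1, 0, 1, 2])
probs = np.array([0.02, 0.05, 0.08, 0.05, 0.80])   # e.g. "s_0 = 2 with 80% probability"
assert np.isclose(probs.sum(), 1.0)

# The approximate hint on coordinate 0 uses v = e_0, with the distribution's
# mean as the hint's center and its standard deviation as the noise parameter.
center = float(np.sum(probs * values))
stddev = float(np.sqrt(np.sum(probs * (values - center) ** 2)))
print(f"approximate hint: <s, e_0> ~ {center:.2f} with stddev {stddev:.2f}")

# When one value has overwhelming probability, the guess can instead be promoted
# to a perfect hint <s, e_0> = 2: this is the probabilistic attack with guesses.
```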
Finally, our framework allows us to heavily decrease the block size. The unit here is called the bikz; it represents the block size. We also propose a probabilistic attack with guesses: when the probability of an approximate hint is very high, we transform it into a perfect hint. Here we are in a case where there is a lot of side information, and this allows us to drastically decrease the complexity of the lattice reduction.

As a second example, decryption failures also provide side information. Indeed, a decryption failure happens if the scalar product of the ciphertext and the secret is abnormally large. This can be integrated as an approximate hint. With our tool, we can reproduce the work of D'Anvers, Guo, Johansson, Nilsson, Vercauteren and Verbauwhede from PKC 2019, and we obtain similar results. On the x-axis, we have the side information, in other words, the number of failures. On the y-axis, we have the amount of work for the attack. The more failures, the easier the attack gets.

Another application was quite surprising: some hints exist by design. Several real-world schemes like NTRU, LAC or Round5 use fixed-weight ternary secrets. This naturally induces a perfect hint. The integration of these structural hints slightly benefits an attacker: we can see that the block size decreases by 1, 2 or 3. The lattice reduction work remains very high, though.

Let me finish this presentation by demonstrating our tool in its full-fledged version. It needs SageMath 9.0. We first load our framework and build an LWE instance. The tool creates the associated DBDD instance and estimates the attack by computing beta and delta. It estimates a block size of beta = 45. The instance is small, so we can actually run the attack with BKZ: a block size of 46 was enough to recover the secret. Let us do the same with two randomly generated perfect hints. Beta is now 38. We now integrate two random modular hints and approximate hints. Now the block size is estimated at 35.
We run the attack again and end up with a block size of 39. The hints helped to mount a more efficient attack. Our tool is available on GitHub; feel free to use it and improve it. Thanks for watching!