So this is joint work with PG-Studion for Skangar.

On this slide you see a block cipher. This is the plaintext, this is the first round, these are the internal rounds, and this is the last round. We look at some bits which are the output of the first round, and some bits which are the input to the last round, and we find the joint distribution of these two vectors. These are vectors, not single bits. This distribution actually depends on all of these key bits; these are round keys. We are able to find an approximation to this distribution, and we want to use it in a kind of linear cryptanalysis.

If either of these vectors is large, the observation of x and y depends on many key bits from the first and last round keys, so conventional multivariate linear cryptanalysis is not efficient. That cryptanalysis uses a log-likelihood ratio (LLR) statistic, and in this case the observation depends on many key bits, and the distribution itself also depends on some key bits, so the statistic depends on many key bits. To use this statistic you need on the order of 2 to the power of that number of key bits computations, which is too expensive.

Instead, we propose to use a number of projections. We know the distribution of the full vector, so we can compute the distributions of its projections. We use only linear projections, which are easier to work with, and each projection may depend on a smaller number of key bits; that is a quite important fact. So we are able to construct a vector of log-likelihood ratios, one for each projection, and we know the distribution of this vector asymptotically. If the value of the key is correct, this vector is distributed as a multivariate normal vector with parameters which are easy to compute. Otherwise, if the value of these key bits is not correct, it is distributed in this other way, where n is the number of available plaintexts.
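The per-projection LLR idea can be sketched with a small simulation. This is an illustrative toy, not the authors' code: the 4-bit projection size and the biased distribution p are made-up assumptions, with q the uniform distribution corresponding to a wrong key guess.

```python
import math
import random

random.seed(1)

M = 16  # range of a hypothetical 4-bit linear projection (assumption)
# Hypothetical biased distribution p of the projected bits when the
# guessed round-key bits are correct (made up for illustration).
p = [1.0 / M + (0.02 if i % 2 == 0 else -0.02) for i in range(M)]
q = [1.0 / M] * M  # uniform: what we expect under a wrong key guess

def llr(samples):
    """Log-likelihood ratio statistic: sum_j log(p[x_j] / q[x_j])."""
    return sum(math.log(p[x] / q[x]) for x in samples)

n = 5000  # number of available plaintexts; supposed to be large
right = random.choices(range(M), weights=p, k=n)   # correct-key data
wrong = [random.randrange(M) for _ in range(n)]    # wrong-key data

# Asymptotically the LLR is approximately normal, with mean about
# n*D(p||q) for the right key and about -n*D(q||p) for a wrong key,
# so the two cases separate for large n.
print(llr(right) > 0, llr(wrong) < 0)
```

For a vector of projections one computes such an LLR per projection; the talk's point is that each projection touches fewer key bits than the full statistic.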
It is supposed to be large. Now we need to distinguish between two normal distributions, and we use a linear statistic. Fortunately, this linear statistic is a linear combination of the linear statistics for the projections, with weights which can be computed explicitly. And we know the distribution of this statistic: it follows one normal distribution if the key value is correct, and another one if it is not. We use this fact in the cryptanalysis of the block cipher.

So we can reconstruct a number of values of these key bits from the values of the key bits which affect the projections. A value which we want to reconstruct should satisfy this condition, where S is the statistic we constructed and this is some threshold. We devised an algorithm which works over a search tree. Because we know the distribution of this statistic, we can compute the success probability and the number of key candidates left at the end to be brute-forced.

We implemented this attack against full DES. We used 28 ten-bit projections, and we computed the success probability; we took it to be 85%, as in Matsui's attack. The number of 56-bit key candidates to be brute-forced was 2^41.8, and the number of plaintexts is 2^41.8. This may be compared with Matsui's attack: at 85% success probability, the number of key candidates to be brute-forced is 2^43, and the number of plaintexts is 2^43. You see that ours is a little bit better. Another advantage of our method is that we can compute the success probability theoretically; this is usually a difficult problem for this kind of attack.

This is the basic variant of our method, and we have room for improvements. The work is still in progress. Thank you very much.
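The threshold step described in the talk, distinguishing two normal distributions and computing the success probability theoretically, can be sketched as follows. The means, variance, and threshold here are invented illustrative numbers, not the parameters of the DES attack.

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical parameters (illustration only): the combined linear
# statistic S is approximately N(mu_r, sigma^2) under the right key
# and N(mu_w, sigma^2) under a wrong key.
mu_r, mu_w, sigma = 8.0, 0.0, 2.0
t = 4.0  # threshold: keep a key candidate if S >= t

success = 1.0 - Phi((t - mu_r) / sigma)    # right key survives the cut
false_pos = 1.0 - Phi((t - mu_w) / sigma)  # a wrong key survives the cut
# With k unknown key bits, roughly false_pos * 2^k wrong candidates
# remain to be brute-forced at the end.
print(round(success, 3), round(false_pos, 3))
```

Tuning t trades the success probability against the number of surviving wrong candidates, which is how a target like 85% success can be fixed in advance.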