Thank you very much for listening to my talk. I'm Xinxin Gong, and I'd like to talk about our paper comparing large-unit and bitwise linear approximations of SNOW 2.0 and SNOW 3G.

In this paper, we reconsider the relation between a large-unit linear approximation and the smaller-unit and bitwise ones derived from it, a relation which suggests that approximations on large-unit alphabets have advantages over all the smaller-unit or bitwise ones in linear attacks. For SNOW 2.0 and SNOW 3G, however, we found many concrete examples of 8-bit linear approximations, including the best ones, for which a certain one-dimensional, i.e. bitwise, linear approximation has almost the same SEI as the original 8-bit one. That is, each of these 8-bit approximations is dominated by a single bitwise approximation, and thus the whole SEI is not essentially larger than the SEI of the dominating single bitwise approximation. Since correlation attacks can be implemented more efficiently using bitwise approximations rather than large-unit ones, improvements over the large-unit linear approximation attacks are possible for SNOW 2.0 and SNOW 3G. For SNOW 3G, we improve the fast correlation attack by using our newly found bitwise linear masks yielding high correlations.

My talk includes five parts. In the first part, I'd like to talk about the motivation of our research. SNOW 2.0 and SNOW 3G are both members of the SNOW family of stream ciphers, using the classic LFSR-FSM structure. Linear attacks have been widely used to analyze SNOW 2.0 and SNOW 3G, most of them based on bitwise linear approximations. At CRYPTO 2015, Zhang et al. improved the fast correlation attack on SNOW 2.0 by building two-round byte-wise linear approximations of the FSM. Inspired by this work, at FSE 2020, Yang et al. constructed three-round large-unit linear approximations of the FSM of SNOW 3G and launched a fast correlation attack. These results give the impression that large-unit approximations lead to larger SEI and also to better attacks. So the question is: how do the large-unit and the smaller-unit or bitwise linear approximations compare for SNOW 2.0 and SNOW 3G?

Before describing the main work, I will introduce some concepts used in this paper. Definition 1 describes the correlation of a Boolean function f; the correlation is often used to evaluate the efficiency of bitwise linear approximations in linear attacks. Definition 2 describes the correlation of an n-to-m-bit vectorial Boolean function under any given input and output masks. For an n-to-m-bit vectorial Boolean function, given its probability distribution, the SEI (squared Euclidean imbalance) is defined as in Definition 3. The SEI measures the distance between the target distribution and the uniform distribution. In particular, for m = 1, the SEI of F is equal to the squared correlation of F.

For an n-to-m-bit function F, we define F_V for each nonzero m-bit mask V such that F_V(x) = <V, F(x)>, the inner product of V and F(x). Then F can be viewed as an m-bit large-unit linear approximation, and F_V is a bitwise linear approximation derived from F. There is a fundamental fact about the SEI of a distribution: the SEI of a large-unit linear approximation is the sum of the squared correlations of all the bitwise linear approximations derived from it under nonzero masks, as shown in Lemma 1, i.e. Δ(F) = Σ_{V ≠ 0} c(F_V)²; a toy verification of this is sketched below. For the SEI of the probability distribution D_F, we adopt the simplified notation Δ(F) hereafter.
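To make Lemma 1 and the domination phenomenon concrete, here is a minimal sketch in Python that assumes nothing from our actual implementation: F is just a toy 8-bit-to-8-bit function drawn at random, and all names are illustrative. It checks that the SEI of the distribution of F equals the sum of the squared correlations over all nonzero masks, and also reports the largest single squared correlation, which is what a dominating bitwise approximation contributes.

```python
# Toy verification of Lemma 1: Delta(F) = sum over nonzero V of c(F_V)^2.
# F is a random illustrative function, not any approximation from the paper.
import random

n = m = 8                                       # F maps n bits to m bits
random.seed(1)
F = [random.randrange(2 ** m) for _ in range(2 ** n)]

# SEI of the output distribution of F (Definition 3):
#   Delta(F) = 2^m * sum_v (Pr[F = v] - 2^-m)^2
dist = [0.0] * (2 ** m)
for x in range(2 ** n):
    dist[F[x]] += 1.0 / 2 ** n
sei = 2 ** m * sum((p - 2.0 ** -m) ** 2 for p in dist)

def corr(V):
    """Correlation c(F_V) of the bitwise approximation F_V(x) = <V, F(x)>."""
    s = sum(1 if bin(V & F[x]).count("1") % 2 == 0 else -1
            for x in range(2 ** n))
    return s / 2 ** n

sq = [corr(V) ** 2 for V in range(1, 2 ** m)]   # all nonzero masks V
print(sei, sum(sq))   # Lemma 1: the two values agree up to float error
print(max(sq))        # largest single squared correlation; if this is close
                      # to the SEI, one bitwise approximation dominates
```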
From this fact, we directly derive the relation between a large-unit approximation and the smaller-unit or bitwise ones derived from it, as shown in Property 1 and Property 2. Property 1 shows the relation between linear approximations of different sizes, and Property 2 shows the relation between a large-unit linear approximation and any bitwise one derived from it, suggesting that approximations on large-unit alphabets lead to larger SEI. Since the data complexity of a linear attack is inversely proportional to the SEI Δ(F), Properties 1 and 2 seem to suggest that the larger the unit, the better the complexity we can get in a linear attack.

Next, we will introduce the work we've done. The first part is about the large-unit and bitwise linear approximations of the FSM of SNOW 2.0, where we study the byte-wise and bitwise linear approximations respectively. This figure shows the keystream generation phase of SNOW 2.0. SNOW 2.0 follows the LFSR-FSM structure, and the FSM consists of two 32-bit registers, R1 and R2. The FSM update and the keystream output are as shown here, where the S function is a 32-bit-to-32-bit mapping composed of four parallel AES S-boxes followed by the AES MixColumn operation.

We first recap the previous bitwise linear approximations for the two-round FSM of SNOW 2.0. For the two-round FSM, the output bits can be expressed as a function of internal state bits, with the variables and the function f defined as described here. Applying the masks Γ and Λ to two consecutive keystream words, the bitwise linear approximations take the following form. For this approximation, the three best mask tuples (Γ, Λ) are listed in this table. We've recently done more experiments, taking different masks for the keystream words and LFSR states involved and obtaining more bitwise linear masks yielding high correlations.

Next, we recap the byte-wise approximations for the FSM. The general method is to apply the 4-byte masks T and N to two consecutive keystream words by using multiplications over the AES MixColumn field, and then cancel out the nonlinear contributions from the registers by decomposing the whole noise into two sub-noises. Accordingly, the byte-wise linear approximations for the FSM are obtained as follows. To obtain the SEI of the byte-wise linear approximation for any given mask tuple (T, N), we need to compute the SEI of the sub-noises N1 and N2 respectively. Our contribution here is to provide two slightly improved algorithms for the SEI computations of N1 and N2. With these algorithms, the SEI of the whole noise can be derived by convolving the distributions of N1 and N2, as illustrated in the sketch below.

With these two algorithms, we've carried out an extensive search for byte-wise masks. One important observation from our experiments is that the best byte-wise mask tuple given in the previous work is not optimal, and we found two more byte-wise mask tuples which give larger SEI, as shown in Table 2. Tables 1 and 2 list the three best bitwise and byte-wise masks for the FSM of SNOW 2.0. We let F_(T,N) denote the byte-wise linear approximation under the 4-byte mask tuple (T, N), and F_(Γ,Λ) the bitwise linear approximation with the 32-bit mask tuple (Γ, Λ). For each byte-wise and bitwise linear mask with the same number, we've verified that the first coordinate of the byte-wise mask is exactly the bitwise mask, and that they have almost the same SEI, as shown here. That is, each of these byte-wise linear approximations is dominated by a single bitwise approximation.
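As a small illustration of that convolution step, the following sketch derives the distribution, and hence the SEI, of a combined noise N1 ⊕ N2 from the two sub-noise distributions by an XOR-convolution computed with a Walsh-Hadamard transform. The biased 8-bit distributions d1 and d2 are placeholders made up for the example, not the actual sub-noise distributions of the SNOW 2.0 FSM, and the paper's own algorithms are not claimed to be exactly this routine.

```python
# XOR-convolution of two sub-noise distributions via the Walsh-Hadamard
# transform, giving the distribution and SEI of the whole noise N1 xor N2.
# d1 and d2 below are made-up toy distributions, not the paper's.

def wht(d):
    """Unnormalized Walsh-Hadamard transform (returns a new list)."""
    a = list(d)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def xor_convolve(d1, d2):
    """Distribution of X1 xor X2 for independent X1 ~ d1 and X2 ~ d2."""
    t = [u * v for u, v in zip(wht(d1), wht(d2))]
    return [v / len(t) for v in wht(t)]      # inverse WHT = WHT / 2^k

def sei(d):
    u = 1.0 / len(d)
    return len(d) * sum((p - u) ** 2 for p in d)

K = 256                                      # 8-bit noise values
d1 = [1.0 + (0.01 if v == 0 else 0.0) for v in range(K)]
d2 = [1.0 + (0.02 if v == 1 else 0.0) for v in range(K)]
s1, s2 = sum(d1), sum(d2)
d1 = [p / s1 for p in d1]                    # normalize to probabilities
d2 = [p / s2 for p in d2]

# The combined noise is closer to uniform than either sub-noise alone:
print(sei(d1), sei(d2), sei(xor_convolve(d1, d2)))
```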
In our experiments, there are many concrete examples of byte-wise linear approximations for which a certain one-dimensional, i.e. bitwise, approximation has almost the same SEI as the original large-unit one. We know that correlation attacks can be implemented more efficiently using bitwise approximations, so an improvement of the bitwise attack is possible for SNOW 2.0. Accordingly, a bitwise fast correlation attack on SNOW 2.0 has been mounted by using multiple bitwise masks corresponding to those listed in Table 2.

Next, we describe the byte-wise and bitwise linear approximations of the FSM of SNOW 3G. SNOW 3G differs from SNOW 2.0 by introducing a third 32-bit register into the FSM together with a corresponding transformation for updating this register, the S2 function. The FSM update and the keystream output are as shown here. The S2 function is a 32-bit-to-32-bit mapping composed of four parallel 8-bit-to-8-bit substitutions followed by the AES MixColumn operation.

We first describe the bitwise linear approximations for the three-round FSM of SNOW 3G. Similarly, for the three-round FSM, the output bits can be expressed as a function of internal state bits, with the variables and the function f described here. Generally, we consider applying the linear masks Φ, Γ and Λ to the keystream words at three consecutive time instants, and then cancelling out the nonlinear contributions by decomposing the whole noise into four sub-noises, E1 to E4. Accordingly, the bitwise linear approximations take the following form, and the correlation for any given masks (Φ, Γ, Λ) is obtained according to the Piling-up Lemma. What we should do is to find (Φ, Γ, Λ) such that the corresponding correlation is as large as possible. To this end, we need to compute the correlations of the four sub-noises E1, E2, E3 and E4 for given masks.

First, consider the computation of the correlations of E1 and E2. Note that E1 and E2 have the same form but different input and output linear masks. From their expressions, a certain type of composition function is derived, denoted by G. The literature [GZ20] has provided a constant-time algorithm for computing the correlation of G under any given masks. The general idea is to divide the 32-bit values into four 8-bit values according to the specific structure of the S-box, pre-compute and store some useful matrices independent of the input and output masks, and then compute the correlation for any given masks by matrix manipulations using the pre-computed matrices.

Next, consider the computation of the correlations of E3 and E4. Note that E3 is closely related to the modular addition with three inputs, whose correlation can be computed exactly in constant time by the method of [NW06], doing 32 matrix multiplications of small size; we will skip the details. As for the noise E4, its correlation can be obtained through four LAT lookups, which is of course a constant-time procedure; a toy sketch of the LAT and Piling-up computations is given at the end of this part.

With these constant-time algorithms for computing the correlations of the four sub-noises, we can carry out an extensive search for masks (Φ, Γ, Λ) which yield high correlations. In this paper, we use a search strategy attempting to find some potentially good linear masks based on some observations about the correlations. We will skip the details and give only the search results. Following the general procedure of the fast correlation attack, we propose an attack with the linear mask given in the table, improving the previous results based on byte-wise linear approximations.
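Of these correlation computations, the LAT lookup and the Piling-up Lemma are the easiest to show in code. The sketch below uses a toy 4-bit S-box in place of the real 8-bit substitutions of SNOW 3G, so the S-box, masks and values are purely illustrative: it computes single-S-box correlations as LAT entries and, under the usual independence assumptions, multiplies them as the Piling-up Lemma prescribes.

```python
# Correlation of masked S-box approximations via LAT lookups, combined with
# the Piling-up Lemma. The 4-bit S-box is a toy stand-in, not SNOW 3G's.

SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def parity(x):
    return bin(x).count("1") & 1

def lat_corr(a, b, sbox):
    """Correlation of <a, x> = <b, S(x)>, i.e. a single (scaled) LAT entry."""
    s = sum(1 if parity(a & x) == parity(b & sbox[x]) else -1
            for x in range(len(sbox)))
    return s / len(sbox)

def piling_up(corrs):
    """Piling-up Lemma: the correlation of the XOR of independent
    approximations is the product of their correlations."""
    c = 1.0
    for v in corrs:
        c *= v
    return c

# Chain two S-box approximations through the intermediate mask 0x5:
c1 = lat_corr(0x7, 0x5, SBOX)
c2 = lat_corr(0x5, 0x9, SBOX)
print(c1, c2, piling_up([c1, c2]))
```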
Next, we focus on the byte-wise linear approximations. The general procedure for building the byte-wise approximations is similar to that for building the bitwise ones. First, apply the 4-byte masks Q, T and N to three consecutive keystream words by using multiplications over the AES MixColumn field, and then cancel out the nonlinear contributions by decomposing the whole noise into four sub-noises, all of which are 8-bit variables. The SEI computations are mostly the same as in the SNOW 2.0 case; we will sketch some ideas on how to compute the above noise distributions. This table shows the best byte-wise mask tuples we obtained for the three-round FSM of SNOW 3G.

Tables 3 and 4 list the three best byte-wise and bitwise masks for SNOW 3G. Similarly to the SNOW 2.0 case, for each byte-wise and bitwise linear mask with the same number, we've verified that the first coordinate of the byte-wise mask is exactly the bitwise mask, and that the SEI of the byte-wise mask is almost equal to the squared correlation of the corresponding coordinate bitwise mask. That is, each of these byte-wise linear approximations is dominated by a single bitwise approximation, and there are many such cases in our experiments. Since correlation attacks can be implemented more efficiently using bitwise approximations rather than byte-wise ones, improvements over the byte-wise linear approximation attacks are achieved.

In summary, in this paper we compare the byte-wise and bitwise linear approximations of SNOW 2.0 and SNOW 3G and find many concrete examples of byte-wise linear approximations for which a certain bitwise linear approximation has almost the same SEI as the original 8-bit one. That is, each of these byte-wise approximations is dominated by a single bitwise approximation. Based on our newly found bitwise masks, we propose a bitwise fast correlation attack on SNOW 3G, slightly improving the previous attack based on byte-wise linear approximations.

That's all of my presentation. Thanks for listening.