The title of the talk is "Multiple Linear Cryptanalysis Using Linear Statistics". I'm Jung-Keun Lee, and this is joint work with Woo-Hwan Kim. We present an improved and extended approach to multiple linear cryptanalysis that exploits dominant and statistically independent linear trails. We present threshold-based, rank-based, and combined Algorithm 1 and Algorithm 2 style attacks. We provide formulas for the success probability and the advantage in terms of the data size, the correlations of the trails, and the threshold parameter, under some hypotheses on the statistical independence of the wrong-key and right-key statistics. We then apply the method to full DES, exploiting four linear trails. We get an attack with complexity comparable with existing linear attacks on DES, and we provide strong experimental verification.

We start with the introduction and preliminaries. Then we present our multiple linear attacks, and then we apply our method to DES. Then we consider generalizing our framework to a more general setting where we exploit non-dominant and dependent linear trails. Finally, we conclude.

Throughout this work, we consider a key-alternating iterative block cipher that comes from a long-key cipher, consisting of rounds where each round is the composition of a round-key XOR with a round permutation. By a linear trail Γ, we mean a sequence of linear masks γ_i, where γ_i and γ_{i+1} are the initial and final masks for the round permutation f_i. By a linear hull, we mean the set of linear trails with initial mask γ and final mask γ'. In linear cryptanalysis, we consider several linear correlations: the correlation of a vectorial Boolean function with respect to a pair of masks, and the correlation of a linear hull for a given long key. C_Γ denotes the key-independent correlation of a trail. By ε̂, we mean the empirical correlation for data that consists of plaintext-ciphertext pairs.
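To make the notion of the empirical correlation ε̂ concrete, here is a minimal Python sketch. The masks and the trivial "cipher" are illustrative toys of mine, not from the talk: ε̂ is the fraction of plaintext-ciphertext pairs on which the masked input and output parities agree, minus the fraction on which they disagree.

```python
def parity(mask, x):
    # Parity <mask, x> of the bits of x selected by mask
    return bin(mask & x).count("1") & 1

def empirical_correlation(pairs, gamma, gamma_prime):
    # eps_hat = (#pairs with masked parity 0 - #pairs with parity 1) / N
    N = len(pairs)
    s = sum((-1) ** (parity(gamma, p) ^ parity(gamma_prime, c))
            for p, c in pairs)
    return s / N

# Toy "cipher" (not DES): the ciphertext copies plaintext bit 0, so the
# masks gamma = gamma_prime = 1 have correlation exactly 1.
pairs = [(x, x & 1) for x in range(16)]
print(empirical_correlation(pairs, 1, 1))  # prints 1.0
```

With an unrelated output mask such as gamma_prime = 2, the same data gives correlation 0, as expected for an unbiased masked parity.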
One of the most well-known facts in linear cryptanalysis is that the correlation of a linear hull is the sum of the parity-adjusted correlations of its trails, where the sign bit is called the parity bit and is determined by the trail Γ and the long key. By this fact, if Γ is a dominant trail, the correlation of the linear hull is very close to the parity-adjusted correlation of the trail Γ, regardless of the long key. Unless mentioned otherwise, we assume that Γ and the Γ_j are dominant trails, that the data size N is much less than 2^n, where n is the block size, that the correlations of the trails are much greater in absolute value than 2^(-n/2), and that the correct key and long key are fixed.

Matsui's classical Algorithm 1 uses a single dominant trail Γ and tries to recover the parity bit. Given the sampled data D, we compute the empirical correlation ε̂ and determine the parity bit to be zero if and only if the empirical correlation and the correlation of the trail have the same sign. All linear attacks require suitable hypotheses regarding the distributions of the right-key statistic and the wrong-key statistic to theoretically estimate their success probability and attack complexity. The right-key hypothesis for Algorithm 1 is that the empirical correlation times (-1)^(β*) can be regarded as a random variable as the data sets D vary, and that its probability distribution is very close to the normal distribution with mean the trail correlation ε and variance 1/N. This hypothesis is based on the aforementioned fact on the correlation of the linear hull and on the fact that binomial distributions are approximated by normal distributions. Under the hypothesis, the success probability of the attack is estimated as Φ(√N · |ε|), where Φ is the cumulative distribution function of the standard normal distribution.
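The estimate Φ(√N · |ε|) is easy to evaluate with only the standard library, using the identity Φ(x) = (1 + erf(x/√2))/2. The correlation value below is illustrative, not a DES figure from the talk.

```python
from math import erf, sqrt

def phi(x):
    # CDF of the standard normal distribution via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def success_probability(N, eps):
    # P_S = Phi(sqrt(N) * |eps|) under the right-key hypothesis
    return phi(sqrt(N) * abs(eps))

# Illustrative correlation: with N = 1/eps^2 the argument is 1, so
# P_S = Phi(1), about 0.8413.
eps = 2.0 ** -10
print(success_probability(1 / eps ** 2, eps))
```

This also shows the familiar trade-off: quadrupling N adds one unit to the argument's scale, pushing the success probability toward 1.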
Matsui's Algorithm 2 uses a dominant trail Γ for the inner cipher obtained by removing the outer rounds, and tries to recover a parity bit and some outer-round key bits. Given data D, we use the statistic (-1)^β times the empirical correlation to pick out or rank the candidates for the correct parity bit and the correct outer key. Here, by outer key, we mean the bit string κ obtained by concatenating the outer-round key bits involved in the outer-round computation of the masked XOR. So the masked XOR can be expressed as a function of κ, the plaintext, and the ciphertext, and the empirical correlation is computed from κ and D in this way. After picking out or ranking candidates, we proceed with trial encryption. The right-key hypothesis for Algorithm 2 is that (-1)^(β*) times the empirical correlation has the normal distribution with mean ε and variance 1/N. The wrong-key hypothesis is that the empirical correlation for a wrong key has the normal distribution with mean 0 and variance 1/N. For rank-based attacks, we assume some hypotheses on independence so that the order statistics of the wrong-key statistics and the right-key statistic are independent. From these hypotheses, we get the success probability and the advantage.

We now consider Algorithm 2 style multiple linear attacks. The setting is given here. We have m dominant, statistically independent trails Γ_j. Let ε_j be the correlation of the trail Γ_j for each j, and let ε be the square root of the sum of the squares of the ε_j. The objective of the attack is, given data D, to recover κ* and β*, where κ* is the correct value of the outer key κ. The outer key is the bit string obtained by combining the κ_j, removing redundancy. Here, κ_j is the bit string obtained by concatenating the outer-round key bits involved in the outer-round computation for the trail Γ_j.
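The quantity ε = sqrt(Σ_j ε_j²) measures the combined strength of the m trails, and the data requirement scales as 1/ε². A quick sketch with made-up correlation values (not the DES trail correlations from the talk):

```python
from math import sqrt

def combined_correlation(correlations):
    # eps = sqrt(sum of squared trail correlations)
    return sqrt(sum(c * c for c in correlations))

# Made-up values: four trails of equal correlation double eps, so the
# data requirement, which scales as 1/eps^2, drops by a factor of 4.
eps_single = 2.0 ** -10
print(combined_correlation([eps_single] * 4) / eps_single)  # prints 2.0
```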
We assume for simplicity that the bits of the κ_j are either identical or independent. And β* is the vector consisting of the correct parity bits. We will use a statistic T that depends on the outer key candidate κ and a candidate vector of parity bits β = (β_j). The statistic is expressed as follows: it is the sum over j of (-1)^(β_j) times the correlation ε_j of the j-th trail times N times the empirical correlation for the trail Γ_j with the outer key candidate κ_j applied.

Now we describe three Algorithm 2 style attacks. One is threshold-based, which we call Algorithm 2-MT. In this threshold-based attack, we pick out the pairs (κ, β) that satisfy the threshold condition. In rank-based attacks, we rank the (κ, β) candidates according to the value of the statistic. In our combined attack, we pick out candidates satisfying the threshold condition and then rank them. It turns out that this combined method gives better advantage than the threshold-based Algorithm 2-MT for success probabilities P_S close to 1.

Algorithm 2 style multiple linear attacks need to consider wrong-key types. For J_o a proper subset of the set of integers from 1 to m, κ is said to have the wrong-key type J_o if the set of indices j such that κ_j = κ_j* is equal to J_o. We denote by W_{J_o} the set of candidates of wrong-key type J_o. For J_o and J_i, (κ, β) is said to have the wrong-key type (J_o, J_i) if κ has the wrong-key type J_o and β has the type J_i. Here, β is said to have type J if the set of indices j such that β_j = β_j* is equal to J; if β has type J, we denote it by β^J. We denote by W_{J_o, J_i} the set of candidates of wrong-key type (J_o, J_i). Before proceeding further, we need to consider multivariate normal distributions. So let μ be an m-dimensional real vector and Σ a positive definite m-by-m real matrix.
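The statistic T(κ, β) = Σ_j (-1)^(β_j) ε_j N ε̂_j(κ_j) described above can be sketched in a few lines. All names and the toy numbers below are illustrative; the empirical correlations are assumed to have been computed already for one outer-key candidate κ.

```python
def statistic_T(beta, eps, eps_hat, N):
    # T(kappa, beta) = sum_j (-1)^{beta_j} * eps_j * N * eps_hat_j(kappa_j)
    return sum(((-1) ** b) * e * N * eh
               for b, e, eh in zip(beta, eps, eps_hat))

# Toy usage with two trails: both parity-bit guesses match the signs of
# the empirical correlations, so every term is positive and T is large.
T = statistic_T(beta=[0, 1], eps=[0.01, 0.02],
                eps_hat=[0.011, -0.019], N=10_000)
print(T)  # close to 4.9
```

A wrong guess of a parity bit flips the sign of its term, which is why large values of T single out the correct (κ, β).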
An m-variate random variable X is said to have a normal distribution with mean vector μ and covariance matrix Σ if it has the following pdf; this distribution is denoted by this symbol. The probability that an m-variate normal random variable satisfies a system of linear inequalities can be expressed in a very simple form, and we will use this formula repeatedly in this work. In this case, Σ is the product of a matrix S with its transpose, and Φ is the CDF of the standard normal distribution.

For Algorithm 2 style multiple linear attacks, we need to consider the distribution of a vector-valued random variable for each wrong-key type, which we denote by X_{J_o}. The hypothesis used for Algorithm 2 style attacks is that this vector-valued random variable has a multivariate normal distribution with the mean vector and covariance matrix given here. The point is that this Σ matrix is diagonal, which means that the component statistics are independent. We denote this by this symbol. We also consider a somewhat stronger hypothesis: for each J_o, we consider the vector-valued random variable whose distribution is determined by the m + u component statistics. Note that the components are right-key statistics and wrong-key statistics. The hypothesis here is that this extended vector-valued random variable also has a multivariate normal distribution.

In the threshold-based Algorithm 2-MT, a candidate is determined to be correct if it satisfies certain threshold conditions. The success probability in this case can be computed as the probability that a normal random variable satisfies a certain linear inequality, and this can be computed very easily. The false positive probability can be computed similarly: we need to consider the probability that the wrong keys of each type satisfy the threshold condition. Here, the symbol k denotes the number of bits in κ.
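The diagonal covariance matrix is what makes these probabilities easy: when the components are independent, the probability that all of them satisfy one-sided linear inequalities factors into a product of one-dimensional Φ values. A minimal sketch (function names are mine, not from the talk):

```python
from math import erf, sqrt

def phi(x):
    # CDF of the standard normal distribution
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def prob_all_below(thresholds, means, stdevs):
    # With a DIAGONAL covariance matrix the components are independent,
    # so P(X_j <= t_j for all j) = prod_j Phi((t_j - mu_j) / sigma_j).
    p = 1.0
    for t, mu, sd in zip(thresholds, means, stdevs):
        p *= phi((t - mu) / sd)
    return p

# Illustrative: three independent standard normal components, each below
# its mean with probability 1/2, so the joint probability is 1/8.
print(prob_all_below([0.0] * 3, [0.0] * 3, [1.0] * 3))  # prints 0.125
```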
So the false positive probability for each type can also be computed as the probability that a normal random variable satisfies a linear inequality, so it can also be computed easily. The overall false positive probability can be computed as the sum of those per-type probabilities, and the advantage looks like this.

In our rank-based attack, we just rank the candidates according to the statistic. The success probability in this case is 1, because it is essentially an exhaustive search. The false positive probability in this case can be computed as the probability that a (κ, β) of each type is ranked higher than the correct value. Here, the false positive probability for each type can again be computed as the probability that a normal random variable satisfies a certain linear inequality, and can be computed easily. The overall false positive probability is the sum of those per-type probabilities, and the advantage is just this number.

In the combined attack, we pick out the (κ, β) satisfying the threshold condition and then rank them according to the statistic. The success probability in this case is the same as in the threshold-based attack, and the false positive probability can be computed in a similar way. Here, the false positive probability for each type can be computed as the probability that a bivariate normal random variable satisfies two linear inequalities simultaneously; this can be estimated numerically or by simulation. The overall false positive probability is the sum of those numbers, and the advantage is this number.

We apply our method to DES. We exploit four linear trails; the details of the trails are as follows. The outer key has 48 bits. We perform our combined attack in the usual manner. We perform experiments using 1,000 keys. For each key, we take data of size 2^42.78. The theoretical and experimental success probabilities are as follows; here, the circles represent the experimental results.
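The per-type false positive probability of the combined attack involves a bivariate normal region cut out by two simultaneous inequalities, which, as noted, can be estimated by simulation. A minimal Monte Carlo sketch; the parameterization via a Cholesky factor is my choice for a self-contained example, not from the talk:

```python
import random

def prob_both_exceed(mu, chol, t1, t2, trials=200_000, seed=1):
    # Monte Carlo estimate of P(X1 > t1 and X2 > t2) for a bivariate
    # normal X = mu + L @ Z, with Z standard normal and L = chol the
    # lower-triangular Cholesky factor of the covariance matrix.
    rng = random.Random(seed)
    (a, _), (b, c) = chol
    hits = 0
    for _ in range(trials):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        x1 = mu[0] + a * z1
        x2 = mu[1] + b * z1 + c * z2
        if x1 > t1 and x2 > t2:
            hits += 1
    return hits / trials

# Sanity check: independent standard normal components with thresholds
# at the means give a true probability of 1/4.
print(prob_both_exceed([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], 0.0, 0.0))
```

Off-diagonal entries in the Cholesky factor introduce correlation between the two components, which is exactly the dependence between the threshold event and the ranking event that the combined attack has to account for.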
The theoretical and experimental advantages are in this figure. We see that the theoretical and experimental results match.

In 2004, Biryukov and others proposed a notable method of multiple linear cryptanalysis. They presented Algorithm 1 and Algorithm 2 style attacks, and for each attack they provide a formula for the advantage in terms of the trail correlations and the data size. Their attacks are rank-based, and the success probability is fixed to 1. But the attacks have limitations: the advantage is not analyzed theoretically for success probability less than 1, and the estimated advantage is not achieved when applied to DES, as shown recently. Later, multidimensional linear cryptanalysis was proposed. It is a very powerful method, but it also has a limitation: it does not yield attacks on DES better than earlier ones, for several reasons. Recently, several notable linear attacks on DES have appeared: multiple linear cryptanalysis using dependent trails, conditional linear cryptanalysis, and analysis using separate statistics. Our attacks have comparable complexities to those, but better advantages with smaller data size. Note that these attacks are somewhat more efficient than Matsui's original attacks.

Now, we would like to explain why our attacks are efficient. The linear statistics we use are separable, and the overhead in adding outer rounds is minimized; the cost is almost the same as with the LLR statistic, up to a constant. In our attack, parity bits are recovered at the same time, so that the advantage is increased a bit; other methods do not consider recovering them. And the multivariate normal distributions we consider allow us to get better estimates of the complexity than using other statistics. Finally, we can generalize our method to exploit non-dominant, dependent trails. We use modified hypotheses on the distributions of the multivariate random variables.
We use multivariate normal distributions with different mean vectors and covariance matrices that need to be precomputed in advance. We perform the same procedures with similar statistics; we use linear statistics with varying coefficients. The success probability and the false positive probability can be computed in the same way for each attack: each probability can be computed as the probability of a region defined by linear inequalities for a multivariate normal random variable.

We conclude here. We presented multiple linear attacks using multiple dominant linear trails. We applied the method to DES to exhibit the validity of the statistical models and to show the effectiveness of the attacks.