Hello everyone, I'm Xiaoyang Dong from Tsinghua University. I'm talking about key-guessing strategies for linear key-schedule algorithms in rectangle attacks. This is joint work with Qin Lingyue, Sun Siwei, and Wang Xiaoyun.

The first part of this talk covers background and motivation. The boomerang attack was proposed by Wagner. It builds a boomerang distinguisher for a cipher E by splitting it into two parts, E0 and E1, each covered by a short differential. Suppose the probabilities of the differentials in E0 and E1 are p and q; then the probability of the distinguisher is (pq)^2. As the boomerang attack works in the adaptive chosen-plaintext-and-ciphertext setting, several improvements converted it into a purely chosen-plaintext attack, known as the amplified boomerang and the rectangle attack, at the cost of reducing the probability by a factor of 2^{-n}. Later, many works studied the probability of the boomerang more precisely, taking the dependency between the two short differentials into account, such as the boomerang switch, the sandwich attack, and the Boomerang Connectivity Table (BCT) and its improvements.

For the key-recovery framework of the rectangle attack, there are three previous models. When performing a related-key rectangle attack on a cipher with a linear key schedule, we find that the quartets must satisfy two equations on the input differences of the distinguisher in order to be right quartets. However, these equations may not hold automatically, and the linear key schedule lets us exploit this. We introduce a trade-off in the rectangle attack by guessing the full k_b (the key bits involved in E_b) and a partial k_f (part of the key bits involved in E_f) to obtain more filters before generating quartets.

We build our trade-off model for rectangle attacks with linear key schedules in Algorithm 1. The first step is to collect data: construct y structures of 2^{r_b} plaintexts each. For each structure, query the 2^{r_b} plaintexts for encryption under K1, K2, K3, and K4, and store the results in different tables. For each x-bit value of k_x, which is part of the m_b + m'_f guessed key bits, we first initialize the key counters.
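As a rough back-of-envelope illustration (my own sketch, not part of the talk), the counting above can be written in a few lines of Python: with y structures of 2^{r_b} plaintexts queried under four keys, the data complexity is D = 4·y·2^{r_b}, and under the standard heuristic the expected number of right quartets is s ≈ (y·2^{r_b})^2 · 2^{-n} · (pq)^2. All parameter values below are illustrative.

```python
import math

def rectangle_estimates(n, r_b, y, log2_pq):
    """Back-of-envelope rectangle-attack estimates (toy sketch).

    n        : block size in bits
    r_b      : log2 of the structure size
    y        : number of structures
    log2_pq  : log2 of p*q for the two short differentials
    Returns (log2 of data complexity, log2 of expected right quartets).
    """
    log2_pairs = 2 * (math.log2(y) + r_b)   # ~ (y * 2^r_b)^2 candidate pairs
    log2_D = 2 + math.log2(y) + r_b         # 4 * y * 2^r_b chosen plaintexts
    log2_s = log2_pairs - n + 2 * log2_pq   # each quartet survives w.p. (pq)^2 * 2^-n
    return log2_D, log2_s

# Illustrative numbers only: a 64-bit block, 2^4 structures of 2^38 texts,
# and pq = 2^-10 give s ~ 1 expected right quartet.
log2_D, log2_s = rectangle_estimates(n=64, r_b=38, y=2**4, log2_pq=-10)
```

Choosing y so that s is around 1 (or a small constant) is exactly the kind of balancing the trade-off model automates.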
For each guess of the m_b + m'_f subkey bits involved in E_b and E_f, we do the following steps. For each P1 in L1, we derive its corresponding P2 by partial encryption and decryption under the guessed key bits. For each P3 in L3, we find its partner P4 similarly: partially encrypt P3, XOR the difference, and then decrypt to get P4. Then we store (P1, P2) in a set S1, indexed by some ciphertext bits and some internal state bits. For (P3, P4) in S2, we also compute the internal states x3 and x4 by partial decryption from the ciphertext bits with some key bits. We then access the hash table to find the matching (P1, P2) pairs and generate the quartets. For each generated quartet, we determine further key bits and increase the corresponding key counters. Finally, we select the top hits in the key counters and guess the remaining key bits to check against the full key exhaustively.

In total, we have to guess m_b + m'_f key bits. Before generating the quartets, we guess the m'_f bits of k_f to gain extra filters, and we guess the x-bit k_x before initializing the key counters to reduce the memory for the counters. The expected number of right quartets is denoted by s. The data complexity is 4y·2^{r_b}. There are three dominating time complexities, T1, T2, and T3, and we mainly trade off among these three. The memory complexity includes the key counters and the data structures.

Next, we apply our model to key recovery on SKINNY. SKINNY was proposed at CRYPTO 2016; it adopts the TWEAKEY framework and an SPN round function. There are several automatic tools for differential or boomerang search, and two recent models combine the MILP method and CP/SAT solvers to determine good boomerang distinguishers. For SKINNY, we consider the boomerang distinguisher and the key-recovery phase together in a unified automatic tool. Based on the previous automatic tools, we build a new model to optimize the configurations for our trade-off model: we have to determine the configurations m_f, m'_f, h_f, and h. The objective is to minimize T1, T2, and T3. The first step is to model the propagation of cells with known differences in E_f.
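The table-and-counter pattern of Algorithm 1 above can be illustrated with a small Python sketch. This is purely a toy: the "filter bits" and the key-suggestion rule below are illustrative stand-ins, not SKINNY's actual partial encryption. Pairs from one side of the rectangle are indexed by their filter bits in a hash table, pairs from the other side probe it, and every match (a candidate quartet) votes for a key value.

```python
from collections import Counter, defaultdict

def index_pairs(pairs, filter_bits):
    """Hash the (P1, P2)-side pairs by their filter bits."""
    table = defaultdict(list)
    for pair in pairs:
        table[filter_bits(pair)].append(pair)
    return table

def vote_for_keys(table, probe_pairs, filter_bits, suggest_key):
    """Probe with (P3, P4)-side pairs; each match increments a key counter."""
    counters = Counter()
    for pair in probe_pairs:
        for partner in table.get(filter_bits(pair), []):
            counters[suggest_key(partner, pair)] += 1
    return counters

# Illustrative run: filter on the low nibble of the XOR of a pair, and
# suggest a key byte from the XOR of the two first elements (toy rules).
filt = lambda p: (p[0] ^ p[1]) & 0xF
key = lambda a, b: (a[0] ^ b[0]) & 0xFF

table = index_pairs([(1, 2), (3, 7)], filt)
counters = vote_for_keys(table, [(5, 6), (8, 12)], filt, key)
# (5, 6) collides with (1, 2)  (both filter to 3) -> votes for key 1 ^ 5 = 4
# (8, 12) collides with (3, 7) (both filter to 4) -> votes for key 3 ^ 8 = 11
```

The point of the trade-off is that the more filter bits are available before this matching step, the fewer wrong quartets are generated, which shrinks the counting time T and the counter memory.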
We start from the end of the distinguisher and propagate from X_{27} to the end of the cipher. We introduce inner variables such as DXfix and DWfix to mark the known differences in X_r and W_r. The effect of SubCells and ShiftRows is determined by one set of equations, and the effect of MixColumns by another. We also have to model the cells that can be used to filter quartets in E_f. We introduce inner variables DXfilter to denote the cells in X_r which can be used as filters. For example, if a cell with a fixed difference enters the SubCells operation and its output difference is unfixed, then this cell can be used as a filter. We also introduce DWfilter to denote the filters in W_r. For example, suppose there are three fixed differences in a column of W_r and, after MixColumns, only one cell is fixed; when computing the inverse of MixColumns, we obtain two extra cells in W_r with fixed differences as filters.

Before generating quartets, we have to guess partial key cells to gain more filters, so we define the DXguess and DWguess variables. The cells in W_r that need to be known include the known cells from X_r and the filters in W_r; the cells in X_{r+1} that need to be known include the known cells from W_r and the filters in X_{r+1}. We also model the advantage h. The objective is to minimize T1, T2, and T3.

We find new boomerang distinguishers for several versions of SKINNY. For example, our distinguisher for SKINNY-128-384 covers 23 rounds, which is shorter than the previous 24- and 25-round distinguishers, yet we can perform a 32-round key-recovery attack, which gains two more rounds. We add 4 rounds of E_b and 5 rounds of E_f around the 23-round distinguisher. At W_0, we collect the plaintext structures; r_b is 12 cells and m_b is 18 cells, while r_f is 16 cells and m_f is 24 cells. With our automatic tool, we determine the key cells that should be guessed in advance as k'_f and the filters obtained as h_f. With s as 1, h as 40, and x as 208, we get a balance among T1, T2, and T3. The final time complexity is about 2^{355}.
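The known-difference propagation through MixColumns described above can be sketched as a minimal model. I assume SKINNY's binary MixColumns matrix here; the function name is mine, not the tool's. The rule is simple: an output cell of the column has a known difference exactly when every input cell it depends on (the nonzero entries in its row of the matrix) has a known difference.

```python
# SKINNY's MixColumns is the binary matrix M acting on a column of 4 cells.
M = [
    [1, 0, 1, 1],
    [1, 0, 0, 0],
    [0, 1, 1, 0],
    [1, 0, 1, 0],
]

def propagate_known(known_in):
    """Output cell i of MC has a known difference iff every input cell j
    with M[i][j] = 1 has a known difference (XOR of known diffs is known)."""
    return [all(known_in[j] for j in range(4) if M[i][j])
            for i in range(4)]

# If every input cell is known, so is every output cell; if cell 2 is
# unknown, every output row that touches cell 2 becomes unknown too.
fully_known = propagate_known([True, True, True, True])
partly_known = propagate_known([True, True, False, True])
```

The DXfilter/DWfilter variables of the model play the complementary role: they record where a fixed difference meets an unfixed one, since each such cell yields a filtering condition on the quartets.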
We also applied our model to ForkSkinny, Deoxys-BC, and GIFT. Our model can further be applied in the single-key setting, and we give an example application to Serpent. Finally, we compare the key-recovery models as follows. Among the previous models: in Attack 1, the m_b and m_f key bits are guessed in advance; in Attack 2, no key bits are guessed; in Attack 3, only the m_b bits in E_b are guessed. Our model guesses m_b and a partial k_f. Thanks for your attention.