Hello everyone. Welcome to the presentation of the paper "A Bit-Vector Differential Model for the Modular Addition by a Constant". This is joint work with Adrián Ranea, Mahmoud Salmasizadeh, Javad Mohajeri, Mohammad Reza Aref, and Vincent Rijmen. In this work, we study the modular addition by a constant. But where does this operation appear? The answer is ARX primitives, which are built from modular additions, rotations, and XORs. ARX designs appear in block ciphers, stream ciphers, and hash functions, but here we mainly pay attention to block ciphers. We evaluate the security of constant addition against differential cryptanalysis, which is a powerful tool to analyze ARX ciphers. Since you may not be familiar with differential cryptanalysis, let's briefly look at the concept of the attack. The idea is that the attacker encrypts pairs of inputs (x1, x2) with a fixed difference alpha, so the attacker obtains the distribution of the output difference beta. For a random function, this distribution is almost uniform, but for other functions, some specific betas are more probable. Differential cryptanalysis exploits a differential, a pair (alpha, beta), that propagates with high probability. To show that an ARX cipher is secure against differential cryptanalysis, one may try the standard search for the differentials with the highest probability and check whether that probability is low enough. But since this search is usually hard and impractical, people normally try to find the longest and most probable differential trails by making some assumptions. Nowadays, the state of the art is automated tools, such as SMT solvers working in the bit-vector theory, which we will discuss later. However, an SMT solver needs a bit-vector differential model for each operation.
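As an aside, the output-difference distribution just described can be computed by brute force for a small toy function. The 8-bit function `toy` below (a constant addition followed by a rotation) is our own illustration, not one of the ciphers from the paper:

```python
from collections import Counter

N = 8
MASK = (1 << N) - 1

def toy(x, c=0x2B):
    """Toy ARX-style function: add a constant, then rotate left by 3."""
    y = (x + c) & MASK
    return ((y << 3) | (y >> (N - 3))) & MASK

def diff_distribution(f, alpha):
    """Count each output difference beta = f(x ^ alpha) ^ f(x)."""
    return Counter(f(x ^ alpha) ^ f(x) for x in range(1 << N))

dist = diff_distribution(toy, alpha=0x01)
beta, count = dist.most_common(1)[0]
# The distribution is far from uniform: a few betas absorb all the counts,
# and (beta, count) identifies the most probable differential for this alpha.
```

For a random 8-bit function one would expect each beta to occur roughly once; here only a handful of betas occur at all, which is exactly the bias that differential cryptanalysis exploits.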
A differential model of an operation consists of constraints representing the propagation of differences through the operation. It can be seen as a bit-vector expression that is true if and only if the conditions of the differential propagation are satisfied. For linear operations, it is easy to fully and uniquely represent the differential model using bit-vector constraints. For example, rotation is a linear operation, and the input and output differences are related by the rotation itself. For non-linear operations, on the other hand, we split the differential model into two parts. First, the validity constraint is true if and only if the probability of the given difference propagation is non-zero. Second, the weight constraint is true if and only if the minus binary logarithm of the probability is equal to an extra input w. So, why do we use the weight constraint, which needs the logarithm, instead of the probability itself? The answer is to avoid multiplying probabilities and instead add their weights. For addition with two variable inputs, we have an efficient bit-vector differential model. Let's not focus on the constraints themselves, but note that they use only basic bit-vector operators and O(log n) bit-vector operators. By O(log n), we mean that the size of the constraint is of the order of log n for an n-bit operation. The basic bit-vector operators are the operators of size O(1), such as XOR and addition. Please note that we do not consider multiplication a basic operator. Going back to the differential model of two-input addition, we see that the model uses some basic bit-vector operators as well as O(log n) bit-vector operators like the Hamming weight. Thanks to this model, SMT tools are able to model ARX ciphers built from two-input additions. But for constant addition, where one of the inputs is a constant, there was no appropriate differential model that could be used in automated tools.
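For concreteness, the efficient model for two-input addition that the talk refers to is the well-known Lipmaa-Moriai model. A minimal Python rendering (our own sketch, with `n` the word size) of its validity and weight conditions:

```python
def lm_valid(a, b, c, n):
    """Lipmaa-Moriai validity: the differential (a, b) -> c over n-bit
    modular addition has non-zero probability."""
    mask = (1 << n) - 1
    def eq(x, y, z):  # bit i is 1 iff x_i == y_i == z_i
        return (~x ^ y) & (~x ^ z) & mask
    sa, sb, sc = (a << 1) & mask, (b << 1) & mask, (c << 1) & mask
    return eq(sa, sb, sc) & (a ^ b ^ c ^ sb) == 0

def lm_weight(a, b, c, n):
    """Weight -log2(probability) of a valid differential: the number of
    bit positions (excluding the MSB) where a, b, c do not all agree."""
    mask = (1 << n) - 1
    eq = (~a ^ b) & (~a ^ c) & mask
    return bin(~eq & (mask >> 1)).count("1")
```

Note that both conditions use only XOR, AND, shifts, and a Hamming weight, which is why two-input addition fits SMT bit-vector models so well, and that the weight here is always an integer.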
So, ciphers including constant additions could not be automatically analyzed against differential cryptanalysis. In this work, we fill that gap. So, the question is: why don't we use the differential model of two-input addition for constant addition by fixing the second input difference to zero? Before we answer that question, let's recall an important notion. We say two functions F and G are CCZ-equivalent if there exists an affine mapping L such that the graph of F is mapped to the graph of G. Informally speaking, CCZ equivalence preserves the differential behaviour of the function under the mapping. Now, to answer the previous question: two-input addition is CCZ-equivalent to a quadratic function, but for constant addition there is no such equivalence. Moreover, we tested 8-bit addition for all possible constants, and we saw that the suggested model fails in terms of validity as well as of the actual probability. Also, modeling constant addition is much harder than two-input addition, because the non-zero entries of the DDT of a quadratic function, and of two-input addition in particular, are always powers of two, so the weight of two-input addition is always an integer. For constant addition, however, it is a challenge to represent the weight using bit vectors, since in general it is an irrational number. The only previous work on the differential probability of constant addition is an algorithm provided by Machado in 2002. In simple words, it considers the probability of carry propagation at each bit position: it loops over all the bits, considering the input and output differences of the corresponding bits, and by checking eight conditions while keeping one floating-point value in memory, it updates the probability accordingly.
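To make the object of study concrete: for small word sizes the differential probability of constant addition can always be computed by brute force. The reference implementation below is our own sanity-check code, not Machado's per-bit algorithm:

```python
def dp_const_add(alpha, beta, c, n):
    """Brute-force probability that input difference alpha maps to output
    difference beta through the map x -> (x + c) mod 2^n."""
    mask = (1 << n) - 1
    hits = sum(1 for x in range(1 << n)
               if ((((x ^ alpha) + c) ^ (x + c)) & mask) == beta)
    return hits / (1 << n)
```

Unlike for two-input addition, the hit counts here need not be powers of two, which is why the weight of constant addition is generally irrational and hard to encode exactly, as the talk notes for the 8-bit experiments.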
Although the algorithm is efficient, it is not suitable for automated models: to use it in SMT, one would need to unroll the algorithm, so the model would be at least of linear size, while we seek an O(log n)-size differential model. Moreover, its floating-point arithmetic does not go well with the bit-vector theory. Also, we want the model to produce the weight, not the probability itself. We took Machado's algorithm as an inspiration, and we obtained an efficient and appropriate bit-vector differential model for constant addition. The construction of our model is long and technical, so here we just provide some key points. For the validity part, we use a single carry function to check, at every position, whether the input and output differences are valid. For the weight part, we use known efficient O(log n) bit-vector functions, such as the Hamming weight, reversal of the bit order, and counting leading zeros. We also use carry functions, which can be built from basic operators. As we mentioned earlier, the binary logarithm is not always an integer, and to work with bit vectors, we need to find a suitable approximation of the binary logarithm. To show how we obtain our approximate logarithm using bit-vector operators, let's consider the following example. To find the approximate binary logarithm of x, we first find its most significant one; its position determines the integer part of our approximation. Then, we take the remaining bits of x, truncate them to the first 4 bits, and use those as the fraction part of the logarithm. And that's it: we have the approximate binary logarithm. We also present some new efficient bit-vector operators, which compute several logarithms in parallel and sum them to obtain the weight. This is done using a specific mask vector m, which has a pattern of runs of ones, each followed by a single zero.
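The approximation step can be sketched in a few lines of Python. This is our own illustrative rendering (with a 4-bit fraction part, as in the talk), not the paper's actual bit-vector circuit:

```python
import math

def apx_log2(x, f=4):
    """Approximate log2(x): the position i of the leading one of x gives
    the integer part, and the next f bits of x give the fraction part."""
    assert x > 0
    i = x.bit_length() - 1                 # index of the most significant one
    rest = x - (1 << i)                    # bits of x below the leading one
    frac = (rest << f) >> i if i else 0    # top f of those bits, truncated
    return i + frac / (1 << f)

# apx_log2 never overestimates log2(x): the linear-in-mantissa step loses
# at most ~0.086, and the 4-bit truncation at most 2**-4, so the total
# error stays below ~0.15.
```

Exact powers of two have zero error, since both the remaining bits and the fraction part are then zero.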
It means that we compute the binary logarithm of the corresponding subvectors of x in parallel and add them together. This is the bit-vector differential model of constant addition that we obtained. Let's focus on some remarks about the constraints. First, as we can see, the differential model of constant addition has more constraints than that of two-input addition. Moreover, the validity part is quite simple: it only uses basic operators and a single carry function, so its bit-vector complexity is O(1). The weight part is more complex. It uses some bit-vector operators of size O(log n), such as the Hamming weight, leading zeros, and reversal, plus two new O(log n) bit-vector operators, which are called PL and PT, alongside many basic operators. So, the bit-vector complexity of this part is O(log n). We use an approximation of the binary logarithm, so it is inevitable to see some error in our weight part, and we need to carefully study the error bounds. To do so, first we study the approximation error when there is no truncation. As we can see, the error is bounded by small numbers, and we can compute them easily. Next, we bound the error when we do truncate, and we found that dedicating 4 or more bits to the truncation always results in the same error bounds. So, for the sake of efficiency, we chose 4 bits for the fraction part of our approximation, and we get the same bounds on the total weight error as if we had not truncated at all. Let's remark again that representing the weight of constant addition using fixed-size bit vectors will always incur some error, since the weight is an irrational number in this case. Now, let's see how we can use the model in an SMT solver to search for differential characteristics. That was all for the first part of this presentation, and my co-author Adrián Ranea will present the second part of this work.
Apart from the differential model that we described before, we also describe in this paper a method to use SMT solvers to search for characteristics of ciphers including constant additions. An SMT solver can solve decision problems, yes-or-no questions, but we want to solve a search problem: we want to search for the characteristic with the highest probability, or equivalently, with the lowest weight. So, we need to translate this search problem into a sequence of decision problems, and we do it as follows. We start with initial weight 0; then we encode the decision problem of whether there exists a characteristic with integer weight 0, meaning with weight between 0 and 1. We feed this problem to an SMT solver, and if the SMT solver finds this problem satisfiable, we ask the SMT solver for an assignment of the variables that makes the problem satisfiable, and from this assignment, we recover the characteristic. Otherwise, if the SMT solver finds this problem unsatisfiable, we increase the weight by 1 and we repeat the process: we encode the SMT problem of whether there exists a characteristic with integer weight 1, and we continue until we find a problem that is satisfiable. To speed up this search, we first search for a characteristic covering a small number of rounds, and then we increase the number of rounds, reusing the weight of the characteristic that we found before as the initial weight. Moreover, under the standard assumptions of key independence and round independence, the characteristics found by this method are optimal, meaning of minimal weight. The most complex task of this method is how to encode the SMT problems efficiently, and I will explain how we do it with an example. Assume that we have the cipher on the left and we want to write down the SMT problem of whether there exists a characteristic with a target weight. So first, we define a symbolic variable delta p that represents the input difference.
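The decision-to-search loop just described can be sketched generically. Below, `is_satisfiable` stands in for one SMT call per candidate integer weight, and the toy oracle at the end is purely illustrative (a real backend would encode and solve the SMT problem for weight w):

```python
def search_min_weight(is_satisfiable, max_weight):
    """Ask, for w = 0, 1, 2, ..., whether a characteristic with integer
    weight w exists; return the first satisfiable weight and its model."""
    for w in range(max_weight + 1):
        model = is_satisfiable(w)    # one SMT decision problem per weight
        if model is not None:        # satisfiable: recover the characteristic
            return w, model
    return None                      # no characteristic up to max_weight

# Toy oracle: pretend characteristics exist only from weight 3 upwards.
toy_oracle = lambda w: {"weight": w} if w >= 3 else None
```

Because the weights are tried in increasing order, the first satisfiable problem yields the minimal integer weight, which is what makes the characteristics found this way optimal under the stated assumptions.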
And in order to propagate this input difference through the first operation, we define another symbolic variable delta x denoting the output difference of this first constant addition, and we append to the SMT problem the differential model that describes the propagation of the input difference through the constant addition. We also define a symbolic variable w1 that denotes the weight of this propagation. In a similar way, we propagate delta x through the XOR with the round key k. To do so, we just include the differential model that represents how differences propagate through XOR; in this case, it's a linear operation, so it's very easy. We continue propagating the symbolic variables denoting the intermediate differences: we propagate delta y to delta z, and delta z to delta q. Because the last operation is also a non-linear operation, we define another symbolic variable w2 that denotes the weight of this last propagation. And finally, we include a final constraint that ensures that the sum of the weights of the non-linear operations is equal to the target weight that we consider, one target per SMT problem. In the end, we obtain a logic formula with an existential quantifier. So this formula, this problem, is either true or false, and we can feed it to an SMT solver, which will tell us whether the problem is satisfiable or not. If it is satisfiable, we can ask for an assignment of the variables, delta p, delta x, up to w2, and we can recover the characteristic and its weight from this assignment. The main drawback of using SMT solvers is that the language used to specify the problems is not very user friendly, something between C and assembly, so it requires a high effort to implement these problems.
That's why we provide a Python tool, ArxPy, that fully automates the search for ARX characteristics. Our tool implements this differential model as well as previous differential models, it implements the method based on SMT solvers, and it implements many optimizations to make the search efficient. The workflow of the tool is as follows. First, the user implements the ARX cipher in Python, following the interface provided by ArxPy, and selects the search parameters, like the type of characteristic to search for or the SMT solver to use, and then the tool does the rest. First, it translates the Python implementation of the cipher into static single assignment form, which is easier to manipulate; it encodes and creates the SMT problems; it communicates with the SMT solver to find the problems that are satisfiable; it verifies the characteristics obtained by sampling many plaintexts and many keys; and in the end, it provides the results to the user. ArxPy is fully open source, you can find it on GitHub, and we also provide complete documentation, so it can be useful for the community. We applied this model and this tool to search for characteristics of some ciphers including constant additions. Unfortunately, constant addition has mainly been used in the key schedules of ciphers, and the reason is that, up to now, it was very difficult to search for characteristics of ciphers including constant additions, so designers avoided using constant addition in the round function of the cipher, where they could more easily argue about differential characteristics.
That's why we instead search for related-key characteristics, meaning a pair of characteristics: one characteristic goes over the key schedule, and the other one goes over the encryption part and reuses the round-key differences from the first characteristic. The target ciphers that we consider are XTEA, TEA, HIGHT, and LEA. The standard assumptions of key independence and round independence do not hold for the related-key characteristics of these ciphers; that's why we verify each characteristic empirically. After the SMT solver finds a problem satisfiable, we split the characteristic into smaller ones, and we check each smaller one with 2^20 pairs for each of 2^10 keys. If there is a small characteristic that has zero probability for all the keys, we discard the whole characteristic and query the SMT solver for another one. For the results, we used ArxPy with the SMT solver Boolector, which has won many awards for solving SMT problems in the bit-vector theory. These are the results that we obtained; let me guide you through this table. In the third column, we provide the pair of weights for the characteristic over the key schedule: the first weight is the theoretical weight, computed by summing the weights of each non-linear operation, and the second weight is the empirical weight, computed in the verification process when we sample many plaintexts and many keys. In the fourth column, we provide a pair of weights for the encryption part in the same way as in the third column. And in the fifth column, we provide the fraction of the 2^10 keys used in the verification process that lead to characteristics with non-zero probability. So, for example, for TEA, we obtained a characteristic over the whole cipher with weight zero, meaning probability one. This characteristic was previously obtained with manual methods, and we recovered it in a purely automated way.
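The empirical check described here amounts to sampling: estimate the probability of a (sub)characteristic over random inputs and compare its minus binary logarithm with the theoretical weight. A minimal sketch over a single 8-bit constant addition (our own toy setup, far smaller than the 2^20-pair experiments in the talk):

```python
import math
import random

def empirical_weight(f, alpha, beta, n, trials=1 << 14, seed=0):
    """Estimate the weight -log2 Pr_x[f(x ^ alpha) ^ f(x) == beta]
    by sampling random n-bit inputs x."""
    rng = random.Random(seed)
    mask = (1 << n) - 1
    hits = 0
    for _ in range(trials):
        x = rng.randrange(1 << n)
        if (f(x ^ alpha) ^ f(x)) & mask == beta:
            hits += 1
    return math.log2(trials / hits) if hits else float("inf")

add_const = lambda x: (x + 0xB7) & 0xFF   # 8-bit constant addition, toy constant
# An MSB input difference always produces an MSB output difference through
# constant addition, so its empirical weight is exactly 0 (probability 1),
# while an impossible output difference yields infinite weight.
```

An infinite empirical weight for a sub-characteristic over all sampled keys is exactly the signal used above to discard a candidate and query the solver again.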
For XTEA, we obtained characteristics covering more rounds and with better probability, that is, with lower weights. For LEA, we could search up to seven rounds, because we aim for optimal characteristics, characteristics with the lowest weights; previous, non-optimal results obtained characteristics with very low probability for up to eleven rounds. A similar behaviour occurs in the single-key case: for LEA in the single-key case, one can search for optimal characteristics up to six rounds, and for non-optimal characteristics up to eleven or twelve rounds. And finally, for HIGHT, we obtained characteristics covering more rounds with better probability, similar to the case of XTEA. That is all for this presentation. Thank you very much for your time.