Hi, my name is Kenji Yasunaga. This talk is about a new framework for bit security. This is joint work with Shun Watanabe.

In this talk, we want to consider: what is bit security? It is, in some sense, a well-established measure indicating the security level of cryptographic primitives. We say a primitive P has K-bit security if every adversary needs 2^K operations to break P. So the question is: how can we define bit security?

As an example, let's consider the case of a one-way function. Let f be a function. We say an adversary A breaks the one-wayness of f if, given a sample f(x), A outputs a string y satisfying f(y) = f(x). We want to consider the computational cost needed to break one-wayness. We have two simple solutions. The first one is brute-force search: for every string y, the algorithm checks whether f(y) = f(x) until it finds a solution. The second one is random guessing: for a random string y, the algorithm checks whether f(y) = f(x), and it iterates this procedure until it finds a solution. In either case, the algorithm needs on the order of 2^n iterations to find a solution, so the total computational cost is O(T_f · 2^n), where T_f is the cost of evaluating the function f.

We may have another solution. Namely, suppose there is some good algorithm A with computational cost T that breaks one-wayness with probability ε. In this case, consider what happens if we invoke the algorithm N times in total. The probability that some invocation breaks one-wayness is amplified to roughly Nε, so it is sufficient to choose N = 1/ε, and the total computational cost is O(N · T) = O(T/ε).

We have seen three solutions for estimating the cost of breaking one-wayness, and we notice that a cost of order T/ε is consistent across all of them.
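The amplification step above can be checked numerically: if a single run succeeds with probability ε, then N independent runs succeed with probability 1 − (1 − ε)^N, which is already a constant at N = 1/ε. A minimal sketch (the particular ε here is illustrative, not from the talk):

```python
import math

def amplified_success(eps: float, n_runs: int) -> float:
    """Probability that at least one of n_runs independent trials,
    each succeeding with probability eps, succeeds."""
    return 1.0 - (1.0 - eps) ** n_runs

eps = 0.001                    # single-run success probability of the inner algorithm
n = math.ceil(1 / eps)         # N = 1/eps invocations
p = amplified_success(eps, n)
# 1 - (1 - eps)^(1/eps) is at least 1 - 1/e ~ 0.632
assert p > 0.63
```

Choosing N = 1/ε only reaches a constant success probability; a few more repetitions (a constant factor) push it as close to 1 as desired, which is why the total cost stays O(T/ε).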
In brute-force search, the cost is T_f · 2^n and ε = 1. When we use random guessing, the cost is just T_f and ε = 2^(−n). Based on this observation, the bit security should be defined as the minimum value of log base 2 of T/ε. This way of defining bit security can be extended to other search-type primitives, such as signature schemes and message authentication codes, as well as search-type assumptions, such as the factoring problem and the CDH assumption.

So the question we want to ask in this work is how to define the bit security of decision-type primitives and assumptions, such as pseudorandom generators, encryption schemes, and the DDH assumption. In a decision game, the adversary's winning probability is designed to be close to one half, so we usually define the advantage of the adversary as two times the absolute value of the winning probability minus one half. We want to know whether this advantage is the right measure for evaluating bit security.

In this work, we introduce a new framework for defining bit security. It is defined for security games, and we apply the same operational meaning to search and decision games. The interpretation is that a game G has K-bit security if every attacker needs computational cost 2^K to win the game with high probability. For this, we consider that the two types of games, search and decision, should be defined separately, and we define the winning condition for each of the two types. As an answer to the second question on the previous slide, we show that in our framework, the Rényi advantage is the right measure for evaluating the bit security of decision games. We also show several natural reductions of bit security between security games. Finally, we compare our framework with the one proposed by Micciancio and Walter in 2018.

Now we describe our framework. There are two adversaries: an inner adversary and an outer adversary.
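The consistency observed above can be written out. With bit security defined as the minimum over adversaries of log base 2 of T/ε, the three attacks on one-wayness give matching values (a sketch; T_f denotes the cost of one evaluation of f):

```latex
\mathrm{BS} \;=\; \min_{A}\ \log_2 \frac{T_A}{\epsilon_A},
\qquad
\begin{cases}
\text{brute force:} & \log_2 \dfrac{T_f \cdot 2^n}{1} \;=\; n + \log_2 T_f,\\[4pt]
\text{random guessing:} & \log_2 \dfrac{T_f}{2^{-n}} \;=\; n + \log_2 T_f,\\[4pt]
\text{generic algorithm:} & \log_2 \dfrac{T}{\epsilon}.
\end{cases}
```

Both simple attacks give n + log₂ T_f bits, so the definition does not depend on which of the two trivial strategies we pick.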
The inner adversary plays the usual security game G, which is an interaction between the adversary and a challenger. We assume that for every game, the challenger chooses a random secret u of length n. For search games, we usually require that the adversary's winning probability be close to zero. For decision games, the secret is just a bit, and the winning condition is to predict the bit u; in this case, the winning probability is designed to be close to one half. The task of the outer adversary is to invoke the local game G many times to amplify the winning probability.

Next, we define the winning condition for the outer adversary. First, consider the case of a search game. The outer adversary collects information from the inner adversaries. Here, we assume that each inner adversary plays an independent game with fresh randomness u_i. The winning condition for the outer adversary is that there is some inner adversary who wins the local game. So the task of the outer adversary is to invoke inner adversaries sufficiently many times that some inner adversary wins the local game.

Next, consider the case of a decision game. We assume that each inner adversary plays an independent game with a consistent secret bit u. Namely, the secret bit u is sampled initially, and the same secret is used in each local game. After collecting information from the inner adversaries, the outer adversary finally outputs his prediction u'. The winning condition is that the prediction is equal to u. So the task of the outer adversary is to invoke inner adversaries many times until he can collect sufficient information to predict u.

In our framework, the bit security is defined as the minimum value of log base 2 of N · T, where N is the number of invocations by the outer adversary and T is the computational cost of conducting the local game.
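For decision games, one natural outer adversary is a majority vote over N independent calls to an inner adversary that predicts the secret bit with probability (1 + ε)/2; N on the order of 1/ε² calls suffice. A toy simulation (the inner adversary here is a stand-in, not any concrete game from the talk):

```python
import random

def inner(u: int, eps: float) -> int:
    """Toy inner adversary: predicts the secret bit u with probability (1 + eps) / 2."""
    return u if random.random() < (1 + eps) / 2 else 1 - u

def outer(u: int, eps: float, n_calls: int) -> int:
    """Outer adversary: invoke the inner game n_calls times and majority-vote."""
    ones = sum(inner(u, eps) for _ in range(n_calls))  # number of 1-predictions
    return 1 if 2 * ones > n_calls else 0

random.seed(0)
eps = 0.1
n_calls = int(20 / eps ** 2)   # O(1/eps^2) invocations
trials = 200
wins = sum(
    outer(u, eps, n_calls) == u
    for u in (random.randrange(2) for _ in range(trials))
)
assert wins >= 190  # the outer adversary wins with high probability
```

This is exactly why the ε² versus ε distinction in the advantage matters: amplifying a decision game costs about 1/ε² invocations, not 1/ε.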
The minimum is taken over all inner and outer adversaries, with the restriction that the outer adversary wins the game with probability at least 1 − μ, where μ is some small constant called the error probability. This formulation means that the bit security is the log of the total computational cost needed to achieve a winning probability of at least 1 − μ.

We have several implications. First, for search games, the bit security must take a finite value. The reason is that if the output length of the inner adversary is m, a random-guessing adversary can win the game with probability at least 2^(−m), so a total cost of 2^m is sufficient to win the game with high probability. Hence the bit security is at most m. In contrast, a decision game may have infinite bit security. We can understand this by considering the one-time pad, a perfectly secure encryption scheme. Since no adversary has any advantage in this game, we cannot amplify the winning probability to 1 − μ, so the bit security is infinite.

Finally, we observe that in a decision game, the outer adversary collects samples from the inner adversaries to distinguish the two cases. This is the task called binary hypothesis testing in information theory and statistics, so we can use existing knowledge from that literature to characterize the task.

We characterize our bit security in the following theorem. For any security game G, the bit security is equal to the minimum value of log base 2 of T over the advantage of the inner adversary. Namely, we can exclude the outer adversary, and the bit security can be evaluated from the inner adversary alone, where the advantage is defined as follows. For search games, it is equal to the winning probability of the adversary, as one can easily see.
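The finiteness claim for search games can be spelled out: an inner adversary that outputs a uniform m-bit guess wins with probability at least 2^(−m), so by the amplification argument N = O(2^m) invocations reach winning probability 1 − μ, giving (a sketch; T_g denotes the cost of one guess):

```latex
\mathrm{BS}(G)\ \le\ \log_2\!\big(N \cdot T_g\big)
\ =\ \log_2\!\big(O(2^m)\, T_g\big)
\ =\ m + \log_2 T_g + O(1),
```

which is roughly m when a single guess is cheap. No such generic guessing strategy exists for a decision game against a perfectly secure scheme, which is why the bit security there can be infinite.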
For decision games, the advantage is called the Rényi advantage, which is equal to the Rényi divergence of order one half between the two distributions A_0 and A_1, where A_u is the output distribution of the inner adversary under the condition that u was chosen as the secret.

We investigate the behavior of the Rényi advantage by comparing it with the conventional advantage. Suppose that the winning probability of the inner adversary is equal to (1 + ε)/2. Then the conventional advantage is equal to ε, and the Rényi advantage is given as in the previous slide. We show that for any decision game, the Rényi advantage is bounded below by ε squared and bounded above by ε. We also show that the Rényi advantage is equal to the lower bound for balanced adversaries, where we say an adversary is balanced if it outputs every value with at least constant probability.

Using this proposition, we can resolve a peculiar problem concerning linear tests for pseudorandom generators. Let's look at this problem. Consider a pseudorandom generator G with seed length n. It is known that for any pseudorandom generator, there exists a linear test that achieves a conventional advantage of 2^(−n/2). Since non-trivial linear tests output zero and one with equal probability, they are balanced, so by the previous proposition the Rényi advantage will be 2^(−n). Now, if the bit security were equal to log base 2 of T over the conventional advantage, the bit security would be at most n/2. However, this is counterintuitive, because it seems unnatural that a pseudorandom generator with an n-bit seed can have bit security at most n/2. In our framework, since the bit security is characterized by the Rényi advantage, it is possible to achieve n-bit security. We note that Micciancio and Walter also resolve this problem in their framework, in a different way.
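The claims about the Rényi advantage can be checked numerically for a balanced adversary. If the inner adversary outputs the correct bit with probability (1 + ε)/2, the two conditional output distributions are Bernoulli, and the order-1/2 Rényi divergence has the closed form −log₂(1 − ε²) ≈ ε². A sketch (I use base-2 logarithms; the exact base and constants in the talk's definition are my assumption):

```python
import math

def renyi_half(p: list[float], q: list[float]) -> float:
    """Rényi divergence of order 1/2 (in bits): -2 * log2 of the
    Bhattacharyya coefficient of the two distributions."""
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return -2 * math.log2(bc)

for eps in (0.01, 0.05, 0.1):
    a0 = [(1 + eps) / 2, (1 - eps) / 2]  # output distribution when the secret is 0
    a1 = [(1 - eps) / 2, (1 + eps) / 2]  # output distribution when the secret is 1
    adv = renyi_half(a0, a1)
    # closed form for this balanced case: -log2(1 - eps^2) ~ eps^2 / ln 2
    assert abs(adv + math.log2(1 - eps ** 2)) < 1e-12
    assert eps ** 2 <= adv <= eps        # the bounds from the proposition
```

So a balanced adversary with conventional advantage ε sits at the ε² end of the range, which is what drives the linear-test example: conventional advantage 2^(−n/2) collapses to Rényi advantage 2^(−n).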
In our framework, we show several natural bit security reductions. A K-bit secure pseudorandom generator implies a K-bit secure one-way function. A K-bit secure IND-CPA encryption scheme implies a K-bit secure OW-CPA encryption scheme. Also, the K-bit secure DDH assumption implies the K-bit secure CDH assumption. Regarding the Goldreich-Levin theorem, we show that a K-bit secure one-way function implies a K-bit secure hardcore predicate for balanced adversaries; however, proving the general case remains open.

For the distribution approximation problem, we show a natural composition of bit security. Suppose that a game G employing distribution Q has K-bit security, and suppose that the two distributions P and Q are K-bit secure close. Then we show that the game G employing distribution P instead of Q is K-bit secure. This relation holds for both search and decision games.

We briefly review the bit security framework proposed by Micciancio and Walter. They define bit security as the log base 2 of T over some advantage introduced in their paper. It is defined using the mutual information and the Shannon entropy of random variables X and Y, where X is the random secret of the game, as in our framework, and Y is defined as follows: it is equal to the failure symbol ⊥ if the adversary outputs ⊥; it is equal to X if the adversary wins the game; and in the other cases, Y is chosen uniformly at random under the condition that it is not equal to X. This advantage captures some correlation between X and Y; however, it is difficult to understand what it means in this form. After introducing the advantage, they show that it can be approximated as follows. For search games, as in our framework, it is the winning probability of the adversary. For decision games, it is equal to α_A times the square of (2β_A − 1), where α_A is the probability that A outputs a non-⊥ value,
and β_A is the conditional probability that A wins the game G given that A outputs a non-⊥ value. By this characterization, we can see that if the conventional advantage in a decision game is at most 2^(−K/2) for every adversary, then the game has K-bit security. Also, in their framework, the classical Goldreich-Levin theorem is a tight reduction; namely, a K-bit secure one-way function implies a K-bit secure hardcore predicate.

Compared to their results, the differences from our framework are as follows. First, our notion has an operational meaning. Second, in our framework, a conventional advantage of at most 2^(−K/2) does not imply K-bit security; the bit security lies between K/2 and K. Also, the tightness of the Goldreich-Levin theorem remains open in our framework; to prove it, we would need to improve the reduction algorithm.

We conclude our talk. We introduced a bit security framework with an operational meaning. The interpretation is that a game G has K-bit security if every attacker needs a computational cost of 2^K to win the game with high probability. We showed that in our framework, the Rényi advantage is the right measure for evaluating the bit security of decision games. A possible future work is to give a tight reduction for the Goldreich-Levin theorem. Since we now have several frameworks for evaluating bit security, people may wonder which notion should be employed, so it may be beneficial to discuss the merits of each bit security notion. For example, giving some axioms for bit security may be possible. That's all. Thank you.
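As a supplement to the comparison above, the approximated Micciancio-Walter advantage for decision games is simple to compute from the two quantities defined in the talk. A sketch (the function name is mine):

```python
def mw_advantage(alpha: float, beta: float) -> float:
    """Approximate Micciancio-Walter advantage for decision games:
    alpha * (2*beta - 1)**2, where alpha = Pr[A outputs a non-bottom value]
    and beta = Pr[A wins | A outputs a non-bottom value]."""
    return alpha * (2 * beta - 1) ** 2

# A balanced adversary that always outputs a bit and is correct with
# probability (1 + eps)/2 gets advantage eps^2, matching the Rényi
# advantage up to constants in that regime:
eps = 0.1
assert abs(mw_advantage(1.0, (1 + eps) / 2) - eps ** 2) < 1e-12
```

The (2β − 1)² factor is why their framework also assigns such a balanced distinguisher an advantage of order ε² rather than ε, resolving the linear-test problem in their own way.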