Hello everyone, I'm Wei Yu. My topic today is double-base chains for scalar multiplication on elliptic curves. The outline of my presentation is as follows. First, a brief introduction to this work. Second, the number of double-base chains. Third, the Hamming weight of double-base chains. Fourth, dynamic programming to generate optimal double-base chains. Finally, we discuss scalar multiplication using double-base chains.

Double-base chains are used to speed up scalar multiplication on elliptic curves. A double-base chain represents an integer as a sum of signed terms of the form 2^b 3^t, built from the two bases 2 and 3. The maximum term is called the leading term, and the number of terms is called its Hamming weight. Following Dimitrov, Imbert, and Mishra's definition, canonical double-base chains are the ones with minimal Hamming weight. We define optimal double-base chains as the ones with the minimal cost of scalar multiplication using a double-base chain.

We present three results on double-base chains in this work. First, we describe the structure of the set containing all double-base chains and propose an iterative algorithm to compute the number of double-base chains of a positive integer. This is the first polynomial-time algorithm to compute the number of double-base chains. Second, we present an asymptotic lower bound on the average Hamming weight. This result answers an open question about the Hamming weight of double-base chains. Third, we propose a new algorithm to generate an optimal double-base chain. This algorithm accelerates the recoding procedure by more than six times compared to the state-of-the-art work.

Now we discuss the number of double-base chains. Why do we count them? Counting the exact number of double-base chains is useful to show that double-base chains are redundant and to generate an optimal double-base chain.
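To make the definition concrete, here is a small Python sketch (my own illustration, not code from the talk). It evaluates a chain given as signed (sign, b, t) triples, checks that the exponent pairs are componentwise non-increasing, and reports the leading term and the Hamming weight.

```python
def dbc_value(chain):
    """Evaluate a double-base chain given as (sign, b, t) triples."""
    return sum(s * 2**b * 3**t for s, b, t in chain)

def is_chain(chain):
    """Both exponent sequences must be non-increasing."""
    return all(b1 >= b2 and t1 >= t2
               for (_, b1, t1), (_, b2, t2) in zip(chain, chain[1:]))

def leading_term(chain):
    """The maximum term, 2^b1 * 3^t1, taken from the first entry."""
    _, b, t = chain[0]
    return 2**b * 3**t

def hamming_weight(chain):
    """The number of terms in the chain."""
    return len(chain)

# 17 = 2^2*3^1 + 2^2*3^0 + 2^0*3^0 = 12 + 4 + 1
chain_17 = [(1, 2, 1), (1, 2, 0), (1, 0, 0)]
```

For chain_17, dbc_value returns 17, the leading term is 12, and the Hamming weight is 3.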
Each positive integer has at least one double-base chain, such as its binary representation. Imbert and Philippe in 2010 proposed an elegant algorithm to compute the number of unsigned double-base chains of a given integer and presented the first 400 values. Determining the precise number of double-base chains of a positive integer is usually hard, but we are convinced that this number is infinite. Doche in 2014 gave an algorithm to calculate the number of double-base chains with a leading term dividing 2^b 3^t for a positive integer. His algorithm is very efficient for integers of fewer than 70 bits with a leading term dividing 2^b 3^t for small b and t; for most b and t, it requires exponential time.

Before we give an efficient algorithm to compute the number of double-base chains, we first need to show the structure of the set containing all of them. Let Phi_{b,t}(n) be the set containing all double-base chains of a non-negative integer n with a leading term strictly dividing 2^b 3^t, and let Phi-bar_{b,t}(n) be the set containing all such chains with a leading term dividing 2^b 3^t. The structure of Phi_{b,t} and Phi-bar_{b,t} is described as follows: Phi_{b,t} and Phi-bar_{b,t} rely only on Phi_{b-1,t}, Phi-bar_{b-1,t}, Phi_{b,t-1}, and Phi-bar_{b,t-1}. This is the first structural description in the literature of the set containing all double-base chains with a leading term strictly dividing 2^b 3^t. Based on this structure, the cardinalities of Phi_{b,t} and Phi-bar_{b,t} rely only on the cardinalities of Phi_{b-1,t}, Phi-bar_{b-1,t}, and so on. An iterative algorithm to compute the number of double-base chains is shown in this table. Its time complexity is in O((log n)^3) bit operations.
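The iterative algorithm itself is not reproduced here, but the quantity being counted can be illustrated with a naive enumerator (my own sketch, exponential time, for intuition on tiny inputs only). It assumes one common definition in which each term after the leading one strictly divides its predecessor, that is, has componentwise smaller-or-equal exponents without being equal; the talk's polynomial-time algorithm replaces this brute force with the Phi / Phi-bar recurrence.

```python
def count_dbcs(n, bmax, tmax, strict=False):
    """Naively count double-base chains summing to n whose leading term
    divides 2^bmax * 3^tmax (strictly divides it when strict=True).
    Exponential time; illustrates what the iterative algorithm counts."""
    total = 1 if n == 0 else 0             # the empty chain sums to 0
    for b in range(bmax + 1):
        for t in range(tmax + 1):
            if strict and (b, t) == (bmax, tmax):
                continue                   # the term must divide strictly
            term = 2**b * 3**t
            for s in (1, -1):
                # choose s * 2^b * 3^t as the next (largest remaining)
                # term; the rest of the chain must strictly divide it
                total += count_dbcs(n - s * term, b, t, strict=True)
    return total
```

For example, the chains of 3 with leading term dividing 2 * 3 are 3, 2 + 1, 6 - 3, and 6 - 2 - 1, so count_dbcs(3, 1, 1) returns 4.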
This is the first polynomial-time algorithm to compute the number of double-base chains. Using the iterative algorithm, 100 has 2,590 double-base chains with a leading term dividing 2^30 3^4, and 1,000 has more than 28,000 double-base chains with a leading term dividing 2^30 3^6. These results show that double-base chains are redundant.

We turn to the Hamming weight of double-base chains. It is easy to check that every positive integer n has a double-base chain with Hamming weight in O(log n), such as its binary representation. Doche and Habsieger in 2008 posed an open question: decide whether the average Hamming weight of double-base chains produced by the greedy approach is sublinear or not. This question has remained unsolved for more than 10 years. There have been some efforts to investigate the lower bound of double-base chains, but these results are not enough to settle the lower bound of the Hamming weight. Why is this lower bound not easy? The reason may be that the number of double-base chains of a positive integer is infinite, and the leading term of its double-base chains may be arbitrarily large.

Imbert and Philippe in 2014 showed that the leading term of a double-base chain is lower-bounded by n/2. We show that the leading term of an optimal double-base chain ranges from n/2 to 2n, and the leading term of a canonical double-base chain ranges from 16n/21 to 9n/7. An asymptotic lower bound on the average Hamming weight of canonical double-base chains is (log n)/8.25. Since canonical double-base chains have the lowest Hamming weight, (log n)/8.25 is also a lower bound for double-base chains in general. This answers Doche and Habsieger's open question on the average Hamming weight of double-base chains. This figure shows the Hamming weight of canonical double-base chains.
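For a flavor of the greedy approach referred to above, here is a sketch under my own assumptions (it is not the talk's code, and it produces a representation whose exponents need not be monotone, so it is a double-base number system representation rather than a chain): repeatedly subtract the signed power 2^a * 3^b closest to the current remainder.

```python
def closest_23_term(m):
    """Return (value, a, b) with value = 2^a * 3^b closest to m >= 1."""
    best = None
    p3, b = 1, 0
    while p3 <= 2 * m:
        # for this power of three, try the two nearest powers of two
        a0 = max((m // p3).bit_length() - 1, 0)
        for a in (a0, a0 + 1):
            v = 2**a * p3
            if best is None or abs(m - v) < abs(m - best[0]):
                best = (v, a, b)
        p3, b = p3 * 3, b + 1
    return best

def greedy_dbns(n):
    """Greedy signed double-base representation of n (not necessarily a chain)."""
    terms = []
    while n != 0:
        s = 1 if n > 0 else -1
        v, a, b = closest_23_term(abs(n))
        terms.append((s, a, b))
        n -= s * v                 # the remainder strictly shrinks
    return terms
```

The open question above asks whether the average number of terms this kind of greedy recoding produces is sublinear in the bit length.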
The Hamming weight divided by log n decreases as the integers become larger; it is about 0.18 log n for 3,000-bit integers. The value measured for 3,000-bit integers still has some distance from the theoretical lower bound of (log n)/8.25.

Now we focus on producing optimal double-base chains. Doche in 2014 presented the first algorithm to produce an optimal double-base chain, but his algorithm requires exponential time. In 2015, Capuñay and Thériault generalized the tree approach to produce an optimal double-base chain; this was the first polynomial-time algorithm to compute one. In 2017, Bernstein, Chuengsatiansup, and Lange presented a DAG-based algorithm to produce an optimal double-base chain, and this algorithm was the state of the art.

We will use dynamic programming to generate optimal double-base chains. Dynamic programming solves problems by combining the solutions of subproblems. Optimal substructure and overlapping subproblems are the two key characteristics a problem must have for dynamic programming to be a viable solution technique. The main blueprint of our dynamic-programming algorithm to produce an optimal double-base chain contains four steps. First, characterize the structure of an optimal solution with two key ingredients: an optimal substructure and overlapping subproblems. Second, recursively define the value of an optimal solution. Third, compute a double-base chain with the smallest Hamming weight whose leading term divides 2^b 3^t, in a bottom-up fashion. Fourth, construct an optimal double-base chain from the computed information. The time complexity of our dynamic-programming algorithm is in O((log n)^3) bit operations.
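The talk's actual recurrences are on the slides rather than in this transcript, so as a stand-in, the following sketch (my own construction, closer to a memoized shortest-path search than to the authors' algorithm) computes the minimal Hamming weight of a chain for n. Reading a chain from its smallest term upward, n is built from 0 by doublings and triplings, which are free, and by inserting a +1 or -1 term, which costs one unit of weight; the search below runs that process in reverse.

```python
import heapq

def min_chain_weight(n):
    """Minimal number of signed terms over double-base chains for n >= 1.
    Dividing by 2 or 3 is free; each +1/-1 step consumes one chain term."""
    dist = {n: 0}
    heap = [(0, n)]
    while heap:
        cost, m = heapq.heappop(heap)
        if m == 0:
            return cost
        if cost > dist.get(m, cost):
            continue                      # stale queue entry
        steps = [(m - 1, 1), (m + 1, 1)]  # consume a +1 or -1 term
        if m % 2 == 0:
            steps.append((m // 2, 0))     # undo a doubling
        if m % 3 == 0:
            steps.append((m // 3, 0))     # undo a tripling
        for v, extra in steps:
            # cap states at 2n, matching the bound on an optimal leading term
            if 0 <= v <= 2 * n and cost + extra < dist.get(v, cost + extra + 1):
                dist[v] = cost + extra
                heapq.heappush(heap, (cost + extra, v))
```

For example, 17 = 2^4 + 1 gives weight 2, and 100 = 2^5 * 3 + 2^2 also gives weight 2.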
Using Bernstein, Chuengsatiansup, and Lange's reduced representatives for large numbers, the complexity drops to O((log n)^2.5) bit operations. However, their reduced representatives do not work for some boundary nodes; our equivalent representatives solve this problem. The time complexity of our dynamic-programming algorithm using equivalent representatives is in O((log n)^(7/3)) bit operations, and using equivalent representatives repeatedly, it is in O((log n)^2 log log n) bit operations. Our dynamic-programming algorithm is over 20 times faster than Capuñay and Thériault's algorithm, and over 6 times faster than Bernstein, Chuengsatiansup, and Lange's algorithm. As the integer becomes larger, our dynamic-programming algorithm gains even more.

When we perform scalar multiplication, we mainly consider three forms of elliptic curves: Edwards curves, Weierstrass curves, and DIK curves. The DIK curve is the tripling-oriented Doche-Icart-Kohel curve; it is constructed from 3-isogenies and has an efficient point tripling. The costs of mixed point addition, point doubling, and point tripling are given by Bernstein and Lange's Explicit-Formulas Database. The ratio of the cost of point tripling to the cost of point doubling differs across these three forms of elliptic curves. Experimental results show that scalar multiplication using an optimal double-base chain is 10% faster than scalar multiplication using the non-adjacent form on Edwards curves, 13% faster on Weierstrass curves, and 20% faster on DIK curves. The improvement over the non-adjacent form grows as the ratio of the cost of point tripling to the cost of point doubling becomes larger.

Now we briefly conclude this work. We were concerned with the theoretical aspects of double-base chains arising from the study of speeding up scalar multiplication, and with producing an optimal double-base chain efficiently. Any questions? Please email me. Thanks for your time.
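As a small addendum to the cost comparison above: the left-to-right cost of a chain is b1 doublings and t1 triplings from the leading term 2^b1 3^t1, plus one addition per remaining term. The sketch below uses placeholder per-operation costs of my own choosing; the real numbers come from the Explicit-Formulas Database and depend on the curve shape and coordinates.

```python
def chain_cost(chain, dbl, tpl, add):
    """Cost of scalar multiplication driven by a chain: b1 doublings and
    t1 triplings from the leading term, plus one addition per extra term."""
    _, b1, t1 = chain[0]
    return b1 * dbl + t1 * tpl + (len(chain) - 1) * add

# Placeholder costs in field multiplications (illustrative only).
DBL, TPL, ADD = 7, 12, 11

chain_100 = [(1, 5, 1), (1, 2, 0)]   # 100 = 2^5 * 3 + 2^2
naf_cost_100 = 7 * DBL + 2 * ADD     # NAF: 100 = 2^7 - 2^5 + 2^2
```

With these placeholder costs the chain needs 5*7 + 12 + 11 = 58 field multiplications versus 71 for the non-adjacent form, and the chain's advantage grows with the tripling-to-doubling cost ratio, as in the experiments above.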
This is all I wanted to share.