All right, so the second talk will be about faster algorithms for LPN, and Bin is going to give the talk.

Yeah, thanks for the introduction. The title is "Faster Algorithms for Solving LPN". This is joint work with Lin Jiao and Mingsheng Wang; we all come from the Chinese Academy of Sciences. Our talk consists of the following parts. First we give some introduction on the background, then we present the preliminaries on the LPN problem. After that we give a short review of the previous algorithm using covering codes from Asiacrypt 2014. Then we present our improvements and give a theoretical analysis. We also introduce the variants of the BKW reduction steps used in our solving algorithm, and after these steps we give a complete description of our algorithm with the embedding of perfect codes. Finally we give some conclusions.

We first look at the LPN problem. The learning parity with noise (LPN) problem lies at the core of many cryptographic constructions for lightweight and post-quantum cryptography. In the LPN problem there is a secret x of k-bit length, and the adversary is asked to find x given many noisy inner products z = <x, g> + e, where g is a random vector of the same length and the noise e equals one with some probability eta deviating from one half. Cryptographic schemes based on LPN are appealing for both theoretical and practical reasons. The earliest proposals date back to the HB, HB+ and HB# authentication protocols. There is also a message encryption scheme based on LPN, the LPN-C scheme, and some message authentication codes (MACs) using LPN. Another notable scheme is Lapin, which is based on an LPN variant called Ring-LPN. There is also a recent proposal called HELEN, with concrete parameters for different security levels. So it is very important to study the best possible algorithms that can efficiently solve the LPN problem.

Here we give a short review of the history of LPN solving algorithms. The first one is the seminal work of Blum, Kalai and Wasserman, known as the BKW algorithm, published in the Journal of the ACM in 2003. At SCN 2006, Levieil and Fouque suggested exploiting the fast Walsh-Hadamard transform to speed up the recovery phase. In 2011, on ePrint, Paul Kirchner suggested transforming the problem into a systematic form, and then Daniel Bernstein and Tanja Lange suggested utilizing the ring structure of Ring-LPN at RFIDsec 2013. We note that none of these algorithms managed to break the 80-bit security of Lapin, nor the parameters suggested for the 80-bit security of LPN-C. At Asiacrypt 2014 a new algorithm for solving LPN was presented. It was claimed that the 80-bit security of the (512, 1/8) LPN instance can be broken with a complexity of 2^79.7, and so can the previously unbroken parameters of the HB variants, Lapin and LPN-C. Recently there is also a paper by Bogos and Vaudenay on ePrint proposing another algorithm for solving LPN.

Here is our contribution to this problem. We first draw an analogy with fast correlation attacks on stream ciphers: we regard the BKW reduction as a precomputation of parity checks, and the solving procedure as the online decoding phase. We then develop fast algorithms for solving LPN based on an optimal precise embedding of cascaded perfect codes, in a similar framework as that at Asiacrypt 2014 but with many optimizations. We are the first to show that we can definitely break the 80-bit security of the instances suggested in HB+, HB#, LPN-C and Lapin.
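To make the problem setup concrete (this is only an illustration of the LPN oracle just described, not of any attack; the function and parameter names are our own), a minimal sketch in Python:

```python
import random

def lpn_oracle(x, eta, rng=random.Random(0)):
    """Yield LPN samples (g, <x,g> + e) for a secret bit-vector x.

    x   : list of k secret bits
    eta : noise rate, Pr[e = 1] = eta, with 0 < eta < 1/2
    """
    k = len(x)
    while True:
        g = [rng.randrange(2) for _ in range(k)]            # random k-bit vector
        e = 1 if rng.random() < eta else 0                  # Bernoulli(eta) noise
        z = (sum(xi & gi for xi, gi in zip(x, g)) + e) % 2  # noisy inner product
        yield g, z

# Toy usage: an 8-bit secret with eta = 1/8, taking a few samples.
secret = [1, 0, 1, 1, 0, 0, 1, 0]
oracle = lpn_oracle(secret, 0.125)
samples = [next(oracle) for _ in range(5)]
```

The solver's task is to recover `secret` from many such `(g, z)` pairs; with eta close to 1/2 the samples carry very little information per query, which is what makes the problem hard.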
This slide shows our results with the updated parameter choices.

Now we look at some preliminaries for this problem. We first look at the Bernoulli distribution. Let Ber_eta denote the Bernoulli distribution: if a random variable follows this distribution, it equals one with probability eta and equals zero with probability 1 - eta. The definition of the inner product is the standard one, the product between a row vector and a column vector. Based on these two definitions we have the definition of the LPN problem. An LPN oracle for an unknown random vector x of k-bit length, with a noise parameter eta in the interval (0, 1/2), returns independent samples of the form (g, <x, g> + e), that is, noisy observations of the inner product. The (k, eta)-LPN problem consists of recovering x from the samples output by the oracle. It is worth noting that the problem can be rewritten in matrix form as z = xG + e: here z is the vector of noisy observations, x is the secret that we want to recover, G is a k-by-n matrix whose columns are the transposes of the random vectors g, and e is the noise vector with one Ber_eta variable for each coordinate. We can see from this form that the problem is closely related to decoding random linear codes.

We then look at the BKW algorithm. The BKW algorithm is proposed in the spirit of the generalized birthday algorithm. It works on the columns of G as follows: it looks for pairs of columns that collide on the last b bits, and then it updates the data samples accordingly. It is easy to see that if we update the samples in this way, the formed noise variable has the distribution given by the piling-up lemma. Formally, the BKW algorithm works in two phases, the reduction phase and the solving phase. This is the framework of the BKW algorithm. The input is the matrix G, the received noisy inner products z, and the parameters chosen by the attacker. We put the received vector as the first row of the matrix to form the matrix G_0, and then we do t steps of reduction as follows. We first sort the matrix by the value of the last b bits and partition it accordingly; then we form pairs of columns in each partition so that they cancel on the last b bits. We repeat this step for t rounds, and finally we effectively reduce the dimension of the secret x to k - b*t. Our aim is then to find a vector x that minimizes the weight of z + xG, where z is a vector and G is the reduced matrix.

There are two approaches, called LF1 and LF2, to fulfill the merging procedure; they share the same solving approach but differ in the merging strategy. LF1 works on pairs with a representative in each partition, as just discussed, while LF2 works on any pairs. In each iteration of the reduction phase the noise level is squared: with the superscript i denoting the iteration step, and assuming the noise variables remain independent at each step, the bias satisfies this relation after each BKW reduction, according to the piling-up lemma. This is a description of LF1 and LF2. In both we partition the matrix according to the value of the last b bits of each column. For LF1 we pick a representative in each partition, form the pair of every non-representative with the representative, and XOR them, obtaining columns whose last b entries are zero. For LF2 this part is the same, except that for each pair in the partition we form the XOR, which again turns the last b entries to zero.

Now we look at the previous algorithm using covering codes. This algorithm contains five main steps. Step one transforms the problem into a systematic form by Gaussian elimination: we choose a random column permutation, perform Gaussian elimination on the transformed matrix, and obtain the resulting systematic matrix.
Step two performs several BKW steps; the authors suggest using LF1 in its original form. We go directly to step four, which uses a covering code to rearrange the symbols: we choose a linear code with information length l and code length k_2, and regroup the columns of the matrix by their last k_2 bits according to their nearest codewords. This is the key point of their improvement. In step three they guess some partial secret; they just guess some low Hamming weight secrets. Finally, in step five, they use the fast Walsh-Hadamard transform to find the best candidate and perform some hypothesis testing, also by means of the fast Walsh-Hadamard transform, to determine whether to repeat the algorithm. This is the framework of the previous algorithm, and now we look at each step respectively. For the Gaussian elimination, the purpose is to change the distribution of the secret vector bits without changing the associated noise level, as can be seen from the matrix representation.
It is easy to see that from here we can derive this formula, where the vector z-hat begins with k zeros and G-hat is the systematic matrix formed from the special form of the first k columns, with x-hat the new secret. It is clear that the new secret follows the same Bernoulli distribution as the noise, so we need not exhaustively search all possibilities for the secret variables; we just try the most probable ones first. The cost of this step is dominated by the computation of the matrix product D times G, which can be reduced to this complexity through some table lookups, where a is some fixed value chosen by the attacker.

Next, the concrete procedure of the BKW reduction. From this starting point we iteratively process t steps of BKW reduction on G_0, resulting in a sequence of matrices G_i, where each G_i has n - k - i*2^b columns. We adopt the LF1 type here, which discards about 2^b samples at each step. We also need to update z-hat in the same fashion. If we use m to denote the final number of columns, this procedure yields a new form of LPN instance with a systematic matrix, where z' begins with k' zeros; that is what is shown here. We then look at the noise vector e'. The first k' entries have the same probability distribution as before, and for the rest the distribution has the bound shown here, according to the piling-up lemma. The complexity of this reduction procedure is dominated by this term, and with some techniques it is reduced to this level with the correct number of queries. This alone exceeds the security bound of 80 bits, so in order to obtain a complexity smaller than 2^80, an LF2 reduction step is actually applied in their calculation. Now we look at the partial secret guessing. We divide x' into two parts, x1' and x2', and accordingly we divide the matrix G' into two parts.
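The noise growth used in the analysis above comes directly from the piling-up lemma: after t reduction steps each surviving sample is a XOR of 2^t original samples. A small sketch of this bookkeeping (the helper names are our own):

```python
def bias(eta):
    """Bias of a Bernoulli(eta) noise bit: eps = 1 - 2*eta."""
    return 1.0 - 2.0 * eta

def bias_after_bkw(eta, t):
    """Bias of a XOR of 2^t independent Ber(eta) bits (piling-up lemma).

    After t BKW reduction steps each sample aggregates 2^t original
    samples, so the bias shrinks from eps to eps^(2^t).
    """
    return bias(eta) ** (2 ** t)

def noise_rate(eps):
    """Convert a bias back to a noise rate: eta = (1 - eps) / 2."""
    return (1.0 - eps) / 2.0

# Example: eta = 1/8 as in the (512, 1/8) instance, t = 5 reduction steps.
eps5 = bias_after_bkw(0.125, 5)   # (3/4)^32, already a tiny bias
```

Since the number of samples needed by the distinguisher grows roughly like 1/eps^2, this doubly exponential decay in t is the central cost that all the parameter trade-offs in these algorithms are balancing.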
Here x1' is of k1-bit length and x2' is of k2-bit length, and the sum of the two is k'. This step guesses all the vectors x1' under the constraint of low Hamming weight, and the problem then takes the new form shown here, because we have guessed part of the secret. The complexity of this step is determined by the update with this formula and reduces to this amount.

The key step is using a covering code. They select a [k2, l] linear code C with covering radius d_C to rewrite each column of G2' as a codeword plus some noise variable. Let the systematic generator matrix and the parity-check matrix of C be F and H respectively; then syndrome decoding is applied to select the nearest codeword. The complexity cost lies in calculating the syndromes, and it is recursively computed as here; we do not go into the details of how it is reduced to this complexity. They finally obtain this form of LPN instance. However, the bias that they computed, shown here, is a conditional probability, not a marginal probability, so from the theory of probability this is incorrect; there are some problems in the analysis, which we make explicit in our paper.

The subspace hypothesis testing is just a fast Walsh-Hadamard transform, and this transform has to be repeated this number of times, once for each guess of x1'. We show in our paper that the data complexity is highly underestimated: they use this formula, but it is not correct. They then updated it to this quantity in their presentation at the conference, and adopted an LF2 reduction with this number of initial queries. This is the complexity formula for their attack.

Now we look at our improvements and analysis. We note that the explicit code constructions for solving those LPN instances, needed to support the claims in the text, are not provided, so it is doubtful whether there is a good estimation of the bias under the assumption of the Hamming weight restriction, which is crucial for the exact estimate of the complexity.

Recall that a covering code is a set of codewords in a space with the property that every element of the space is within a fixed distance of some codeword; in particular, a perfect code is a covering code of the minimum size. This is the definition of a perfect code: a perfect code uniformly covers, and in fact partitions, the whole space. This slide lists all the known types of binary perfect codes: the two trivial ones, the Hamming codes, the [23, 12] Golay code, and the repetition codes.

Then we make a construction with these finitely many types of binary perfect codes. Given fixed parameters k2 and l, the problem is to explicitly find the configuration of perfect codes that maximizes the bias. To solve this problem, we divide the matrix into several chunks by position and cover each chunk by a certain perfect code. We do not go into the details here, but the point is that it is reasonable to treat the error bits coming from different perfect codes as independent variables, while the error components within the same perfect code are correlated with each other. We then present the correct method to compute the probability, and thereby the bias; this is our formula, given in our paper. It yields an accurate estimation without any assumption on the Hamming weights. We then turn to the task of searching for the optimal cascaded perfect codes that maximize the final bias, given fixed k2 and l according to the instance. We denote the number of codes of each type by a variable and transform the search into an integer linear programming problem. These are the details; we do not go into them, but this is the integer linear program.
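The list of binary perfect codes used above is easy to verify numerically: a binary [n, k] code with covering radius r is perfect exactly when the Hamming (sphere-packing) bound holds with equality. A small check (our own illustration; the cascade search itself is the integer linear program described above):

```python
from math import comb

def is_perfect(n, k, r):
    """Sphere-packing condition with equality: a binary [n, k] code with
    covering radius r is perfect iff sum_{i=0}^{r} C(n, i) == 2^(n - k),
    i.e. the radius-r Hamming balls around codewords tile the whole space.
    """
    return sum(comb(n, i) for i in range(r + 1)) == 2 ** (n - k)

# The non-trivial binary perfect codes mentioned on the slide:
assert is_perfect(7, 4, 1)        # [7,4] Hamming code
assert is_perfect(23, 12, 3)      # [23,12] Golay code
assert is_perfect(3, 1, 1)        # [3,1] repetition code
assert is_perfect(5, 1, 2)        # [5,1] repetition code
assert not is_perfect(6, 3, 1)    # an arbitrary non-perfect parameter set
```

Because the balls tile the space exactly, every column chunk decodes to a unique nearest codeword, which is what makes the error distribution of each chunk exactly computable rather than bounded under a weight assumption.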
These are the constraints on the codeword lengths, these are the constraints on the information bits, and this is the objective function, the bias to maximize. We then solve it using some mathematical software, and these are our results, presented in our paper. We found that the optimal cascaded perfect codes usually select the [23, 12] Golay code and repetition codes with r at most 4.

We also give a theoretical analysis of the exact data complexity. We found that this bound is a necessary bound to satisfy; moreover, from experiments, this bound only provides a success probability of about 70%, and to assure a success probability close to one it is better to double it. So the previous algorithm at Asiacrypt 2014 is not valid to break the 80-bit security, according to this formula.

Next, the variants of the BKW algorithm. We keep LF1 and LF2, and in addition we present LF(4). That is, we aim to find sufficiently many 4-tuples of columns from the previous matrix that add to zero in the last b bits, and for each such tuple we calculate the sum of the tuple and adjoin it into G_i after discarding the last b bits. We still impose a restriction here to cut down the complexity, and we use the generalized birthday technique to find the desirable number of tuples.

This is the complete description of our algorithm. The differences come from here, with LF(4), and from here, where we find the optimal cascaded perfect codes to be embedded into the vectors of the matrix; the rest is the same as before. The new algorithm has a similar structural framework, but with the precise embedding of cascaded perfect codes at step four, and we optimize each step with various time-memory trade-off techniques. We also include an additional step to select the favorable queries, with a complexity of this amount. We have derived different theoretical formulas for the different kinds of BKW reduction steps, so the whole set of attack parameters according to each of LF1, LF2 and LF(4) can be provided.

Recently there is a paper on ePrint questioning the choice of our attack parameters, and after discussion with the authors we will write a new report to explain how we choose the attack parameters for this problem. We note that if we do not hold strictly to this bound, but choose the half bound instead, then the complexity can be reduced further, about 2^5 times lower than before. Our concrete attacks on these four cryptosystems break the 80-bit security bound with the suggested parameter choices.

Then we come to our conclusions. We propose the first algorithm for solving LPN based on an optimal precise embedding of cascaded perfect codes; the framework is similar to that at Asiacrypt 2014, but with many time-memory trade-off optimizations. We introduce some new variants of the BKW algorithm, using tuples of columns for the collisions, together with a technique to reduce the required number of queries. Our approach is generic and is practical for providing concrete security evaluations of LPN instances. Future work is to study how to cut down the number of candidates and to employ other types of good codes, for example nearly perfect codes. Thank you.

All right, we have time for some questions. All right, then I have a question. So if you pick a set of parameters now, how much money would you bet that in 20 years' time it still has the same level of security as it has now? Sorry: how much more progress in solving LPN do you expect over the next, say, 20 years? Because it seems like the advances are getting smaller and smaller, right? We are in the ballpark of 2^80. Do you think there will be big leaps forward, or do you think we're close to actually knowing how hard it is to solve LPN?

Yes, I think solving LPN is still not an easy task.
It is still very hard to solve. But we know that from the information-theoretic bound, the data complexity is only 2^11 or 2^12, while now we have a data complexity of 2^16 or 2^17. So there is a large gap between them, and this suggests some room for improvement in future work.

Thank you. Any other questions? Well, then let's thank both speakers of the session again.