This is joint work with Pierre-Alain Fouque. Okay, so lattice-based cryptography is an important topic: it is one of the few kinds of problems where we have security guarantees as well as many efficient constructions, and it may give post-quantum cryptography too. So in this presentation, I will describe the Blum-Kalai-Wasserman (BKW) algorithm for binary LWE and the improvements we give, the cryptographic applications to subset sum and NTRU, and also other applications to lattice problems.

So, the binary LWE problem: we are given an unlimited number of samples, where the vectorial part, which I call a, is uniform, the error e is sampled according to a Gaussian, and we are given the scalar product with a secret binary vector; the goal is to find this secret binary vector s. Because we are using binary vectors, the modulus q is akin to a precision, so we can divide everything by q and work over the torus, which is simpler for us.

For this problem, we know that it would be surprising to have an algorithm with a running time below 2^(n/log n), but all previous algorithms have exponential running time. So, our results: first, we find a subexponential algorithm which runs in time 2^(n/O(log log n)) for most parameters, and we can even get the same complexity for larger secrets. Also, NTRU can be reduced heuristically to this problem and then solved with the same running time, and it also works for subset sum: each time you have a density which converges to zero, the algorithm is subexponential.

So, the first BKW algorithm; I will start by recalling it. First, LWE samples have a somewhat linear property: the difference of two samples is another sample, but we add more noise each time we do that. So what we do is partition the coordinates into k blocks, plus one coordinate that we want to recover with high probability, and at each step we compute differences of pairs of samples such that one new block of coordinates becomes zero in all generated samples. If we repeat this k times, only one coordinate of the secret remains, and if the noise is not too large, we can recover it.

Here is a more precise analysis. We can define the bias of a random variable X over R as E[exp(2iπX)], and then the piling-up lemma tells us that at each step the bias of the error is squared. Because our running time will be close to exponential, we can distinguish a bias which is only almost exponentially small. So if we start with a Gaussian of standard deviation αq, at the beginning we have a bias of about exp(−α²), and we want to end up with a bias of almost exp(−n). So if we define β as √n/α, we can take the number of blocks k to be almost 2 log β, so that each block has size n/k, and because we want a list of samples where one subtraction removes n/k coordinates, the complexity is q^(n/k). But it was observed before that there is no need to fully reduce each coordinate if the secret is very small, in particular binary: what we need is only a small bias for each coordinate.
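Before moving to the improvements, here is a minimal Python sketch of one reduction step of classic BKW as just described. It is only an illustration under my own naming (bkw_step, block), not the implementation from the talk.

```python
# A minimal sketch of one classic BKW reduction step over Z_q, assuming
# samples are pairs (a, b) with a a tuple over Z_q and b = <a, s> + e mod q.
# All names here (bkw_step, block) are illustrative, not from the talk.

def bkw_step(samples, block, q):
    """Zero out the coordinates indexed by `block` by differencing samples.

    Two samples whose a-parts agree on `block` are subtracted; the result
    is another LWE sample with those coordinates equal to zero, and its
    error is a difference of two errors, so its bias is squared
    (the piling-up lemma).
    """
    table = {}  # one stored representative per value of the block coordinates
    out = []
    for a, b in samples:
        key = tuple(a[i] for i in block)
        if key in table:
            a0, b0 = table[key]
            out.append((tuple((x - y) % q for x, y in zip(a, a0)),
                        (b - b0) % q))
        else:
            table[key] = (a, b)
    return out
```

Running this on k disjoint blocks of size n/k needs a table with up to q^(n/k) representatives per step, which is where the q^(n/k) complexity above comes from.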
So here, what we remark is that having different block sizes is better, and we search for the block sizes such that all coordinates end up with the same bias; we also add a new idea, maintaining independence between the errors, so that we can prove a faster algorithm. So we partition our n coordinates with k values, the d_i, marking the beginning of each block, and we also have a quantization coefficient q_i for each block.

Here is the algorithm. For each input sample, we extract the coordinates of the current block, multiply them by the quantization coefficient, and round the result; then we look it up in a table. If there is nothing there, we store the sample; if there is something, we output the difference, because the block coordinates of the difference will then be smaller than 1/q_i, and we empty the bucket so that the output samples are independent.

Since any coordinate reduced at step i is of size 1/q_i, we can compute its bias as being around exp(−1/q_i²), and it undergoes k−i subsequent subtractions, so its bias is squared k−i times. We want the final bias of each coordinate to be around a constant, so that the total bias is only almost exponentially small, and so we can take q_i to be around 2^((k−i)/2). Because we want to produce a large number of samples, the number of possible quantized values must stay somewhat below the size of the list at each step, so we can compute the length of block i as about 2 log m / (k−i), where m is the size of the list, and we can then deduce m from the parameters. In this case the complexity is linear in the size of the list, up to polynomial factors, and the final result is that the running time is 2^(n/(2 log log β)), which is subexponential even for small β.
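As a concrete illustration, here is a short Python sketch of the quantized reduction step described above; samples are assumed to live on the torus [0, 1) after division by the modulus, and the names (quantized_step, qi) are mine.

```python
import math

def centered(x):
    """Representative of x modulo 1 taken in [-0.5, 0.5)."""
    return (x + 0.5) % 1.0 - 0.5

def quantized_step(samples, block, qi):
    """Reduce the coordinates indexed by `block` to size below 1/qi.

    Each block coordinate is multiplied by the quantization coefficient qi
    and rounded down; two samples falling in the same bucket differ by less
    than 1/qi on every block coordinate. The bucket is emptied once used,
    which keeps the errors of the output samples independent, as in the talk.
    """
    table = {}
    out = []
    for a, b in samples:
        key = tuple(math.floor(a[i] * qi) for i in block)
        if key in table:
            a0, b0 = table.pop(key)  # empty the bucket for independence
            out.append((tuple(centered(x - y) for x, y in zip(a, a0)),
                        (b - b0) % 1.0))
        else:
            table[key] = (a, b)
    return out
```

With q_i around 2^((k−i)/2), a coordinate reduced at step i survives the k−i remaining squarings of its bias with a roughly constant bias, matching the analysis above.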
So we have implemented the algorithm for binary LWE with many samples: for dimension 128, the previous estimate was a running time of around 2^74, but we ran the attack in 13 hours with far fewer samples. Still, even if our algorithm is asymptotically fast, in practice lattice reduction is much faster for this kind of parameters.

Now we can consider another variant of this problem, where we have only n samples instead of an unlimited number, but binary noise. The way we solve this variant is to take small linear combinations of the samples, which lets us generate a large number of new samples, and we can prove that this works.

Also, for the NTRU cryptosystem, if we want to recover the key, we need to find the secret s in the key equation over the ring Z_q[x]/(x^n − 1). We can view this as binary LWE with n samples and binary noise, because both s and e are binary in this setting and we only know a: the equation is a vector-matrix product with a circulant matrix, and if we forget the structure, it is exactly the problem we described. So, heuristically, it also works for NTRU.

Now, the subset sum problem; it is a very old problem. We are given a vector of integers, uniform between 0 and M, together with its scalar product with some binary vector, and we want to recover this binary vector. We define the density d as n over log M. For small density, it was known that lattice reduction algorithms are efficient, but if the density is not too small, it is more efficient to reduce the problem to binary LWE with β = 2^(1/d), and therefore, using the previous algorithm, you can solve it in subexponential time for small d. One cryptosystem was based on the subset sum problem, and the density it used was 1/log n, so this gives a fast algorithm against it.
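To see why this is subexponential, here is a toy Python calculator for the predicted exponent using only the formulas from the talk (density d = n/log M, β = 2^(1/d), running time about 2^(n/(2 log log β))); the function name and the exact constants are my own reading, and the computation only makes sense for densities below 1.

```python
import math

def predicted_log2_time(n, M):
    """Exponent t such that the attack runs in time about 2**t (toy estimate)."""
    d = n / math.log2(M)   # subset-sum density, assumed < 1 here
    log2_beta = 1.0 / d    # beta = 2^(1/d) after the reduction to binary LWE
    return n / (2 * math.log2(log2_beta))

# Density about 1/log2(n), as in the cryptosystem mentioned above:
n = 128
M = 2 ** (n * 7)                  # log2(M) = 7n, so d = 1/7 = 1/log2(128)
print(predicted_log2_time(n, M))  # about n / (2 log2 log2 n), here roughly 22.8
```

For density 1/log n, the exponent behaves like n/(2 log log n), which is the subexponential behaviour claimed above.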
So finally, we know that LWE is proven hard, which means that if we solve LWE, then we can solve a lot of lattice problems, and by modifying the known reduction we obtain the following variants of lattice problems; ours is the only algorithm known which can solve them efficiently.

The first one is BDD (bounded distance decoding): we are given a basis of a lattice and a point which is close to the lattice, and in the usual problem we have to find the closest lattice point; in our variant, we also know that the coefficient vector is small. Here you can see that the smallness is expressed in ℓ2 norm, because our algorithm works for binary LWE, but it can also be modified for a secret which is small in ℓ2 norm. There are also equivalents of all the other problems in this setting. The unique-SVP problem is: there is a short vector in the lattice which is much shorter than everything else, and the goal is to find this short vector; if we add the condition that it can be expressed as a small linear combination of a known basis, then this is our variant, and we can also solve it efficiently. And finally it is the same for GapSVP: this problem is to distinguish between having only large vectors and having one small vector, and we modify it so that the small vector is also a small linear combination of a known basis. For all these problems, the complexity that our algorithm gives is 2 to the n over two times a complicated quantity, but as soon as the length of s is much smaller than β, this gives a subexponential algorithm, and this was not known before: we only had lattice reduction algorithms to solve these kinds of problems, which take exponential time, for now.

Okay, so there are a few open problems on this subject. The first one is: is it possible to combine BKW and lattice reduction? Lattice reduction is very fast in practice, while BKW is fast only asymptotically; also, BKW is not very useful if the noise is very small compared to the modulus, where lattice reduction is much faster, so it would be a great thing to combine the two. Another problem is: what if the secret is very small, in particular sparse? This was used in several cryptosystems, and here the best algorithm we know is essentially brute force, so we should be able to do better. Also, do we need the error to be independent from the vectorial part of the samples? This is required in the analysis, but every time we test it in practice, everything is fine; in particular, this would allow solving the learning with rounding problem with the same complexity as LWE, and we do not know how to prove this yet. And finally, a problem which is related to the previous one: what is the best way to run BKW with a very small number of samples? We know that taking linear combinations works, but at least for now we need to take quite large linear combinations, which means the noise in our samples will be very large; we know that in practice it works with much smaller noise, so it would be great to prove things with much smaller noise. Any questions?