So let's start right off with the definition. LPN stands for Learning Parity with Noise, and it's a mathematical problem stated as follows. Imagine you have an oracle that gives you samples of the form (a_i, <a_i, s> + e_i): a random vector a_i together with the scalar product of a_i and a secret vector s, plus an error bit e_i that is one with probability tau, where tau is smaller than one half. The goal, of course, is to find the secret vector s.

You can draw as many samples as you want from this oracle, and if you draw a fixed number of them, you can write all the a_i as the rows of a matrix A and collect the error bits e_i in a vector e. Then it's equivalent to solve the equation As + e = b for s, with A and b known.

Solving this problem is easy whenever we know e, because then we can simply solve that equation for s: we know A and b already, so knowing e as well makes recovering s easy. This is the case, for example, when the noise rate tau is zero, because then the error vector is always zero and we can solve for s directly. It's also the case when tau is nonzero but very small, something like 1/k, because then e will be roughly a unit vector, so we can brute-force all k unit vectors and find the correct s in at most k steps. And it's a common belief that solving the problem for larger tau, essentially larger than 1/k, is not doable in polynomial time anymore. So these two cases are doable in polynomial time, but larger noise rates aren't, especially constant tau.
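To make this setup concrete before we get to algorithms, here is a minimal sketch of such an LPN oracle in Python with numpy. The function names and toy parameters are my own, not from the talk; treat it as an illustration of the definition above, not a reference implementation.

```python
import numpy as np

def lpn_oracle(s, tau, rng):
    """One LPN sample (a, <a, s> + e mod 2), error bit e = 1 with probability tau."""
    a = rng.integers(0, 2, size=len(s))   # uniformly random vector over GF(2)
    e = int(rng.random() < tau)           # noisy bit
    return a, (int(a @ s) + e) % 2

def draw_instance(s, tau, n, rng):
    """Collect n samples into matrix form A s + e = b, with the a_i as rows of A."""
    samples = [lpn_oracle(s, tau, rng) for _ in range(n)]
    A = np.array([a for a, _ in samples])
    b = np.array([b for _, b in samples])
    return A, b

rng = np.random.default_rng(1)
secret = rng.integers(0, 2, size=16)      # toy secret, k = 16
A, b = draw_instance(secret, tau=0.125, n=32, rng=rng)
```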
Okay, so what can we do with the LPN problem? We can, for example, build authentication protocols like the HB protocol, which uses a constant noise rate tau of something like 1/4 or 1/8. And we can also do encryption, for example the scheme published by Alekhnovich, which uses a diminishing noise rate of tau = 1/sqrt(k).

And how can we solve the problem? The most famous, state-of-the-art algorithm is the BKW algorithm by Blum, Kalai and Wasserman, and it needs time, memory and sample complexity 2^(k / log k). So it's slightly subexponential, and for constant tau it's also the fastest algorithm known at the moment. But it has several drawbacks. The first: even if you plug in the very small noise rate 1/k, where we've seen the problem should be easy, the BKW algorithm still needs slightly subexponential time instead of polynomial time. That shows the algorithm has a very bad dependency on tau, and that you shouldn't blindly use it for any given LPN instance.

The second drawback is the high sample complexity. It's also slightly subexponential, and it prevents us from building a quantum version of the algorithm: the LPN oracle that hands out the samples is a classical one, and if the algorithm already needs 2^(k / log k) samples, then quantum tricks can't help us. That's dissatisfying, because the LPN problem is a candidate for building post-quantum secure crypto on, so we also want to know the fastest quantum algorithms for solving it, which is not possible here.

The last drawback is the huge memory consumption, again slightly subexponential, which means we essentially can't solve any large LPN instance with this algorithm. Practical experiments only exist for small k, around 100, while the most interesting instances start at, let's say, k = 512. So there's a huge gap, and that's bad: to estimate the concrete hardness of an LPN instance we want to extrapolate from experiments, but if we only have experiments with very small parameters, the extrapolations become very inaccurate.

Okay, we've talked a lot about BKW, so let's also check, roughly, how it works. Here's the oracle that gives us the LPN samples, and at first we draw a lot of them, 2^(k / log k). The a_i form the rows of a matrix, and it's a random matrix because the a_i are random. Now we try to combine vectors in such a way that we create zeros in the last coordinates. We lose some samples along the way, but that's okay, and we continue until at the end we have LPN samples where a_i is not random anymore but is, say, the first unit vector. Why does that help? If a_i is the first unit vector, then in the second component of the sample, the scalar product of a_i and s is just the first bit of the secret we're searching for, plus the error bit. Most of the time that error bit is zero anyway, so by a majority vote we can read off the first bit of the secret from the second components of the samples. We've seen the running time of this algorithm before, but with a more careful analysis you can also put tau into the running time, and then you see that even if you plug in tau = 1/k, the running time is still 2^((1/2) k / log k), so not polynomial.

To tackle these drawbacks of the BKW algorithm, let's look at another very simple and well-known algorithm that we'll call Gauss. It does the following: just get k samples from the oracle, so that we have a square matrix A, and assume that no error happened, that is, all error bits e_i are zero. Then this line solves for the correct s and the algorithm terminates. But usually errors will of course happen; if you draw k samples, the probability is high that some error occurred. So we have to repeat this process until we find the correct s. Maybe some of you wonder how we can check that a candidate really is the correct s, but there's a very simple and efficient statistical hypothesis test for that. As I said, the algorithm terminates whenever we draw k samples without any error, where all error bits e_i are zero, and this happens with probability (1 - tau)^k, because one bit is zero with probability 1 - tau and we want all k bits to be zero.
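Here's a minimal sketch of that Gauss loop, reusing lpn_oracle and draw_instance from the block above. The GF(2) elimination routine and the threshold in the hypothesis test (halfway between tau and 1/2) are my own choices, not from the talk.

```python
import numpy as np

def solve_gf2(A, b):
    """Solve the square system A x = b over GF(2) by Gaussian elimination.
    Returns None if A is singular (then Gauss simply redraws samples)."""
    A, b = A.copy(), b.copy()
    k = A.shape[0]
    for col in range(k):
        piv = next((r for r in range(col, k) if A[r, col]), None)
        if piv is None:
            return None
        A[[col, piv]] = A[[piv, col]]
        b[[col, piv]] = b[[piv, col]]
        for r in range(k):
            if r != col and A[r, col]:
                A[r] ^= A[col]
                b[r] ^= b[col]
    return b  # A is now the identity, so b holds the solution x

def looks_correct(cand, test_A, test_b, tau):
    """Hypothesis test: for the true secret the observed error rate is about tau,
    for a wrong candidate it is about 1/2."""
    err_rate = np.mean((test_A @ cand + test_b) % 2)
    return err_rate < (tau + 0.5) / 2

def gauss(oracle, k, tau, test_A, test_b):
    """Repeat: draw k fresh samples, hope all k error bits were zero, solve, test.
    Expected number of iterations: (1 - tau)^(-k)."""
    while True:
        A, b = zip(*(oracle() for _ in range(k)))
        cand = solve_gf2(np.array(A), np.array(b))
        if cand is not None and looks_correct(cand, test_A, test_b, tau):
            return cand

# Example with the sketches above (toy size, so (1 - tau)^(-k) is tiny):
# oracle = lambda: lpn_oracle(secret, 0.125, rng)
# tA, tb = draw_instance(secret, 0.125, 200, rng)
# assert np.array_equal(gauss(oracle, 16, 0.125, tA, tb), secret)
```

With toy parameters like k = 16 and tau = 1/8, the expected number of iterations is only about 9; for realistic k it is astronomically large, which is exactly the talk's point.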
So the expected running time of this algorithm is one over that probability, and we get the following theorem: the LPN problem can be solved in the time we've just seen, (1 - tau)^(-k), with essentially the same number of samples and polynomial memory. We use the soft-O notation here, which suppresses polynomial factors; the memory is actually something like O(k^2). And if you do the math, you can see that for diminishing tau, something like 1/sqrt(k), you can write the running time in a nicer way, as e^(tau k). Here you see the very good dependency on tau: plug in tau = 1/k, for example, and the running time becomes soft-O of 1, so polynomial. That's good. We've also seen that the memory is just polynomial, also good. But one problem remains: there's no quantum version, because of the high sample complexity. Time and sample complexity are both (1 - tau)^(-k), and if we use that many samples already, even quantum tricks can't get the time below that bound.

What we did to fix this is the following. We're wasting too many samples, so essentially we recycle samples. First we build a pool of around k^2 samples from the oracle, and then we run the same algorithm as before, but instead of getting k fresh samples in each iteration, we take them from the pool we created in advance. The rest is the same: take k samples from the pool, try to solve for s, and repeat until we find the correct s. It looks like this: here's the oracle, we build the pool first, and from there we sample. Here some errors happened, so we won't find the correct s; errors happened again; and at some point no errors happen, all the chosen error bits are zero, and we solve for the correct s. The good thing about this approach is that the running time stays the same as before, still (1 - tau)^(-k), but now we use only a polynomial amount of memory and a polynomial number of samples.

If you look into this in more detail, and for the people who know coding theory: this algorithm, which we call Pooled Gauss, is essentially Prange's decoding algorithm. So you can of course also use more advanced decoding algorithms, like MMT or BJMM for the people who know them, which we will also see later. That's one advantage: it extends easily. And now we can also do a quantum version, because we only need a polynomial number of samples, so nothing prevents us from doing one, and we can actually get it by applying a simple Grover search. For the people who know Grover, it doesn't come as a surprise that the quantum running time is the square root of the classical one.
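A sketch of Pooled Gauss under the same assumptions, reusing solve_gf2 and looks_correct from the previous block. The pool size of about k^2 follows the talk; the remaining constants are mine.

```python
import numpy as np

def pooled_gauss(oracle, k, tau, rng, pool_size=None):
    """Pooled Gauss: query the oracle only ~k^2 times up front, then repeatedly
    pick k pool samples and hope their error bits are all zero."""
    pool_size = pool_size or k * k
    samples = [oracle() for _ in range(pool_size)]
    A_pool = np.array([a for a, _ in samples])
    b_pool = np.array([b for _, b in samples])
    # Reserve part of the pool for the hypothesis test.
    test_A, test_b = A_pool[: 4 * k], b_pool[: 4 * k]
    while True:
        idx = rng.choice(pool_size, size=k, replace=False)
        cand = solve_gf2(A_pool[idx], b_pool[idx])
        if cand is not None and looks_correct(cand, test_A, test_b, tau):
            return cand
```

The quantum version mentioned in the talk would run Grover search over the choice of the k-sample subset, which is possible precisely because the pool, and hence the sample complexity, is only polynomial.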
Okay, at this point we have an algorithm that uses some amount of time but nearly no memory and samples, and now we tried to find some trade-offs, time-memory or time-sample trade-offs. One way to do it is via dimension reduction. Imagine we want to solve this LPN problem, the equation As + e = b, and imagine that the matrix A ends with a few zero columns, that is, all the vectors a_i have zeros at the end. If we had this, we would in fact have a reduced LPN instance, because if you calculate it out, you get an A' (the same A' as here) times a shorter secret: the zeros at the end mean those secret coordinates drop out, so we have a dimension-reduced LPN instance.

So in the following we try to create samples that end with a lot of zeros, and that's what we call the hybrid approach. The first step is reducing the dimension; the second step is running some decoding algorithm, whichever you want, for example the one we have already seen, or a more advanced one.

One easy way to reduce the dimension is the following: just query the oracle and keep only the samples that already end with zeros. We want zeros at the end, so query and throw away everything that doesn't end in only zeros. Here, this a_i doesn't end with zeros, so we throw it away; the second one ends with zeros and goes into the pool, which is a good pool now because its vectors already end with zeros; this one gets thrown away again, and this one goes into the pool. Once we have this good pool where every vector ends with a few zeros, we can start decoding, like we did before. And if you balance these two steps of the algorithm, the pool generation and the decoding, and use the simple Gauss method from before (this is Prange's decoding again), you get the following theorem: LPN can be solved in faster time. It's again (1 - tau)^(-k/c), but now for some c > 1, so we lower the exponent. And we can also do a quantum version of this, which again beats the quantum version we saw before: instead of k/2 in the exponent we get k/(2 + epsilon) for some epsilon > 0.

And we don't stop here, because if you remember the BKW algorithm, it essentially also did a kind of dimension reduction: it combined samples to create zeros. So we can combine the BKW dimension reduction with the dimension reduction we have just seen. How does it look? Here's the oracle again; we query samples and first fill the pool by rejecting samples, so the pool already has some zeros created just by sampling. From this point we run the BKW reduction: we combine samples, losing a few in every iteration, but creating even more zeros. So we get an even better pool with even more zeros, a bigger dimension reduction, and from there we do the decoding again.
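Here is a sketch of the two reduction steps just described, in the same toy style: zero_suffix_pool is the rejection-sampling pool generation, bkw_step one round of BKW-type combination. Both names and the bucketing details are my own; the talk only describes the ideas.

```python
import numpy as np

def zero_suffix_pool(oracle, z, n_needed):
    """Dimension reduction by rejection: keep only samples whose vector ends in
    z zeros, and drop that suffix. A sample survives with probability 2^(-z),
    so we expect about n_needed * 2^z oracle queries."""
    pool = []
    while len(pool) < n_needed:
        a, b = oracle()
        if not a[-z:].any():
            pool.append((a[:-z], b))   # the last z secret bits drop out of <a, s>
    return pool

def bkw_step(pool, z):
    """One BKW-style reduction step: bucket samples by their last z coordinates
    and XOR matching pairs, zeroing (and dropping) z more coordinates. Price:
    the combined error bit is e_i XOR e_j, so the noise grows."""
    buckets, out = {}, []
    for a, b in pool:
        key = tuple(a[-z:])
        if key in buckets:
            a2, b2 = buckets.pop(key)
            out.append(((a ^ a2)[:-z], b ^ b2))
        else:
            buckets[key] = (a, b)
    return out
```

The reduced samples can then be fed into Pooled Gauss (or MMT/BJMM) on the shorter secret; the dropped secret coordinates can be recovered afterwards, for example by substituting the found prefix back into fresh samples.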
And what does this buy us? We've heard about the decoding approach, which uses a low amount of memory but an exponential amount of time, and BKW, which uses less time but much more memory. The hybrid approach we have just seen interpolates between these two algorithms. And that's good, because if you want to do actual experiments, if you really want to solve LPN, you will usually be in a situation where you can't use BKW anyway. Say you want to solve an instance with dimension k = 300: you're somewhere in between. You can't use BKW, but you also have too much memory for plain decoding, so if you would just decode, you would waste memory. With this hybrid, you can use all the memory you have and get a better running time than decoding, and unlike BKW, you can actually run the algorithm.

So let's see some numbers to show this. First, here's a table with a lot of LPN instances, with various k from 256 to 1280 and the diminishing noise rate tau = 1/sqrt(k). The table shows running time exponents including polynomial factors, and these running times are with restricted memory; that's very important, we restricted the memory to 2^80 bits. You see that for these instances with diminishing tau, decoding is actually the best you can do. You could also use BKW, but it would be slower than the decoding approach; as discussed before, it has a bad dependency on tau. And the hybrid, as you can see here, has the same running time as decoding, because for these instances it simply collapses to the decoding approach, which it is supposed to do, since that's the best approach here. You can of course also use the quantum version and get lower running times, as one would expect. The table also shows that even k = 1280 is not enough, not even for classical 80-bit security; you have to go a bit higher, to around 2000, to get 80 bits of security.

Let's check another table. Here we have a constant noise rate of 1/4, the k varies again, the memory restriction is the same, and again these are the times with polynomial factors included. Now we see something different. For the small instance k = 256, BKW is actually the best we can do: it's better than decoding and it's still applicable, and the hybrid collapses to plain BKW because that's the best it can do. Starting from k = 384 it gets interesting, because now we can't apply the BKW algorithm anymore. We can still apply decoding, because decoding uses only a polynomial amount of memory, so very little. But if we have 2^80 of space anyway, we can use all of it to build an algorithm that's faster than plain decoding, and that's the hybrid: here we get an exponent of 88, and it's still executable and faster than decoding. The same holds for the next instance. And at some point the memory can't be put to good use anymore, and the hybrid just collapses back to decoding; that's what the optimization suggests. And of course, using quantum tricks like Grover search, we can again lower the running times.
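To get a feeling for the numbers in such tables, here is a small calculator for the leading exponents only. Note that the talk's table values include polynomial factors, so they come out higher than these; this is just the asymptotic skeleton.

```python
import math

def exponents(k, tau):
    """Leading log2 running-time exponents, ignoring the polynomial factors
    that the numbers in the talk's tables include."""
    decode = -k * math.log2(1 - tau)   # Gauss / Prange decoding: (1 - tau)^(-k)
    quantum = decode / 2               # Grover version: square root of classical
    bkw = k / math.log2(k)             # BKW: 2^(k / log k)
    return decode, quantum, bkw

for k in (256, 512, 1280):
    d, q, b = exponents(k, tau=1 / math.sqrt(k))   # diminishing noise rate
    print(f"k = {k:4}: decode 2^{d:.0f}, quantum 2^{q:.0f}, BKW 2^{b:.0f}")
```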
Okay, these tables are nice, but actual experiments are always better, so we also implemented all of this. We had a server, a PC with 2^41 bits of RAM. For the instances we wanted to solve, the BKW algorithm was not applicable, of course, otherwise we would probably have used it. So we used decoding instead, decoding inside this hybrid approach, and we decoded via a more advanced decoding algorithm, MMT, introduced by May, Meurer and Thomae at Asiacrypt 2011.

What we actually solved were the following instances. Here, with k = 534 and diminishing tau = 1/sqrt(k), we could solve the instance in approximately five days, and the pool generation took 2.5 days. That's the method where we just draw samples and keep only the ones that end with zeros; that's what we mean by pool generation here. Then we used the more advanced MMT decoding approach.

You can also see that we solved an instance with a bigger error rate, 1/8; here we could only solve it for a smaller k, in 15 days. And with an even bigger error tau, we could solve instances up to k = 135. At first we did that without using extra memory, so without any BKW dimension reduction, and we could solve it in about two weeks. But in that case we only used something like 2^30 of RAM, I think, so we still had some left. And when we put it to use, we could do some BKW reduction steps and solve the instance in less than one week. So here you see the advantage of this hybrid approach.

And if you want to extrapolate: the instance k = 512 with tau = 1/8 is a famous one, the cryptographers' favorite LPN instance. If you extrapolate from the instance we solved here, with the same error rate 1/8, then on our PC this would take something like 2^36 years. That's why we assume this instance is still secure, even classically.

Okay, so what have we learned? Decoding is a good approach for small tau; it allows quantum speedups and it's memory efficient, which means you can actually run it. This algorithm also gave rise to the hybrid algorithm, the mix between, for example, decoding and BKW, which makes things faster and is very good for experiments. And it allowed us to be the first to solve medium-size LPN instances, like the k = 534 instance we have just seen, which was not possible before. Okay, thank you for your attention. Do we have any questions?