Thank you for the introduction. Hi, everyone. In this talk, I will define a new learning with errors problem which is at least as hard as exponentially many Polynomial-LWE instances and still allows public-key encryption.

First, I want to recall some basics of lattice-based cryptography. Lattice-based cryptography relies in great part on two problems: Short Integer Solution (SIS), introduced by Ajtai in '96, and Learning With Errors (LWE), introduced by Regev. They both enjoy reductions from worst-case lattice problems, such as finding a short non-zero vector in a lattice. They also enable a lot of cryptographic applications, such as one-way functions, public-key encryption, and fully homomorphic encryption. However, the efficiency is not so good: all the cryptographic constructions built on them have two major drawbacks, large keys and slow computations.

In order to improve the efficiency, two new problems have been introduced: Polynomial-SIS and Polynomial-LWE. They both make use of the algebraic structure of the ring Z_q[x]/f in order to increase efficiency. They also enjoy reductions from some lattice problems, such as approx-SVP. However, these lattice problems are now restricted to a special class of lattices, the ideal lattices, that is, lattices which correspond to ideals in number fields. One may wonder whether approx-SVP restricted to this special class of lattices is as hard as approx-SVP for general lattices. In fact, it is not: some recent works show that approx-SVP could be easy for some polynomials f. There is a quantum polynomial-time algorithm by Biasse and Song to find a generator of a principal ideal in any number field. Later on, Cramer and his co-authors studied the case of cyclotomics of prime-power index, and they gave a quantum polynomial-time algorithm to find a short generator of a principal ideal.
So they can even find a short generator, not only some generator. And by short, I mean short up to roughly a 2^√n approximation factor. Later on, they showed that we can solve approx-SVP for all ideals, not necessarily principal ones, up to the same approximation factor. Some progress has also been made for the case of multiquadratics: Bernstein and his co-authors showed that we can find a short generator of a principal ideal in a multiquadratic number field.

Here you can see a comparison between the hardness of approx-SVP for arbitrary lattices and for ideal lattices in cyclotomic fields of prime-power index. As you can see, the best algorithm for finding a short vector in arbitrary lattices, for an approximation factor of 2^√n, takes 2^√n time, while the best algorithm for finding such a short vector in this special kind of lattices, the ideal lattices, takes only polynomial time.

So the question we should ask is: what should be done? We have evidence that approx-SVP could be easy for some polynomials. One solution would be to start looking at other kinds of polynomials, such as the polynomials suggested in NTRU Prime. Another solution would be to start using problems which are provably at least as hard as Polynomial-SIS or Polynomial-LWE for a wide class of polynomials. In this talk, I'll focus on this second solution.

The first step in this direction was taken by Lyubashevsky. He proposed a new Polynomial-SIS problem, closely related to the old one, and he showed that if we can solve this new problem, then we can solve many old Polynomial-SIS instances. Just to recall: for Polynomial-SIS, we are given k polynomials a_i in Z_q[x]/f, and we are asked to find a short non-trivial solution to the equation sum of a_i times z_i equals 0 modulo f.
For the new Polynomial-SIS problem, you are also asked to find a short non-trivial solution, but now you are not asked to reduce this equation modulo f. This is the only difference between the two problems. The main result of his paper is that the old Polynomial-SIS reduces to the new Polynomial-SIS for any polynomial of bounded degree.

So the question now is: do we have an analogue of this new Polynomial-SIS for the LWE case? The answer is yes. In this paper, we introduce MP-LWE by making use of the middle product of two polynomials, and we give a reduction from decision Polynomial-LWE to decision MP-LWE for an exponential class of polynomials f.

Now I want to explain how we compute the middle product of two polynomials. Suppose we have two polynomials, a of degree less than n, and b of degree less than 2n-1. In order to compute the middle product, we just compute the usual product; this is a polynomial of degree less than 3n-2. We take the n middle coefficients and form out of them a polynomial of degree less than n. In fact, we can take any number of middle coefficients, not necessarily n. You can visualize this better using matrices: the middle product of a and b corresponds to the multiplication of a Toeplitz matrix related to a with the reversed vector of coefficients of b.

Using the middle product, we can now define a new distribution, similar to the Polynomial-LWE distribution. For the Polynomial-LWE distribution, we have a secret s, which is a polynomial in Z_q[x]/f. A sample from this distribution is a pair of polynomials (a, b), where a is uniformly random in Z_q[x]/f, and b is a times s plus some error, modulo f. Similarly, we can define the middle-product distribution, where we replace the usual product by the middle product. We can now define the middle-product problem.
So the middle-product problem asks you to distinguish, with non-negligible probability over the choice of s, between the distribution I have described and the uniform distribution. It is defined similarly to the Polynomial-LWE problem.

The main result of this paper is a reduction: we show that Polynomial-LWE reduces to this new problem, MP-LWE, for any monic polynomial f of degree n whose constant coefficient is invertible modulo q and whose expansion factor is bounded. You can see here that there is an increase in the error, proportional to the dimension n and to the bound on the expansion factor.

I will now try to give some intuition on this reduction, just the main idea. What we have to do is map a sample from the Polynomial-LWE instance to a sample of the MP-LWE distribution: we want to map uniform samples to uniform samples, and Polynomial-LWE samples to middle-product LWE samples. As you may know, the second component of a Polynomial-LWE sample can be written in this way, where Rot_f(b) is the matrix whose line number i is the polynomial b times x^i modulo f; this is the usual rotation matrix used in crypto. Now take just the first column of Rot_f(b) = Rot_f(a) Rot_f(s) + Rot_f(e). The first column of Rot_f(b) can be written as the product of a matrix M_f, which is closely related to the polynomial f, with the coefficient vector of the polynomial b. Then we can decompose Rot_f(a) as the product of a Toeplitz matrix corresponding to a and a matrix that depends only on f. The new secret will be this product of matrices applied to the old one, and the new error will be this one. As you can see, the new first component may not be uniformly random, but we can easily randomize it. And, as you can see, this error may depend somehow on f.
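As a hedged formalization of the two distributions just compared (using the degree bounds from this talk; the paper's exact parameterization may differ slightly):

```latex
% Polynomial-LWE distribution over Z_q[x]/f, for a secret s in Z_q[x]/f:
%   sample (a, b) with a uniform in Z_q[x]/f and b = a*s + e mod f.
% MP-LWE distribution, for a secret s of degree < 2n-1:
%   sample (a, b) with a uniform of degree < n and b = (a MP s) + e,
%   where MP is the middle product and e is a small error.
\[
  \mathrm{PLWE}^{(f)}(s):\ (a,\; a\cdot s + e \bmod f),
  \qquad
  \mathrm{MPLWE}(s):\ (a,\; a \odot s + e).
\]
\[
  \text{Decision MP-LWE: distinguish } \mathrm{MPLWE}(s)
  \text{ from uniform, with non-negligible advantage over the choice of } s.
\]
```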
So, in order to finish the proof, we have to somehow remove the dependency of the error on the polynomial f.

Now I will present an application of middle-product LWE. This is a cryptosystem, a public-key encryption scheme inspired by Regev's encryption scheme. We take q an odd integer. For the key generation, the secret key is a uniformly generated polynomial of degree less than 2n-1. For the public key, we generate many pairs of polynomials: a_i is chosen uniformly at random, and b_i is the middle product of a_i and s, plus 2 times some error. In order to prove security, we need log q such samples.

Suppose we want to encrypt a binary message. We generate some uniformly random elements which are also binary, and the ciphertext corresponding to the message is a pair of polynomials, where the message is embedded only in the second component. To decrypt, we subtract c_1 middle-product s from c_2, reduce modulo q, and then modulo 2.

So this is the encryption scheme. Correctness relies on an associativity-like property of the middle product, and we can recover the message as soon as the error here is small enough. For the security: first, we have to replace the public key with a truly uniform one, and then we can just rely on the MP-LWE assumption. In order to do this, we need the element 2 to be invertible modulo q. For the second part of the security proof, we should prove that the view of the adversary doesn't reveal anything about the secret message; that is, we want to show that the distribution of these elements is statistically close to this distribution. Contrary to the original Regev scheme, this element here is not uniform anymore.
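The scheme above can be sketched as a toy Python program. All concrete parameters here are my own illustrative choices, made only so that the middle-product degree bookkeeping lines up: t = 4 public-key samples instead of log q, binary randomness polynomials of degree less than 3, and noise coefficients in {-1, 0, 1}. The real scheme's parameters and noise distribution differ; this is a sketch of the mechanism, not the paper's implementation.

```python
import random

def mp(a, b, d):
    """Width-d middle product: the d middle coefficients of a*b
    (requires len(a) + len(b) - 1 - d to be even)."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] += ai * bj
    k = (len(prod) - d) // 2
    return prod[k : k + d]

def keygen(n, q, t):
    # Secret key: uniform polynomial of degree < 2n-1.
    s = [random.randrange(q) for _ in range(2 * n - 1)]
    pk = []
    for _ in range(t):
        a = [random.randrange(q) for _ in range(n)]        # uniform, degree < n
        e = [random.randint(-1, 1) for _ in range(n + 2)]  # small toy noise
        # b_i = (a_i MP s) + 2*e_i, width n+2 so the degrees match later.
        b = [(x + 2 * y) % q for x, y in zip(mp(a, s, n + 2), e)]
        pk.append((a, b))
    return pk, s

def encrypt(pk, m, n, q):
    # m: list of n message bits; the message sits only in c2.
    c1 = [0] * (n + 2)
    c2 = list(m)
    for a, b in pk:
        r = [random.randint(0, 1) for _ in range(3)]       # binary randomness
        for i, ri in enumerate(r):                         # c1 += r * a (plain product)
            for j, aj in enumerate(a):
                c1[i + j] = (c1[i + j] + ri * aj) % q
        c2 = [(x + y) % q for x, y in zip(c2, mp(r, b, n))]
    return c1, c2

def decrypt(c1, c2, s, n, q):
    # c2 - (c1 MP s) = m + 2 * (small error)  (mod q), by the
    # associativity-like property  r MP (a MP s) = (r*a) MP s.
    d = [(x - y) % q for x, y in zip(c2, mp(c1, s, n))]
    # Lift to centered representatives, then reduce modulo 2.
    return [(x if x <= q // 2 else x - q) % 2 for x in d]

n, q, t = 4, 257, 4                  # q odd, error comfortably below q/2
pk, s = keygen(n, q, t)
m = [1, 0, 1, 1]
c1, c2 = encrypt(pk, m, n, q)
print(decrypt(c1, c2, s, n, q))      # recovers [1, 0, 1, 1]
```

The decryption step is exactly the talk's recipe: subtract the middle product of c1 and s, reduce modulo q to a centered representative, then modulo 2; it succeeds because the leftover term m + 2*(error) has coefficients of magnitude below q/2.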
You can see this because, for example, if all the elements a_i have constant coefficient equal to 0, then the constant coefficient of this polynomial will also be 0; and all the polynomials a_i can have constant coefficient equal to 0 with noticeable probability for our choice of parameters. So this element here is not uniform anymore. Still, since the message is only embedded in the second component of the ciphertext, we can still show, using the generalized leftover hash lemma, that these two distributions are statistically close.

As open problems: one idea would be to build other cryptographic primitives using this middle-product problem. Another would be to better understand the link between Polynomial-LWE and Ring-LWE. We know that they are the same only in the case of cyclotomics, for example. If we had a reduction from Ring-LWE to Polynomial-LWE, then we could base the hardness of the new problem I have introduced on the hardness of Ring-LWE. And we know there is a recent result of Peikert and his co-authors which says that if we can solve decision Ring-LWE, then we can solve BDD, and this result works for every polynomial f, not only for cyclotomics. Another open problem would be to give an algebraic interpretation, let's say, of the matrix equations we have played with. Thank you.