Hi everyone, I'm Miruna Roşca and I'm going to present our paper MPSign, a signature from small-secret middle-product learning with errors. This is joint work with Shi Bai, Dipayan Das, Ryo Hiromasa, Amin Sakzad, Damien Stehlé, Ron Steinfeld and Zhenfei Zhang. The goal of my talk is to describe a digital signature scheme whose security, in the quantum random-oracle model, relies on the hardness of solving the approximate shortest vector problem (ApproxSVP) in lattices that depend on a defining polynomial F. The main ingredient of our construction is a reduction from polynomial learning with errors (PLWE), for an arbitrary defining polynomial F among exponentially many, to a variant of middle-product learning with errors (MP-LWE) that allows for secrets which are small compared to the working modulus.

Here is the plan of my talk. First, I will recall some background on digital signatures and lattice-based crypto. A digital signature is a protocol between two parties, a signer Alice and a verifier Bob. Alice holds a secret key and publishes a corresponding public key. When Alice sends a message to Bob, she uses her secret key to also send a signature of the message. Using the public key, Bob can verify that the message originated from Alice and was not modified in transit. We say that a digital signature is correct if any properly generated signature passes verification with high probability over the random coins used by the signing and verification algorithms.

The security notion we are interested in in this paper is the following. We say that a digital signature is unforgeable against chosen-message attacks in the quantum random-oracle model if no adversary, having quantum access to the random oracle and classical access to the signing oracle, is able to produce a signature for a new message.

In lattice-based cryptography, switching from short integer solutions (SIS) and learning with errors (LWE) to their polynomial variants, polynomial SIS (PSIS) and polynomial learning with errors (PLWE), has efficiency advantages.
On the security front, PSIS and PLWE also enjoy reductions from worst-case lattice problems such as ApproxSVP. But in their case, ApproxSVP is restricted to a class of lattices that correspond to ideals of Z[x] modulo F. We call these lattices ideal lattices. In a series of works, it has been shown that for certain polynomials F, like cyclotomics or multiquadratics, and for certain ideal lattices, the corresponding ApproxSVP problem is easier than the general one.

To mitigate the risk of choosing a polynomial F for which the corresponding ApproxSVP problem is easy, Lyubashevsky introduced a variant of PSIS, PSIS over Zq[x], which does not depend on any F and which is at least as hard as PSIS for exponentially many Fs. He showed that this problem can also be used in crypto: he built a digital signature scheme whose security proof relies on the hardness of PSIS over Zq[x]. Later on, we managed to describe a PLWE analogue. We introduced middle-product learning with errors (MP-LWE) and we proved that this new problem is at least as hard as PLWE for many Fs. Since its introduction, MP-LWE and different variants of it have been used to build public-key encryption and identity-based encryption.

Now I want to recall the definitions of PLWE and MP-LWE. PLWE is defined using a polynomial F of degree n and is based on a specific distribution. For a fixed secret s in Zq[x] modulo F, this distribution outputs pairs of polynomials (a, b), where a is uniformly random in Zq[x] modulo F and b is a times s plus e modulo F, where e is drawn from an error distribution. The PLWE problem asks you to distinguish between the above distribution and the uniform one, with non-negligible probability over the choice of the secret s. The middle-product learning with errors problem is defined similarly.
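Concretely, the PLWE distribution just described can be sketched in toy Python like this. This is an illustrative sketch only: the parameters are tiny, a ternary error stands in for the actual error distribution, and f = x^n + 1 is just one possible choice of defining polynomial.

```python
# Toy sketch of the PLWE distribution: a sample is (a, b) with
# b = a*s + e mod (f, q), for a fixed secret s and a small error e.
# All parameters are illustrative, not from the paper.
import random

q = 97                            # toy working modulus
n = 8                             # degree of the defining polynomial f
f = [1] + [0] * (n - 1) + [1]     # f = x^n + 1, lowest degree first

def poly_mulmod(a, b, f, q):
    """Multiply coefficient lists a, b and reduce modulo the monic f and modulo q."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % q
    # schoolbook reduction: eliminate terms of degree >= n using the monic f
    for k in range(len(prod) - 1, n - 1, -1):
        c = prod[k]
        if c:
            for t in range(n + 1):
                prod[k - n + t] = (prod[k - n + t] - c * f[t]) % q
    return prod[:n]

def plwe_sample(s, f, q):
    """One PLWE sample (a, b) for the fixed secret s."""
    a = [random.randrange(q) for _ in range(n)]        # uniform a in Zq[x]/f
    e = [random.choice([-1, 0, 1]) for _ in range(n)]  # toy small error
    return a, [(c + ei) % q for c, ei in zip(poly_mulmod(a, s, f, q), e)]
```

The MP-LWE analogue replaces the reduction modulo f by the middle product, as described next.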
Still, here there is no polynomial F involved. The problem is based on a distribution which, for a fixed polynomial s, outputs pairs (a, b), where a is uniformly random among polynomials of degree less than n, and b is computed as a middle product s, plus e. I recall that the middle product of degree d of a and s is computed in the following way: you take a and s, you multiply them, you get a new polynomial, you extract the middle d coefficients, and then you build out of them a new polynomial of degree less than d. The problem asks you to distinguish between this distribution I have just described and the uniform one, with non-negligible probability over the choice of s.

The main result in RSSS was a reduction from PLWE to MP-LWE, where the error was sampled from a continuous Gaussian and the secrets were sampled uniformly in the corresponding sets. One natural way to back the hardness of MP-LWE with small secrets would be to fill this gap here: to give a reduction from MP-LWE with uniform secrets to MP-LWE with small secrets, similar to the reduction which works in the corresponding PLWE case. Unfortunately, the trick which works in the PLWE case no longer works in the MP-LWE case, because the middle product operation is not invertible. Instead, in this work, we take this path here, and we manage to reduce PLWE with small secrets to MP-LWE with small secrets.

So here is the summary of both RSSS and this work. In this work, the secrets and the error come from discrete distributions which produce small elements with high probability. And as you can see, in both cases the reductions involve some parameter losses, but I will not go into the details now. However, what I want to discuss now is the proof idea in RSSS. Suppose you start with a PLWE sample (a, b), where b equals a times s plus some error e.
This sample can be rewritten in terms of matrices like this, using the Rot_F matrix. Then you take the first column on both sides. You rewrite this first column using the special matrix M_F, which depends on F. Now you can split the Rot_F matrix of a into a product of a Toeplitz matrix which depends on a and the Rot_F matrix of 1. Now you multiply the Rot_F matrix of 1 by M_F and by s, and you rename this product s'. What you get in the end is b', which can be written as the product of a Toeplitz matrix associated to a and the secret vector s', plus some error term e'. But the product of a Toeplitz matrix with a vector corresponds exactly to a middle product operation. So what you get in the end is that b' is actually a middle product s', plus e'. What you have to do now is just re-randomize the secret and error in order to make them independent of the polynomial F.

The small-secret PLWE to small-secret MP-LWE reduction proof follows the same blueprint. Still, in this case there is an extra technical difficulty that I'm going to describe now. Remember that both the secret and the error get distorted by the matrix M_F. When we re-randomize them in this case, we have to approximate the sum of two discrete Gaussians by a new one. For this, we need a lower bound on the smallest singular value of the matrix M_F. In this paper, we describe an exponentially large family of polynomials F for which we manage to lower bound the smallest singular value of the corresponding matrix M_F. This family is a little bit smaller than the one in RSSS, but it is still exponentially large. For the polynomials in this family, the matrix M_F has a very nice form: it consists of four blocks, two diagonal blocks, one upper triangular block and one zero block. Compared to RSSS, in this new reduction the noise amplification is a little bit larger, and the parameter alpha becomes dependent on the family.
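To make the key identity concrete, here is a toy Python sketch of the middle product and of its Toeplitz-matrix view: multiplying the Toeplitz matrix built from a by the coefficient vector of s gives exactly the d middle coefficients of the product a times s. The conventions (a with n coefficients, s with n + d - 1 coefficients) follow the standard MP-LWE setup; the code itself is only illustrative.

```python
def middle_product(a, s, n, d):
    """a has n coefficients, s has n + d - 1 coefficients (lowest degree first);
    return the d middle coefficients of a*s, those of degrees n-1, ..., n+d-2."""
    prod = [0] * (len(a) + len(s) - 1)
    for i, ai in enumerate(a):
        for j, sj in enumerate(s):
            prod[i + j] += ai * sj
    return prod[n - 1 : n - 1 + d]

def toeplitz_apply(a, s, n, d):
    """The same value, computed as (d x (n+d-1) Toeplitz matrix of a) * s:
    the matrix-vector identity used in the PLWE-to-MP-LWE reduction."""
    out = []
    for i in range(d):
        acc = 0
        for m, sm in enumerate(s):
            k = n - 1 + i - m          # which coefficient of a lands here
            if 0 <= k < n:
                acc += a[k] * sm
        out.append(acc)
    return out
```

For example, with a = 1 + 2x + 3x^2 (n = 3) and s = 4 + 5x + 6x^2 + 7x^3 (d = 2), both functions return the coefficients of degrees 2 and 3 of the product.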
We go now to the main contribution of this paper, our digital signature based on MP-LWE. Actually, we first build an identification scheme based on MP-LWE, and then we upgrade this identification scheme to a digital signature scheme using the Fiat-Shamir transform.

An identification scheme is a three-round protocol between a prover Alice and a verifier Bob that allows Alice to prove her identity to Bob. The prover uses her secret key to compute an initial message W; we call this message a commitment. She sends it to the verifier. In response, the verifier sends a challenge c, uniformly chosen from a set defined by the prover's verification key. Then the prover computes a response and sends it back to the verifier. At the end, the verifier takes a decision: he accepts or he rejects. An identification scheme is secure if no adversary having access to multiple transcripts is able to fool the verifier.

In our MP-LWE identification scheme, the secret key consists of a pair of a small secret s and a small error e, and the verification key consists of an MP-LWE sample computed using s and e. So the verification key is (a, b), where b is a middle product s, plus e. Alice chooses small elements y1 and y2 and sends a commitment W to Bob. Bob answers back with a challenge, and in response Alice computes two elements z1 and z2. If they don't reveal any information about her secret key (s, e), she decides to send z1 and z2 to Bob as the response. Otherwise, Alice just aborts the protocol. When Bob wants to check the identity of Alice, he has to check these two properties here.

When we upgrade the identification scheme to a digital signature using Fiat-Shamir, the signer acts as a prover running the identification protocol by herself. That is, when Alice wants to sign a message M, she generates y1 and y2 and creates W as a middle product y1, plus y2.
She generates a challenge c by applying a hash function to W concatenated with M. And in order to make the signature components z1 and z2 independent of the secret key, she performs some rejection sampling before outputting the final signature. When Bob wants to verify the signature, he computes W and checks these two conditions here. The correctness proof of the protocol uses the associativity property of the middle product.

I have to mention that a digital signature similar to the one I have just described was proposed two years ago. Still, their security proof is flawed, because they incorrectly assume that a middle product y is uniform for fixed y and uniform a. We managed to prove that our digital signature scheme is tightly secure in the quantum random-oracle model, using a result from Eurocrypt 2018.

Here are some sample parameters for our scheme, for different quantum security levels. They are chosen according to the best known attacks, with the Core-SVP hardness methodology. In the first column, the parameters satisfy both the classical and quantum level-1 security requirements. Compared to Lyubashevsky's scheme at the same security level, we managed to shorten the size of the signature by a factor of two, and the size of the secret key by a factor of eleven, at the cost of doubling the size of the public key. Moreover, our security proof is tight, while the security proof of Lyubashevsky's scheme is not.

As an additional contribution, in our paper we give an efficient key-recovery attack on Lyubashevsky's digital signature when it is instantiated with a secret key with very small coefficients. As a consequence, you cannot decrease the size of the secret key in Lyubashevsky's signature scheme too much if you want to improve it.

Here is a summary of my talk. In our paper, we proved hardness of MP-LWE with small secrets. We built a digital signature scheme whose security in the quantum random-oracle model is based on it.
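The Fiat-Shamir signing and verification flow described a moment ago can be sketched in toy Python as follows. This is a simplified sketch under loud assumptions: the challenge here is a small integer rather than the small polynomial the actual scheme uses, so only linearity of the middle product is needed instead of its associativity, and all parameter values, bounds, and the hash construction are illustrative, not the paper's.

```python
# Toy Lyubashevsky-style flow over the middle product (illustrative only).
import hashlib
import random

n, d, q = 16, 8, 12289     # toy parameters
BOUND = 350                # toy rejection bound on response coefficients

def mp(a, s):
    """Middle product: the d middle coefficients of a*s, reduced mod q."""
    prod = [0] * (len(a) + len(s) - 1)
    for i, ai in enumerate(a):
        for j, sj in enumerate(s):
            prod[i + j] += ai * sj
    return [c % q for c in prod[n - 1 : n - 1 + d]]

def H(w, msg):
    """Toy random oracle: a small integer challenge derived from (w, msg)."""
    h = hashlib.sha256((",".join(map(str, w)) + "|" + msg).encode()).digest()
    return 1 + h[0] % 16

def keygen():
    a = [random.randrange(q) for _ in range(n)]
    s = [random.choice([-1, 0, 1]) for _ in range(n + d - 1)]  # small secret
    e = [random.choice([-1, 0, 1]) for _ in range(d)]          # small error
    b = [(x + y) % q for x, y in zip(mp(a, s), e)]             # MP-LWE sample
    return (s, e), (a, b)

def sign(sk, vk, msg):
    s, e = sk
    a, b = vk
    while True:                                   # rejection sampling loop
        y1 = [random.randrange(-300, 301) for _ in range(n + d - 1)]
        y2 = [random.randrange(-300, 301) for _ in range(d)]
        w = [(x + y) % q for x, y in zip(mp(a, y1), y2)]   # commitment
        c = H(w, msg)                                       # challenge
        z1 = [yi + c * si for yi, si in zip(y1, s)]
        z2 = [yi + c * ei for yi, ei in zip(y2, e)]
        if all(abs(z) <= BOUND for z in z1 + z2):  # only release z if it hides s, e
            return c, z1, z2

def verify(vk, msg, sig):
    a, b = vk
    c, z1, z2 = sig
    if not all(abs(z) <= BOUND for z in z1 + z2):
        return False
    # by linearity, mp(a, z1) + z2 = w + c*b, so recover w and recheck the hash
    w = [(x + y - c * bi) % q for x, y, bi in zip(mp(a, z1), z2, b)]
    return H(w, msg) == c
```

With these toy ranges, the rejection bound is essentially never exceeded; in the real scheme, the bound is tight, so the rejection loop genuinely repeats with noticeable probability.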
We provide concrete parameters for our scheme, and we also provide a proof-of-concept implementation in Sage, which is available online. Thank you for your attention.