Okay, thank you very much for the introduction. To give you a quick overview of our results: what we do is break a new additively homomorphic encryption scheme that was introduced by Cheon, Lee, and Seo at ACM CCS 2014. We give three slightly different attacks that recover various amounts of information; they are based on orthogonal lattices or Coppersmith techniques, depending on the attack. We defeat both the symmetric and the public-key variants of the scheme. The scheme is based on a new assumption called Co-ACD, and we also break more general instances of that assumption, more general than those actually used in the construction of the scheme itself.

Most of the attacks are heuristic. The first one you could probably prove, but we didn't bother; the other ones are genuinely heuristic. But in practice, the authors propose concrete parameters, and for those parameters it's really easy to break everything. And it's easy to see that the attacks keep working up to very, very large parameters, which makes the scheme much less interesting than was originally thought. You may notice that Cheon, Lee, and Seo are also authors on papers that attacked the CLT multilinear maps; that's totally unrelated to the fact that we are attacking their paper.

Okay, so additively homomorphic encryption is a quite useful primitive: it's used in many protocols, e-voting, PIR, computation outsourcing, and so on. And we don't have so many constructions of it. We have some, of course, Paillier and a few others, but most of them are not so efficient, so having more efficient options would be very nice. The authors, Cheon, Lee, and Seo, observe that you can construct it very easily from the approximate common divisor (ACD) problem, but you need huge parameters to make that problem hard, so the result is not very efficient.
Their idea is: could we tweak the ACD problem in such a way that you still get a simple additively homomorphic encryption scheme, but with much better parameters, and hence interesting efficiency?

So here is first the ACD assumption. Basically, you take some integer x0 which is the product of moderately large primes p1, ..., pn and some large number q0, and you ask whether you can distinguish a number built by the CRT, which has small reductions modulo each prime pi, from a number which is completely random. That's the decision version of the ACD assumption.

If this problem is hard, then it's easy to construct additively homomorphic encryption from it. The factorization of the number x0 is the secret key. You fix some constant Q, and to encrypt messages m1, ..., mn, each less than Q, you output the number c which is congruent to mi + Q·ei modulo each pi, where the ei are small random values, plus some randomness modulo the remaining factor to complete it. If the assumption above holds, this c is indistinguishable from random without the secret key, so the scheme is IND-CPA secure. And to decrypt, to recover mi, you just reduce c modulo pi and then modulo Q.

Okay, so Co-ACD. It's kind of a dual, though I can't make that statement very precise; it looks like the reverse of the ACD assumption. Instead of combining elements with the CRT, you separate them. You fix some primes p1, ..., pn and a constant Q, and the assumption says that the vector (e·Q mod p1, ..., e·Q mod pn), for relatively small randomness e, is indistinguishable from a vector of uniform elements less than p1, ..., less than pn, if you don't know the primes pi themselves.
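The ACD-based scheme above can be sketched in a few lines. This is our own toy code, not from the paper: all names and parameter sizes are ours, and the parameters are artificially tiny (real security would require moduli of millions of bits).

```python
import math
import random

# Toy sketch of the additively homomorphic scheme based on the ACD
# assumption. Decryption of slot i works as long as m_i + Q*e_i < p_i.

def is_prime(n):
    """Deterministic Miller-Rabin, valid for the small sizes used here."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    while True:
        p = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_prime(p):
            return p

def crt(residues, moduli):
    """Chinese remainder theorem: the unique x mod prod(moduli)."""
    M = math.prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

Q = 2 ** 16  # message modulus: each message slot is < Q

def keygen(n_slots=3, p_bits=64, q0_bits=64):
    primes = []
    while len(primes) < n_slots:
        p = random_prime(p_bits)
        if p not in primes:
            primes.append(p)
    q0 = random_prime(q0_bits)
    while q0 in primes:  # moduli must be coprime for the CRT
        q0 = random_prime(q0_bits)
    # secret key: the primes; public modulus x0 = q0 * prod(p_i)
    return primes, q0, q0 * math.prod(primes)

def encrypt(primes, q0, msgs, noise_bits=8):
    # c is congruent to m_i + Q*e_i mod each p_i, with small random e_i,
    # and to a random value mod the cofactor q0 to complete it.
    residues = [m + Q * random.randrange(1, 2 ** noise_bits) for m in msgs]
    return crt(residues + [random.randrange(q0)], primes + [q0])

def decrypt(primes, c):
    return [(c % p) % Q for p in primes]
```

Adding two ciphertexts modulo x0 adds the messages slot-wise modulo Q, as long as the accumulated noise mi + Q(ei + ei') stays below pi.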
Of course, the randomness e needs to be larger than each of the pi for this to make sense, but not too large either: larger than the pi, but less than their product, or something like this.

Then you similarly get a homomorphic encryption scheme. To encrypt a message m less than Q, you add it to e·Q and give out the vector (m + e·Q mod p1, m + e·Q mod p2, ..., m + e·Q mod pn). To decrypt, you map the vector back to an element modulo the product using the Chinese remainder theorem, reduce modulo Q, and retrieve m.

Those schemes are similar in many aspects. For example, both are symmetric-key schemes that you can convert to public-key schemes by giving out, as the public key, the product of the pi together with many encryptions of zero, so that you can sample randomness.

The ACD-based scheme, as I said, is not very efficient, because for security you need ciphertexts that are very large, like millions of bits. But what Cheon et al. claim is that the Co-ACD version should be secure with much smaller parameters: Q is like 256 bits, and there are two primes of about 1,500 bits each. For such a choice of parameters, CLS encryption becomes quite efficient, much faster than Paillier. Unfortunately, it's not that secure.

What I'll describe is two of our three attacks. For the first one, for simplicity, I only consider the case of two primes, which is the case considered in the original paper; if you read our paper, we generalize to more primes, which also breaks the more general assumption. The first attack says that if you give me a few known plaintexts, then I can decrypt everything. So this breaks the one-wayness of the encryption. I consider the symmetric version, but of course it also breaks the public-key version, and it breaks the decision version of the Co-ACD problem. It's based on orthogonal lattices.
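For concreteness, here is a toy sketch of the Co-ACD-based scheme in the two-prime case. Again, this is our own illustrative code: the primes (two Mersenne primes) and all sizes are far too small to mean anything cryptographically, and the randomness range is narrowed so a couple of homomorphic additions stay decryptable.

```python
import random

# Toy sketch of CLS encryption: a ciphertext is the vector
# ((m + e*Q) mod p1, (m + e*Q) mod p2), decrypted via the CRT.

p1, p2 = 2 ** 61 - 1, 2 ** 89 - 1  # the secret primes (Mersenne, for the demo)
N = p1 * p2
Q = 2 ** 16                        # messages are < Q

def encrypt(m):
    # e must exceed each p_i, while m + e*Q stays well below N = p1*p2
    # (the /4 slack leaves room for homomorphic additions).
    e = random.randrange(p2 + 1, N // (4 * Q))
    return (m + e * Q) % p1, (m + e * Q) % p2

def decrypt(c):
    # CRT back to (m + e*Q) mod N, then reduce mod Q.
    c1, c2 = c
    x = (c2 + p2 * (((c1 - c2) * pow(p2, -1, p1)) % p1)) % N
    return x % Q

def add(ca, cb):
    # Homomorphic addition: componentwise, modulo each prime.
    return (ca[0] + cb[0]) % p1, (ca[1] + cb[1]) % p2
```

Decryption is exact because m + e·Q is below N, so the CRT recovers it over the integers and reducing mod Q strips the randomness.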
So how does the attack look? Say I have t ciphertexts c1, ..., ct corresponding to messages m1, ..., mt, and I know all of those messages except the first one, and I want to recover m1. Each ciphertext is a pair: m + e·Q reduced mod p1 and mod p2. I collect the first components, reduced mod p1, into a vector C1, and the components reduced mod p2 into a vector C2; so I have those expressions written as vectors of t components.

Then I introduce u, a short integer vector orthogonal to the difference C1 − C2. The vector I'll be interested in is v = m + e·Q − C1. Componentwise, v is a multiple of p1, and modulo p2 it is congruent to C2 − C1. This means that if I take the scalar product of u and v, it is divisible by p1, because v is already divisible by p1 componentwise, and also by p2, because modulo p2 the scalar product is the orthogonality relation, zero, plus a multiple of p2. So ⟨u, v⟩ is divisible by the product N = p1·p2.

But the entries of v are much smaller than N, so if u is short enough, ⟨u, v⟩ must actually be zero over the integers. In particular, since v = m − C1 + e·Q, reducing modulo Q gives that the scalar product ⟨u, m − C1⟩ is divisible by Q. And recall that the components of m are less than Q. Now, I know u, because I can find short vectors in lattices; I know C1; and I know all the components of m except the first. So I get a nice linear relation modulo Q in the single unknown component of m, and I can recover it, at least if a certain gcd condition is satisfied; otherwise I can try another u or something like that. And you can compute the success condition in terms of the dimension of the lattice: it's pretty easy, and you find that a sufficiently short vector exists as soon as a simple condition on t is verified.
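The divisibility identity at the heart of this attack is easy to check numerically. The toy code below (ours, with tiny parameters) builds two Co-ACD ciphertexts with known randomness and uses an arbitrary integer vector u orthogonal to C1 − C2; any such u already gives ⟨u, v⟩ divisible by p1·p2. The actual attack additionally needs u short, via lattice reduction (e.g. LLL, not implemented here), so that ⟨u, v⟩ vanishes over the integers and yields the relation ⟨u, m − C1⟩ ≡ 0 mod Q.

```python
import math
import random

# Check: for v = m + Q*e - C1 and any u orthogonal to C1 - C2,
# the scalar product <u, v> is divisible by p1*p2.

p1, p2 = 2 ** 61 - 1, 2 ** 89 - 1
Q = 2 ** 16
N = p1 * p2

random.seed(3)
msgs = [1111, 2222]                                  # t = 2 ciphertexts
es = [random.randrange(p2 + 1, N // Q) for _ in msgs]
C1 = [(m + e * Q) % p1 for m, e in zip(msgs, es)]
C2 = [(m + e * Q) % p2 for m, e in zip(msgs, es)]

# Any integer u orthogonal to d = C1 - C2 (here not a *short* one):
d = [a - b for a, b in zip(C1, C2)]
g = math.gcd(d[0], d[1])
u = (-d[1] // g, d[0] // g)

# v is a componentwise multiple of p1 and congruent to C2 - C1 mod p2,
# so <u, v> is divisible by both p1 and p2.
v = [m + e * Q - c for m, e, c in zip(msgs, es, C1)]
dot_uv = sum(a * b for a, b in zip(u, v))
```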
For the parameters proposed by CLS (they propose several sets, all for 128-bit security), the attack always works with t = 4, so three known plaintexts, and I can then decrypt any ciphertext with 100% probability; the attack runs in a few milliseconds.

You could of course try to pick parameters that resist this. As the parameters grow, the lattice dimension becomes very large, and you have a loss in what you can expect from lattice reduction, so if you pick the primes very large you can ensure that it's infeasible to find a short enough vector in the lattice. But for this you need the primes to be at least something like 400,000 bits, which of course defeats the efficiency purpose of the scheme.

The second attack, which I won't describe in detail, is slightly different: it doesn't require any known plaintext at all, but it only lets you decrypt small messages. If you give me a vector of ciphertexts encrypting small messages, I can decrypt all of them without any known plaintext. It uses doubly orthogonal lattices this time, so it's very heuristic, but it works very well in practice, and you can generalize it to larger n and so on.

And the last attack targets the public-key variant of CLS, or the search version of the Co-ACD assumption. In the public-key variant of CLS, the public key is the product N = p1·p2 together with some encryptions of zero. What we show is that this data, just the public key, is enough to recover both primes p1 and p2, that is, to factor N. So you get a full key recovery from the public key alone, and as I said, it breaks the search variant of Co-ACD; you can generalize it to larger n as well. It uses a combination of lattice reduction together with Coppersmith techniques.

It works like this. What you have is t encryptions of zero, your goal is to recover p1 and p2, and you know N.
So we do a similar thing as before: we put the first components, those reduced mod p1, into a vector C1, and those reduced mod p2 into a vector C2. These are respectively the reductions mod p1 and mod p2 of e·Q; there is no message term, because the messages are zero. If you write the CRT relation, what you find is that e·Q is congruent to (C1 − C2)·p̂1 + C2 mod N, where p̂1 is not p1 but the first CRT coefficient, congruent to 1 mod p1 and 0 mod p2. So e is a linear combination mod N of C1 − C2 and a known vector derived from C2.

So we introduce a lattice of dimension t + 1, generated by the rows (C1 − C2, 0) and (Q⁻¹·C2 mod N, 1), together with N times each of the first t unit vectors; in those first t components, it's a sublattice of N·Zᵗ. It contains two unusually short vectors: v1 = (C1 − C2, 0), and a vector v2 involving e, since as I said e is a linear combination of C1 − C2 and that known vector. What you can show is that for t large enough, those two vectors are expected to be much shorter than all other independent vectors in the lattice. So if you do a good job at lattice reduction, the first two vectors x1, x2 of your reduced basis should be such that x1 is v1, maybe up to sign, and v2 is x2 plus a small multiple α of x1.

And that's where Coppersmith comes in. Look at the first ciphertext: its first component satisfies that c₁,₁ − Q·e1 is a multiple of p1, and its second component satisfies that c₁,₂ − Q·e1 is a multiple of p2, so the product (c₁,₁ − Q·e1)(c₁,₂ − Q·e1) is a multiple of N. Writing e1 in terms of x1, x2 and the unknown α, this coefficient α, which is small, is a small root of a polynomial modulo N for which I know all of the coefficients.
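Two pieces of this attack can be checked numerically without implementing the lattice reduction or Coppersmith steps: the CRT relation e·Q ≡ (C1 − C2)·p̂1 + C2 (mod N), and the final factoring step gcd(c₁,₁ − Q·e1, N) = p1 once e is known. In the toy code below (ours, tiny parameters), we cheat by keeping the randomness e that the real attack recovers from the public key alone.

```python
import math
import random

# Verify the CRT identity and the gcd factoring step behind the
# public-key attack, on self-generated encryptions of zero.

p1, p2 = 2 ** 61 - 1, 2 ** 89 - 1
N = p1 * p2
Q = 2 ** 16

random.seed(4)
es = [random.randrange(p2 + 1, N // Q) for _ in range(3)]  # t = 3
C1 = [(e * Q) % p1 for e in es]   # encryptions of zero, first slot
C2 = [(e * Q) % p2 for e in es]   # second slot

# p1hat: the first CRT coefficient, congruent to 1 mod p1 and 0 mod p2.
p1hat = p2 * pow(p2, -1, p1)
```

Once e is recovered, the gcd of c₁,₁ − Q·e1 with N exposes p1, and p2 = N / p1 follows.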
And Coppersmith's theorem tells me that I can find the small roots of a known polynomial modulo an integer of unknown factorization. So I can find α, and as a result the vector e, and then I can factor N by computing the gcd of c₁,₁ − e1·Q and N.

You can check the lattice dimension needed for this to work; that's the condition, and for the parameters in the original paper you find that t = 3 is enough, so this breaks essentially all instances of the public-key scheme. In practice, running the attack takes less than half a second, with a 100% success rate, with a naive implementation. And you can extend it to larger n; in that case you need a variant of Coppersmith's theorem due to Alexander May. It's also quite efficient, though depending on the parameters it's useful to rely on an improvement of Coppersmith's computation due to Bi et al. at PKC last year.

Okay, so to conclude: what we did is break CLS encryption, and the Co-ACD problems are pretty much broken as well. You can probably still pick safe parameter choices, at least to resist these attacks, but they are too large to be of much practical interest. Actually, this paper came after another one, with Tancrède, where we broke a more naive additively homomorphic scheme used for PIR; those authors also claimed a very efficient additively homomorphic scheme, but unfortunately it was very insecure. So it seems that constructing efficient additively homomorphic encryption is a pretty hard problem. But I'd like to say that this shouldn't keep us from looking for new interesting assumptions, which make for interesting cryptography. We actually don't have so many papers these days that introduce new assumptions, and especially that give concrete security parameters for them and give work to the cryptanalysts. Thank you very much.