So it makes sense to add and multiply the messages. What we want from a fully homomorphic scheme is that there's some operation you can do on ciphertexts, which we call Add, such that when you apply it and then decrypt, you get the same thing as decrypting the ciphertexts and then adding them, okay? And similarly for multiplication. So once you've got this, you can evaluate any arithmetic circuit over a ring. So once you've got this, you can do anything, you can do six impossible things before breakfast and we're all brilliant, okay? So that would be very good. Okay, so let's look at Gentry's scheme from 50,000 feet, and I'm going to get lower. So what Gentry has is what's called a somewhat homomorphic scheme, which can evaluate circuits in a given set. So imagine you can evaluate circuits down to some bounded depth, but you can't do any more, okay?
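To make the add-and-multiply property concrete, here is a toy Python sketch of a somewhat homomorphic symmetric scheme over the integers. All the names and parameters are my own illustrative choices, not Gentry's actual construction: the secret key is an odd integer p, a bit m is hidden as m + 2r + pq, and adding or multiplying ciphertexts adds or multiplies the underlying bits.

```python
import random

def keygen():
    # secret key: a large odd integer p (toy size only)
    return random.randrange(10**8, 10**9) * 2 + 1

def encrypt(p, m, noise=10):
    # ciphertext = m + 2r + p*q: the message sits in the low bit,
    # 2r is the randomness ("dirt"), p*q hides everything without p
    r = random.randrange(-noise, noise + 1)
    q = random.randrange(1, 10**6)
    return m + 2 * r + p * q

def decrypt(p, c):
    # centred reduction mod p strips off p*q, then mod 2 strips the dirt
    t = c % p
    if t > p // 2:
        t -= p
    return t % 2

p = keygen()
c1, c2 = encrypt(p, 1), encrypt(p, 0)
# adding ciphertexts adds the messages (XOR), multiplying multiplies them (AND)
assert decrypt(p, c1 + c2) == 1   # 1 XOR 0
assert decrypt(p, c1 * c2) == 0   # 1 AND 0
```

Decrypting a sum or product works exactly as long as the accumulated noise stays below p/2, which is the "bounded depth" restriction mentioned above.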
What actually happens is that every time you multiply ciphertexts together, each ciphertext has a bit of randomness attached to it, which we call dirt, and when you multiply them together they get more dirty, yeah? So the more operations you do, the more mucky the ciphertexts become, and eventually they become so mucky there's no message left in them, okay? Now, what Gentry's scheme has is that it's what's called bootstrappable, and that means the set of circuits it can evaluate contains its own decryption circuit, okay? So if it contains its own decryption circuit plus a little bit more, which allows you to do something useful, for large enough values of the security parameter (we'll come back to that) then you can bootstrap it and then you can do anything, okay? So, to allow you all to fall asleep, which I think somebody has started to do already, the real problem with Gentry's construction, and with ours, from a practical perspective, is that the security parameter has to be astronomically large to enable bootstrappability to occur, and the appendix of the paper goes into that in more detail. So now you know you can't do it, you can all fall asleep and I'll just witter on for the next twenty-odd minutes, okay? So we're going to go down to 10,000 feet, okay? So what Gentry does, and this is where we're going to talk about ideals and then get rid of them, is he has a large representation of an ideal J. So you take some ideal in some number field, which is essentially a lattice; everything's a lattice, okay? So the public key is a large representation of the ideal J, plus a small coprime ideal I. Think of the small coprime ideal I as always being 2, because that's what it's going to be. And then you have the private key as a nice representation of the ideal J, okay?
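The dirt growth is easy to see in the same toy integer setting (again, my own illustrative numbers, not the talk's parameters): each multiplication multiplies the noises, so repeated squaring roughly doubles the bit length of the dirt, and once it exceeds p/2 the message is gone.

```python
# Toy illustration: ciphertext = noise + p*q, where noise = m + 2r carries the message.
# Multiplying two ciphertexts multiplies their noises, so repeated squaring
# doubles the bit length of the dirt each time.
p = 2**61 - 1          # toy secret modulus (odd)
noise = 1 + 2 * 1000   # dirt of one fresh ciphertext: m = 1, r = 1000

depth = 0
while abs(noise) < p // 2:   # decryption works while the dirt fits below p/2
    noise *= noise           # one homomorphic multiplication (squaring)
    depth += 1

print(depth)   # number of squarings before the message drowns in the dirt
```

With these toy sizes the dirt overflows after three squarings, which is exactly the "bounded depth" behaviour of a somewhat homomorphic scheme.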
If you know a bit about number fields: a nice representation of an ideal is usually, if you take a principal ideal, a generator, and a horrible representation is the Hermite normal form of the ideal, so you get that kind of idea. So to encrypt, you take a message in the ring modulo the ideal I, you add on some small random element from the ideal I (that's to make it semantically secure, you know, to randomise the encryption) and reduce it modulo the big ideal J. The stuff you add on from little I has to be large enough to give semantic security, but small enough to enable decryption to work, so you have some sort of Goldilocks dilemma: it's not allowed to be too hot, not allowed to be too cold, too hard or too soft, whatever. So it's got to be just right, and then everything works. And then to decrypt, we use the nice representation of the ideal J. So for the rest of the talk we're going to ignore lattices, well, apart from this slide, and maybe the next one. Anyway, the scheme is homomorphic because if you take a ciphertext, which looks like one of these things, m plus little i, reduced modulo J, then because we're dealing with ideals, if you multiply something in an ideal by something else you end up back in the ideal, so everything kind of cancels. So adding two ciphertexts together gives you something that looks just like an encryption, and multiplying two ciphertexts together gives you something that looks like an encryption. So it's very homomorphic; well, no, it's somewhat homomorphic, okay. However, the muck has got muckier, okay: the muck is this bit of volume we've added on, and you can see that when you multiply, it gets really horrible. This horrible expression is getting bigger and bigger and bigger, and that's where your problem is going to lie.
Okay, so let's go down a bit further. So how do you represent ideals in number fields? An ideal is a Z-module, so we can represent it by a basis (an ideal is a lattice), and elements in the ring we can represent as vectors, or polynomials if you like polynomials. So the public key becomes a matrix B_I representing the basis of the ideal I, and a matrix B_J representing the big basis of the ideal J, and the secret key is a small matrix: not small in terms of dimension, small in terms of the entries in it. Okay, let's go down a thousand feet. So if we look at Gentry's scheme again with these matrices: we take some vector, which is the message in the ring modulo the ideal I, we take some random stuff, multiply it by B_I, and then reduce modulo B_J. And when you take a vector and reduce it modulo a matrix, you essentially perform this operation here, where you take B inverse times the vector, round all the coefficients to the nearest integer, multiply back by B, and subtract. Okay, this is standard stuff in number theory; well, it's kind of a weird way of doing stuff in number theory, but it is standard. Okay, so if you've seen Gentry's scheme before and you got to the bit before it gets really scary, that's how far you've got. So we're going to get there and make it understandable to mortals, and make it implementable, by performing four specialisations. A number of these specialisations are mentioned in Gentry's thesis, so we're not claiming at all that these are magically new or anything; we're just combining them together and going, oh yeah, duh, right. But each one results in a computationally more efficient scheme. In addition, we actually get greater functionality than what Gentry can do.
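The "reduce a vector modulo a matrix" step is the rounding map v goes to v minus B times round(B inverse times v), which lands v in the fundamental region of the lattice spanned by B. A minimal exact sketch for a 2-by-2 basis, with toy numbers of my own:

```python
from fractions import Fraction

def reduce_mod_basis(v, B):
    """Reduce vector v modulo the lattice spanned by the columns of the
    2x2 integer basis B:  v -> v - B * round(B^{-1} v)."""
    (a, b), (c, d) = B                       # B given as two rows
    det = a * d - b * c
    # B^{-1} v, computed exactly with rationals
    x = Fraction(d * v[0] - b * v[1], det)
    y = Fraction(-c * v[0] + a * v[1], det)
    # round each coordinate to the nearest integer
    kx, ky = round(x), round(y)
    # subtract the lattice point B * (kx, ky)
    return (v[0] - (a * kx + b * ky), v[1] - (c * kx + d * ky))

# toy basis: columns (5, 0) and (-2, 1)
B = ((5, -2), (0, 1))
print(reduce_mod_basis((7, 3), B))   # -> (-2, 0)
```

The reduced vector differs from the input by a lattice vector, which is exactly what "reduction modulo the ideal" means in the matrix picture.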
So Gentry's scheme, the way it's described, only allows you to do fully homomorphic encryption on bits, assuming key sizes are astronomically large, whereas we can do it on essentially arbitrary finite fields of characteristic two. So you get something better, if you could have arbitrarily large key sizes. And we reduce the bandwidth as well a bit. Okay. So specialisation one is that we take the ideal J to be principal, with the basis B_J essentially being the principal generator, because that's the natural thing you would do if you were looking at this from a number field point of view, so algebraic number theory. So we do that. What that means is that the matrix B_J, which was an N by N matrix, now gets replaced by a polynomial, i.e. a vector. So we've already reduced the key size a bit. Specialisation two: Gentry suggests taking the ideal I to be the principal ideal generated by two. That's a very good idea; we're going to do that as well. Okay. Specialisation three is that, generally, in computer algebra systems that can deal with number fields, for example Pari, Kant, Magma, blah, blah, blah, you do not normally represent ideals by matrices. The best way of representing a general ideal in a number field is by what's called its two-element representation. Okay. So if you take the two-element representation, the public basis now just becomes two elements in the ring: instead of having an N by N matrix, what you actually have is an integer, which is usually the norm of the ideal, and some monic polynomial of some degree M. This doesn't seem to gain much, since if M is equal to N we have the same thing as the Gentry stuff. But if we choose the ideal J special, okay, so special that we can have M equals one, then essentially the public basis of the big ideal J becomes two integers. Okay.
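For concreteness, here is a tiny sketch of the two-element representation in the M = 1 case, with toy numbers of my own: in Z[x]/(x^2 + 1) the ideal J = (5, x - 2) is a degree-one prime ideal (the norm 5 is the resultant of x^2 + 1 and x - 2), and testing membership is just evaluation at the root.

```python
# Toy two-element representation of a degree-one prime ideal:
# J = (p, x - alpha) in Z[x]/(x^2 + 1), here p = 5, alpha = 2.
p, alpha = 5, 2

def in_ideal(a, b):
    # a + b*x lies in J exactly when evaluating at the root alpha kills it mod p
    return (a + b * alpha) % p == 0

assert in_ideal(5, 0)       # p itself
assert in_ideal(-2, 1)      # x - alpha
assert in_ideal(3, 1)       # 3 + x: 3 + 2 = 5 ≡ 0 (mod 5)
assert not in_ideal(1, 0)   # 1 is not in a proper ideal
```

So instead of an N by N basis matrix, the public data for such an ideal really is just the two integers p and alpha.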
Specialisation four: we select the ideal J so that its norm is a prime P. We do that because it makes things much neater, but we can get rid of that restriction with recent work we haven't yet written up. Then the public key becomes the prime P and the polynomial X minus alpha. Okay. So this is the standard representation of a degree-one prime ideal in the number field. So now we can write the entire scheme down using polynomial arithmetic and reduction modulo P. You might go, well, how do I do the reduction, isn't that inverting matrices and stuff like that? Well, reduction modulo a matrix, as I said, is not the way people in number theory think of reducing elements modulo ideals. The reduction actually becomes very, very clean if you have this representation. Okay, so we're going to do everything in terms of polynomials, or you can think of vectors of coefficients, and because we're dealing with polynomials, we're going to mainly use the infinity norm rather than the two-norm, because the infinity norm seems to make more sense for polynomials. Okay, so we define some balls: there's the infinity-norm ball around the origin, of polynomials of degree N minus one, and the corresponding one with positive coefficients. Okay, so let's look at our scheme. This is key generation, and it's a dog's dinner; it's horrible. Okay, so key generation is the bad bit of the scheme. So the plaintext space is going to be binary polynomials; that's quite standard. Now, I take a monic irreducible polynomial F, which is going to define our number field, and then we repeat this loop. What is this loop doing? It's essentially coming up with a G which represents an element in the number field with prime norm. So this is generating an ideal which has prime norm.
Okay, now, because G has prime norm, we know that there is a common root between G of x and F of x modulo p. So we find this root by taking the GCD of G and F modulo p, and then we find the private key, which is essentially the inverse element: if G represents an element in the number field defined by F, then the polynomial Z, or actually Z over p, represents the inverse element in the field. So key generation is purely: come up with an element in the number field with prime norm, invert it, and find the two-element representation of the corresponding prime ideal; standard algorithms will give you that. Okay, so that's ugly, and you're not going to understand it from the slide, but that's where all the magic is. And that magic allows us to have a rather funky encryption and decryption algorithm. The encryption algorithm is really stupid. What you do is take your message, add on a random polynomial times two, evaluate that entire polynomial at alpha modulo p, and output that integer. Okay? So to encrypt a message, you take a polynomial which is your message, right, you add some stuff on, and you evaluate the polynomial to give you an integer. To decrypt, you take your ciphertext, multiply it by the magic polynomial Z, divide the result by p and round the coefficients to the nearest integer, subtract from c, take the result mod two, and you get the polynomial you started with. So you take a polynomial and convert it to an integer to encrypt, and then to decrypt you convert your integer back to a polynomial. Okay? Which is kind of weird. The reason that works is because of this magic here, but when looked at with the right glasses, and when you make the right specialisations, this is Gentry's scheme. Okay? So there's no difference in any meaningful sense. Okay?
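Putting key generation, encryption, and decryption together: a toy Python sketch at degree N = 2, taking F(x) = x^2 + 1 and G(x) = x - t, with sizes ludicrously smaller than the real scheme's, and using the full polynomial Z in decryption rather than any single-coefficient shortcut. All concrete choices here are my own toy assumptions.

```python
import random

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def round_div(a, p):
    # nearest integer to a / p, correct for negative a (p odd, so no ties)
    return (2 * a + p) // (2 * p)

def keygen():
    # loop until G(x) = x - t has prime norm p = resultant(x^2 + 1, x - t) = t^2 + 1
    while True:
        t = random.randrange(20, 1000)
        p = t * t + 1
        if is_prime(p):
            break
    alpha = t            # the common root of F and G modulo p
    G = (-t, 1)          # pairs are (constant term, x-coefficient): G(x) = x - t
    Z = (-t, -1)         # Z(x) = -x - t satisfies Z*G = -(x^2 - t^2) ≡ p (mod F)
    return (p, alpha), (p, G, Z)

def encrypt(pk, m):
    # m = (m0, m1) is a binary polynomial m0 + m1*x; add 2*(small random poly),
    # then evaluate at alpha modulo p to get a single integer
    p, alpha = pk
    c0 = m[0] + 2 * random.randrange(-2, 3)
    c1 = m[1] + 2 * random.randrange(-2, 3)
    return (c0 + c1 * alpha) % p

def decrypt(sk, c):
    p, G, Z = sk
    # multiply c by the magic polynomial Z, divide by p, round coefficientwise...
    q0 = -round_div(c * Z[0], p)
    q1 = -round_div(c * Z[1], p)
    # ...then C(x) = c + q(x)*G(x) mod (x^2 + 1), and the message is C mod 2
    t0 = q0 * G[0] - q1 * G[1]    # x^2 ≡ -1 (mod F)
    t1 = q0 * G[1] + q1 * G[0]
    return ((c + t0) % 2, t1 % 2)

pk, sk = keygen()
for m in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    assert decrypt(sk, encrypt(pk, m)) == m
```

The round trip works whenever the noise polynomial stays small relative to p, which the ranges above guarantee: a polynomial goes in, an integer comes out, and the integer decrypts back to the polynomial.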
So to add ciphertexts, you just add the numbers; to multiply ciphertexts, you multiply the numbers. Okay? So that's quite nice. A good property of this scheme is that decryption is multiplying an integer by a polynomial. Okay? Integer times polynomial means that every component of the message polynomial we get out at the end is treated independently in that operation, which means we can apply the re-encryption mechanism of Gentry in parallel, and therefore re-encrypt each bit individually, which would enable us to do fully homomorphic encryption, if we have large enough keys, for arbitrary binary polynomials as opposed to just single bits. If you deal with arbitrary lattices, the decryption procedure, which is a matrix times a vector, mixes up all the coefficients, whereas what we've got here is essentially an independent decryption on each component of the message space, which can be done in parallel, and therefore we can do the re-encryption in parallel. So we get much more functionality. This means that if we choose the F properly, we can get fully homomorphic encryption over the field F2 to the N. By clever choice of F, we could get SIMD fully homomorphic encryption: if we had enough fields embedded together, you could do many homomorphic encryptions in parallel, if you so wished and the key sizes were big enough. Okay? And hence you can do any subfield; you can do all sorts of stuff, right? Now, there are some parameters in there which say that the scheme I've got is only somewhat homomorphic without the bootstrap procedure, so you can only cope with circuits up to a certain depth. The bad property is that the depth is quite small. Okay? We work out in the paper that to get a fully homomorphic scheme you need a depth of around 7 or 8. Okay? That means being able to multiply around 256 values together, one after another.
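Adding and multiplying ciphertexts as plain integers can be checked in the same toy setting. Here the key is hardcoded (F(x) = x^2 + 1, G(x) = x - 20, so the prime norm is p = 20^2 + 1 = 401, alpha = 20, Z(x) = -x - 20; all numbers are my own illustrative toys), along with two fixed encryptions:

```python
# Hardcoded toy key: F(x) = x^2 + 1, G(x) = x - 20, p = 401 (prime norm),
# common root alpha = 20, and Z(x) = -x - 20 with Z*G ≡ p (mod F).
p, alpha = 401, 20
G, Z = (-20, 1), (-20, -1)      # pairs are (constant term, x-coefficient)

def round_div(a, p):
    return (2 * a + p) // (2 * p)   # nearest integer to a / p (p odd, no ties)

def decrypt(c):
    q0 = -round_div(c * Z[0], p)
    q1 = -round_div(c * Z[1], p)
    t0 = q0 * G[0] - q1 * G[1]      # q*G reduced mod x^2 + 1
    t1 = q0 * G[1] + q1 * G[0]
    return ((c + t0) % 2, t1 % 2)

# Two fixed encryptions (message + 2*randomness, evaluated at alpha mod p):
# m1 = 1 + x  with C1(x) = 1 + 3x  ->  c1 = 1 + 3*20 = 61
# m2 = x      with C2(x) = 2 + x   ->  c2 = 2 + 20   = 22
c1, c2 = 61, 22

# Homomorphic add: integer addition; (m1 + m2) mod 2 = 1
print(decrypt((c1 + c2) % p))   # -> (1, 0)

# Homomorphic multiply: integer multiplication;
# m1 * m2 = x + x^2 ≡ x - 1 (mod x^2 + 1), i.e. 1 + x over F_2
print(decrypt((c1 * c2) % p))   # -> (1, 1)
```

Note that decryption recovers each coefficient of the message polynomial by an independent computation on the single integer c, which is the componentwise structure that makes parallel re-encryption possible.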
In practice we can only get a depth of about 1.7, which means multiplying about three-and-a-bit values together, okay? And even then your prime P has 92,000 bits. So that's quite big. But at least encryption and decryption are efficient; it just takes hours to compute the keys. We're working on that, okay? Gentry's scheme has got the same problem, but even more so, because encryption and decryption are just more complicated, okay? So again, see the full paper for analysis of this problem. Security: this is what's really cool, because this answers a long-standing problem on the number theory mailing list. A long-standing problem on that list is: can you come up with a cryptographically interesting encryption scheme which is based on a number-theoretically interesting problem? Okay? Factoring is not considered a number-theoretically interesting problem, and nor is DLP really; it's not proper number theory, okay? So this answers that question. So you've got different problems: consider ElGamal. DDH gives semantic security, but computational number theorists don't care about DDH. CDH gives message recovery, but computational number theorists don't care about CDH either. But DLP, that they kind of consider, okay? So we said it's not enough just to give yet another hard problem on which to justify semantic security, because that's what we do with pairing-based crypto, okay? What we're going to do is also relate key recovery to some sort of well-studied problem, which is kind of cute and interesting, or at least to us. Okay? So semantic security is based on what we call the polynomial coset problem. This is just to upset all the people in complexity theory, because PCP means something different there. And it's also related to what Gentry calls, I can't remember, something coset problem. The what one? Ideal coset problem, that's it, yes. So there is a bit of logic behind the name.
So it's basically: if you evaluate a polynomial at a point, can you distinguish that from a random value? That's semantic security, and the proof that semantic security reduces to this problem is trivial. Blah, blah, blah. So it mirrors the ideal coset problem. Okay? You have to get the balls right to make sure it all works, but that was fine. Message recovery: the best known attack we've got on the scheme, though then again we've only looked at applying lattices (so ciphertexts are still kind of related to lattices), is a pretty standard lattice problem. So we have that. Okay, key recovery, this is the more interesting thing. Key recovery is: if I give you the standard number-theoretic representation of a degree-one prime ideal in a number field, can you come up with a small generator of that ideal? That's a proper number theory problem, and this is a proper cryptographic scheme, ish. So they're nicely related, and it'll keep number theorists happy. So that's quite nice. And there are two methods to solve this problem. Method one: there's a subexponential class group algorithm, which you apply to obtain the fundamental units and the class group; then you have to smooth the ideal over the factor base, and then you obtain a generator. So method one is essentially the technology you use in factoring, or, yeah, kind of discrete logs and factoring, applied to class group calculations. And it's unclear whether the last step here could be done in subexponential time, because the fundamental units are often impossible to write down: you have to write them down in some funny representation to be able to write them down in polynomial time, because they would take exponential time to write down for a general field. So this is all kind of conjectural, but certainly it should be subexponential, morally. Okay?
Method two: there's an old method, due to Poekman in his thesis, which is based on the baby-step giant-step method, but this method has exponential complexity in the norm of the ideal. So it's a bit like discrete logs: you've got baby-step giant-step methods, or you've got index calculus, okay? So it's an interesting number theory problem to determine whether key recovery can be done in subexponential time. So if there are any number theorists in the audience, this will justify your research for the next few years. Okay, so in conclusion: we have a variant of Gentry's scheme, which is essentially a specialisation of Gentry's scheme, with smaller ciphertext size and smaller key size. It can be made fully homomorphic over F2 to the N as opposed to F2. Key recovery is related to a well-studied problem in classical computational number theory. You know, well-studied as in 200 years: this problem goes back to Gauss, okay? This is not some problem that cryptographers have come up with, you know, the ABCD pairing problem or whatever. It's a real problem. And it can be described without resorting to lattices. So hopefully this will aid some of you in working on this scheme or other schemes in future, because everything is now understandable to mere mortals. But the take-home message is: it's not a practical fully homomorphic scheme. The question you want to ask, being a politician, is how much work would it be to make it practical? And essentially there are two problems. You can think of Gentry's scheme as a mechanism to clean ciphertexts, and when you multiply ciphertexts they become very dirty. Now, if you can make the stuff get less mucky, and get a better form of detergent, then you can kind of do it. And so there could be some breakthrough tomorrow, or not. But it's marginal. I mean, if you made the dirtiness go down by half the bit length, and you had a small improvement in the cleaning, then it would be practical.
So that's halving the bit length. It doesn't sound too bad, does it? So it's plausible. It might be, for example, that if you pick the polynomial F very carefully, something very weird, you actually do halve the bit length of the dirtiness, because that parameter comes from the polynomial F. So it might be some very obscure choice of the polynomial F which gives you exactly what you want there. And there's been a paper on ePrint in the last few days which simplifies the cleaning process a bit, asymptotically. Combine those together with some other ideas and, you know, it could be possible. But we don't know how to do that yet, practically. But in the paper there are runtimes. So this is real: it's a practical scheme on which you can evaluate bigger circuits than, say, BGN. And with our new scheme you can do eight multiplications in a row, which is a depth of three; that's not in the paper, that's future work. Thank you.