All right, so I'm gonna talk about, well, the title of the talk is One-Shot Verifiable Encryption from Lattices. If you don't care about verifiable encryption, you're probably not the only one. So the talk is really about zero-knowledge proofs, but it has applications to verifiable encryption. Okay, so zero-knowledge proofs are, well, they're important, right? The most basic way to express it is: you have some relation, some function F, and F(S) = T, where S is your secret, F and T are public, and you wanna prove that you have knowledge of S in zero knowledge, right? So for example, discrete log: you have your generator G, and you have G^S = T for some secret S, and you wanna prove that you know S in zero knowledge. So that's what it looks like for discrete log, it's exponentiation. For lattice problems like SIS and LWE, what you wanna do is prove knowledge of a short vector S such that F(S) = T. So here's an example. This is the SIS problem. Here the function F is defined by this matrix A, which is a random matrix, here it's mod 17. So I'm given the matrix A and A times S for some secret S, which is T, and I wanna prove in zero knowledge that I know this vector with small coefficients such that AS = T. That's the SIS problem. The LWE problem is very similar; people think there's a difference, but there actually isn't much of one here. It's just that the matrix has the identity at the end. So you're given some matrix A, where one part of it is random and one part is the identity, and you wanna prove that you know a short vector S such that A times S equals some T, right? So these are the kinds of things you wanna prove in zero knowledge when you're dealing with lattice constructions. So now I wanna move from the integers, before, the matrices were over the integers, to polynomial rings, since it's a little bit more efficient and our results mostly apply to them.
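To make the SIS relation concrete, here's a toy sketch in Python. The dimensions and the modulus 17 match the slide's flavor but are otherwise assumed illustrative values; nothing here is remotely secure.

```python
import random

# Toy SIS instance (tiny illustrative parameters, not secure).
q = 17
n, m = 4, 8
rng = random.Random(0)

A = [[rng.randrange(q) for _ in range(m)] for _ in range(n)]  # public random matrix
s = [rng.choice([-1, 0, 1]) for _ in range(m)]                # secret, small coefficients

def mat_vec(M, v, q):
    """Matrix-vector product mod q."""
    return [sum(a * x for a, x in zip(row, v)) % q for row in M]

t = mat_vec(A, s, q)   # public target: t = A*s mod q

# The prover's claim: "I know a short s such that A*s = t (mod q)".
assert mat_vec(A, s, q) == t
assert max(abs(x) for x in s) <= 1
```

The whole difficulty the talk is about is proving that last line, the shortness of s, in zero knowledge.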
They're more interesting in that framework. So I'm gonna define a polynomial ring. This ring R is just Z_q[x] mod (x^d + 1), a polynomial ring, which is just polynomials with addition modulo q, but also multiplication, where you reduce things mod q and also mod x^d + 1. If you've seen ideal lattices, we always work with rings like that. So here I'm just rewriting the SIS problem over the ring R; now, instead of the entries being integers, they are elements in the ring, they're just polynomials in the ring. So given A and T, I wanna prove to you that I know polynomials with small coefficients such that A times S equals T. Okay, so now, constructing zero-knowledge proofs. For discrete log relations, we know how to do this: it's just the Schnorr protocol, a simple sigma protocol, and it can be made non-interactive via the Fiat-Shamir transformation. For lattice schemes, unfortunately, things become a lot more complicated, and the main obstacle is that we don't just have to prove the algebraic relation F(S) = T; we really have to prove that the secret we know has small length, and this causes sort of all the problems that we have with zero-knowledge proofs and lattices. So there are some ways to go around it, right? This is the Fiat-Shamir with aborts technique. Let's say you have a relation F(S) = T. The way you do it, you use the sigma protocol framework and then use Fiat-Shamir to make it non-interactive, but there are some differences. So first you generate this masking parameter, that's the intuition: Y is a masking parameter you generate from some distribution. Then you compute F(Y), that's your W, and you send that W. So I'm gonna do an interactive protocol first.
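Here's a small sketch of multiplication in a ring like R = Z_q[x]/(x^d + 1): ordinary polynomial multiplication, then reduce using x^d = -1 and coefficients mod q. The parameters q = 17, d = 8 are assumed toy values.

```python
# Multiplication in R = Z_q[x]/(x^d + 1), with polynomials as coefficient lists.
q, d = 17, 8

def ring_mul(a, b, q=q, d=d):
    """Multiply two polynomials of degree < d in Z_q[x]/(x^d + 1)."""
    res = [0] * d
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < d:
                res[i + j] = (res[i + j] + ai * bj) % q
            else:  # x^(i+j) = -x^(i+j-d), because x^d = -1 in this ring
                res[i + j - d] = (res[i + j - d] - ai * bj) % q
    return res

# Example: x * x^(d-1) = x^d, which reduces to -1, i.e. q-1 mod q.
x1 = [0, 1] + [0] * (d - 2)       # the polynomial x
xd1 = [0] * (d - 1) + [1]         # the polynomial x^(d-1)
print(ring_mul(x1, xd1))          # constant term is q-1, the rest zero
```

The matrix A and secret S from the slide are then vectors of such coefficient lists, with ring_mul replacing integer multiplication.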
You send your W, and then the verifier replies with a challenge C in the ring, and then the prover computes Z = SC + Y. So without this last rejection sampling step, this looks exactly like the Schnorr protocol, if you're familiar with that. Now, this rejection sampling becomes necessary because you don't want Z to leak information about S. You never have this problem with discrete-log-based things, because your Y can just be completely uniformly random, so Z is gonna be uniformly random and it's okay. But we can't do the same thing for lattices because, like I said, we wanna prove knowledge of short solutions. So Z had better be short, so Y should be short, and then Z is gonna leak some stuff unless you do rejection sampling. But this is not the main point here; you can do it, and then the verifier ends up checking that the norm of Z is small and that F(Z) = TC + W. So you need this function F to be a module homomorphism, just like you need it for discrete log, right? It should be linear, so F(X) + F(Y) = F(X + Y), and also, for any element C, F(X) times C should be F(XC). Okay, so if those two things are satisfied, you just get this relation and everything's okay, again, just like Schnorr. Good, so the security proof works as follows. It's just rewinding. Why is this a proof of knowledge? Because you rewind: you send a different C', the prover sends a different Z', and you have another equation, F(Z') = TC' + W. You subtract, and you end up with F(Z - Z') = T(C - C'). If you're working with discrete log, you're done, you just divide by C - C'. Unfortunately, with lattices you cannot do this. Right, so you can make it non-interactive via the Fiat-Shamir heuristic. Okay, so with lattices, you can't just divide by C - C'.
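The interactive protocol described here can be sketched as follows. This is a deliberately simplified toy: the challenge set, the distribution of Y, and the norm bounds are all assumed values, and the real scheme does rejection sampling against specific distributions rather than this crude norm cut.

```python
import random

# Simplified sketch of "Fiat-Shamir with aborts" for F(s) = A*s mod q.
q, n, m = 17, 4, 8
rng = random.Random(1)
A = [[rng.randrange(q) for _ in range(m)] for _ in range(n)]
s = [rng.choice([-1, 0, 1]) for _ in range(m)]            # short secret

def F(v):
    """The module homomorphism F(v) = A*v mod q."""
    return [sum(a * x for a, x in zip(row, v)) % q for row in A]

t = F(s)
Y_BOUND, Z_BOUND, C_SET = 50, 40, [-1, 0, 1]              # assumed toy bounds

def prove():
    while True:                                           # repeat on abort
        y = [rng.randrange(-Y_BOUND, Y_BOUND + 1) for _ in range(m)]  # masking vector
        w = F(y)                                          # commitment
        c = rng.choice(C_SET)                             # verifier's challenge
        z = [c * si + yi for si, yi in zip(s, y)]
        if max(abs(x) for x in z) <= Z_BOUND:             # rejection: z must not leak s
            return w, c, z

def verify(w, c, z):
    ok_norm = max(abs(x) for x in z) <= Z_BOUND
    lhs = F(z)                                            # by linearity: F(sc+y) = Tc + W
    rhs = [(ti * c + wi) % q for ti, wi in zip(t, w)]
    return ok_norm and lhs == rhs

w, c, z = prove()
assert verify(w, c, z)
```

The verification equation F(Z) = TC + W only goes through because F is linear, exactly the module-homomorphism property mentioned above.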
First of all, the inverse may not exist; we're working over some ring, not a field. But okay, let's not worry about that, you can make sure it exists. The bigger problem is the quotient (Z - Z') divided by (C - C'): there's no reason why it should have small coefficients. Z has small coefficients, C has small coefficients; the quotient will not. So that's the problem, right? Okay, so there are two solutions. One is to say, okay, I don't care, I'm happy enough with this solution. I'm happy enough with extracting F of something small equals, not quite T, but T times something small. Maybe I'm happy enough with that. Another possibility is to say, well, I'm not happy with that; I actually want F of something small equals T. Well, then you choose your C and C' to be zero or one. In that case, C - C' is gonna be one or minus one, and the inverse of one or minus one is just one or minus one, so you actually get F of something short equals T. But of course, the problem with this latter approach is that your soundness error is one half, and you don't wanna do that. I mean, you don't always wanna do that. Okay. So basically, like I said, there are two ways to do zero-knowledge proofs for lattices, right? So if you care about practice, when I say practical, let's say less than 20 kilobytes per proof. Some number, okay; that's the definition of practical from now on. So there are two things you can try to do. This is the proof that you get, right? You knew a relation F(S) = T, and what you can prove is that F of something that's not quite S, maybe bigger than S, equals T times this difference of the challenges. So, like I said on the previous slide, there are two choices you can have.
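A tiny numeric example of why the quotient is the problem, here just over Z_17 with assumed small values: even when (C - C') is invertible mod q, dividing by it blows up the size.

```python
# Why extraction breaks: (z - z')/(c - c') mod q is generally not short,
# even though z, z', c, c' all are. Toy values over Z_17.
q = 17
z, z_prime = 2, -1            # short values from the two extracted transcripts
c, c_prime = 3, 1             # short challenges

inv = pow(c - c_prime, -1, q)            # (c - c')^{-1} mod q exists here (= 9)
quotient = (z - z_prime) * inv % q
print(quotient)                          # 3 * 9 mod 17 = 10: not short anymore

# The quotient still satisfies the algebraic relation mod q...
assert (quotient * (c - c_prime)) % q == (z - z_prime) % q
# ...but 10 is a "large" coefficient relative to q = 17, so the extracted
# witness is no longer a valid short SIS/LWE solution.
```

Over the actual polynomial ring the same thing happens coefficient by coefficient, which is why extraction only yields F(short) = T * (C - C') rather than F(short) = T.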
You can make sure that the C hat is one, and then your soundness error is one half, or you say, this is good enough, for my application I'm happy with this. So if you're happy with this, then you can get digital signatures; this is how we get efficient digital signatures. And actually you can get zero-knowledge proofs of commitments. Maybe some others, but you don't get a lot of other stuff. So actually, very recently, this really nice work by Baum et al. and Cramer et al. showed that, okay, if C is zero or one, the soundness is bad unless you do amortization. So if you have many samples you wanna prove, they have a nice technique that allows for fairly efficient, according to this definition of efficient, proofs where you can prove many relations simultaneously. So they do this Fiat-Shamir with aborts, but they add some stuff to it to make sure the amortized complexity is low. So that's quite good. I think actually Ivan is giving this talk in the next session in the other room. That's nice. So you can do that. But the problem, what we don't have, is this natural thing: I want soundness for one sample, right? We don't have this, and my guess is we're not gonna have it ever. It just seems too hard. I should also mention there are these Stern-type lattice zero-knowledge proofs. These are combinatorial, based on the code-based Stern identification scheme, which is related to Shamir's permuted kernels scheme. It can be adapted to lattices, but it's horribly impractical; I mean, proofs are much bigger than one megabyte. And I think for most practical applications, unless this is done once in some protocol that's already very inefficient, we shouldn't consider it relevant. So I think the main open problems are: if the domain of C is large, can we have more applications?
Right now we can just build digital signatures and proofs of commitments. And in this direction, can we decrease the number of required samples? Because, maybe even being kind of generous with the 10,000, I think you might need to prove a lot more than 10,000 samples before amortization kicks in. So I think these are two very interesting research directions for lattice-based zero-knowledge proofs, because it would be great if we could get something that's just as good as what we have in the discrete log world, but I really doubt that this is gonna happen. Okay, so now, since verifiable encryption was in the title, I should explain a little bit why it was in the title. I won't give the actual definition, but here's the scenario that we care about for some applications. You have a sender and a receiver, and then you have some mediating authority. This mediating authority does not ever wanna be bothered by anyone unless something really serious has happened. So what the mediating authority does is publish a public key and say, look, here's my public key, do not bother me unless something serious has happened; do whatever you want with my public key. Okay, so then the sender and the receiver run some protocol where the sender has some secret witness W that X is in some language, okay? And what he wants to do is send the receiver a proof of knowledge that W is a witness. But the receiver is not quite sure that the sender is gonna behave properly; he might do something bad during the protocol. So what the receiver wants is for the sender to actually encrypt this witness under the mediating authority's public key and also give a proof of knowledge that the ciphertext encrypts this witness. So basically, if the receiver decides that the sender did something bad during the protocol, he can tell the mediating authority, please reveal W for me, okay?
So there are some applications for this, but this is not the point of the talk. All right, and I don't even care about connecting this to a statement. Let's just concentrate on: I give you a ciphertext and a proof of knowledge that this ciphertext encrypts some W, okay? So what this really is is just a proof of plaintext knowledge. I have an encryption, and I wanna prove to you that I know what it will decrypt to. Okay, so here's the Ring-LWE encryption scheme, put in a form that's amenable to the zero-knowledge proofs. The public key is, as usual, some polynomial A and T = AS + E, where S and E have small coefficients. The encryption of some message W is the pair U = P(AR + E1) and V = P(TR + E2) + W, okay? So the encryption in terms of matrices looks like this; you can just stare a bit and convince yourself that this matrix-vector multiplication really does exactly describe encryption. And to decrypt, you just do V minus US mod Q, then reduce; the exact details are not particularly important. The point is, if I wanna prove that I know the decryption, I just have to prove to you that this vector consists of small elements. So you have this public key and the ciphertext, and I wanna prove to you that this ciphertext has the right randomness and message, and they're all small. Okay, so now, what can we do with approximate proofs of knowledge, right? I have this relation, and what I can prove to you is, well, not that I know a vector of small elements such that this matrix times the small vector is the ciphertext, but that this matrix times some small vector is a perturbation of the ciphertext multiplied by some C, right? That's the C you get from the extraction of the Schnorr-like sigma protocol, right? So the implication now is, well, you have this.
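Here's a toy end-to-end sketch of the Ring-LWE scheme in the form described, with pk = (A, T = AS + E), ciphertext (U, V) = (P(AR + E1), P(TR + E2) + W), and decryption (V - US mod Q) mod P. The parameters are assumed illustrative values, chosen only so the noise stays below Q/2; nothing here is secure.

```python
import random

# Toy Ring-LWE encryption in Z_q[x]/(x^d + 1). Illustrative parameters only.
q, d, p = 1049, 16, 2
rng = random.Random(2)

def ring_mul(a, b):
    """Multiply in Z_q[x]/(x^d + 1)."""
    res = [0] * d
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < d:
                res[i + j] = (res[i + j] + ai * bj) % q
            else:
                res[i + j - d] = (res[i + j - d] - ai * bj) % q
    return res

def add(a, b):
    return [(x + y) % q for x, y in zip(a, b)]

def scale(k, a):
    return [(k * x) % q for x in a]

def small():
    """Polynomial with coefficients in {-1, 0, 1}."""
    return [rng.choice([-1, 0, 1]) for _ in range(d)]

# Key generation: T = A*S + E
a = [rng.randrange(q) for _ in range(d)]
s, e = small(), small()
t = add(ring_mul(a, s), e)

# Encrypt a message polynomial w with coefficients mod p
w = [rng.randrange(p) for _ in range(d)]
r, e1, e2 = small(), small(), small()
u = scale(p, add(ring_mul(a, r), e1))        # U = p*(A*R + E1)
v = add(scale(p, add(ring_mul(t, r), e2)), w)  # V = p*(T*R + E2) + W

# Decrypt: center (V - U*S) mod q into (-q/2, q/2], then reduce mod p
diff = [(x - y) % q for x, y in zip(v, ring_mul(u, s))]
centered = [x if x <= q // 2 else x - q for x in diff]
recovered = [x % p for x in centered]
assert recovered == w
```

Decryption works because V - US = p(ER + E2 - E1·S) + W, and as long as that noise term stays below q/2 in absolute value, reducing mod p strips it away. The proof of plaintext knowledge is then exactly a shortness proof for the vector (R, E1, E2, W) in the matrix form of this equation.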
The implication, intuitively, is that you don't prove that (U, V) is a valid ciphertext; you prove that (U, V) times some C which no one knows is a valid ciphertext, and this may not be particularly useful, right? Because the decryptor does not know C, so if he's gonna decrypt, he actually cannot do it, right? He doesn't know the C, which you only get from extraction, which is never run in the real world. So, just to summarize, if he decrypts, he may just get garbage, because (U, V) is not a valid ciphertext. He gets (U, V) and a proof of knowledge that (U, V) times some C is a valid ciphertext, but that's not particularly useful for decrypting (U, V). All right, so here's our solution to this problem, which may not make too much sense like this. The first step is to guess the C, okay? That's, okay, whatever. Then you decrypt this (UC, VC), and then you output the quotient W divided by the C hat that you guessed. Okay, so there are problems, right? First, guessing the C, that seems unreasonable; there could actually be challenge-space-squared possibilities, because this C hat is actually the difference of two challenges, so there are a lot of possibilities. Secondly, let's say you even guessed the right C. How can you be sure you guessed the right C? Will your decryption tell you, okay, that was the right answer? And, well, the second and third are the same problem: is this unique? I mean, decryption should give you a unique solution. So let me handle the last two problems first. What we actually do is modify the parameters and the decryption algorithm of the Ring-LWE scheme so that the decryptor guesses the C hat and then does some check. And if the norm of the quantity he computes is small enough, then he says, ah, that's correct, this is not garbage, and I'm going to output W divided by C.
W is whatever I decrypted, divided by the C hat that I checked. And what we can actually prove is that for any values satisfying the above condition, this quotient will always be the same. So that, in some sense, is your plaintext. So this is one thing. The somewhat interesting part, the part that leads to weird things, is this guessing of C, right? The problem is that the challenge space is too big. We don't solve it super well, but there is a solution that's good for many applications. So the first thing to notice is that if the ciphertext (U, V) is valid, so if the prover is completely honest, then for any C hat, in particular for C hat equals one, this will lead to a correct decryption. On the other hand, if the ciphertext is invalid, so the prover is trying to cheat, he produced some ciphertext which will not decrypt, but he is trying to cheat and he has a hope of succeeding. This means that there's going to be some subset of challenges that will allow him to come up with a valid proof, right? He cannot just hope to prove a completely random ciphertext; he has to pick it in some way so that for some challenges, he actually will be able to trick you, right? So basically, the C hat is the difference of two challenges for which he is successful, and the decryptor actually already knows one of these challenges: that's the one the prover gave in the proof. So now he just has to guess one. Okay, so this lowers the number of possibilities for C hat from challenge space squared to challenge space. Still, you know, not great. But the result, the thing we can prove, is that the decryption time will actually depend on the time it takes an adversary to fool us.
So basically, if the adversary is allowed Q queries to the random oracle, and T is the number of times we, the decryptor, have to guess C hat, then the probability that T is bigger than KQ is less than one over K. So, for example, the probability that I will need to guess C more than 2Q times is less than one half. The implication of this is that the expected decryption time depends on the number of random oracle queries the adversary makes. So this could be very problematic; this is not what we strive for, right? If the adversary is much more powerful, this is bad: the adversary may be able to make a lot of queries while the decryptor is not too powerful, right? But the thing is, in many scenarios, the power of the adversary can be mitigated. So here are some ideas that we had for how you can limit the number of random oracle queries by the adversary. The first thing you can do is make the random oracle very, very slow on purpose. If the prover is honest, he just needs to make one query. Verification only needs one query. Decryption doesn't need any queries. So the only entity that will ever need more than one query is an adversarial one. If you make the random oracle very slow, it will cost the adversary a lot of time to make these random oracle queries. You can even make a random oracle query, you know, five minutes slow; yeah, maybe not five minutes, but let's talk about seconds. Then this could actually mitigate the difference between the power of the adversary and the prover. Another possibility is to have an interactive protocol or use public randomness beacons. So the idea is, if this is an interactive protocol, the verifier just sends a random salt to the prover.
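The "slow random oracle" idea can be sketched like this: iterate a hash so each query costs noticeable work. The iteration count is an assumed knob, and a real deployment would more likely use a memory-hard function such as scrypt or Argon2; this is just to illustrate the asymmetry.

```python
import hashlib

def slow_oracle(data: bytes, iterations: int = 100_000) -> bytes:
    """A deliberately slow 'random oracle': iterated SHA-256 (toy sketch)."""
    h = data
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h

# Honest prover, verifier, and decryptor each pay this cost at most once per
# proof; an adversary making Q queries pays it Q times, which bounds how large
# Q can realistically get, and hence the decryptor's expected guessing time.
c1 = slow_oracle(b"proof transcript", iterations=1000)
c2 = slow_oracle(b"proof transcript", iterations=1000)
assert c1 == c2   # deterministic on the same input, as an oracle should be
```

Since the honest parties only ever query once, slowing the oracle down hurts the adversary's Q (and with it the KQ bound above) far more than it hurts anyone honest.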
Basically, the verifier sends the random oracle to the prover, so the prover cannot do any pre-computation beforehand. So really, if there's a protocol running, I tell you, use this random oracle, and then, if he's gonna cheat, he really has to do this cheating online. And if it's not interactive, then you can use a randomness beacon, by NIST, for example, and you have to use the public randomness at the time you submit the proof, right? So that also restricts the time you have. There's this other suggestion, which my colleague Greg actually made: what if we just impose large fines for cheating? And at first I'm like, oh man, I've lived in Switzerland too long; I mean, this is the solution to everything there. But when you think about it, it actually works, right? People just don't do anything bad there because there are large fines. And in some sense, this makes sense here, because the fact that cheating occurred can be immediately detected, right? So unless cheating will do something so amazing for you, you're going to weigh the possibility of being detected, because you're detected immediately: you immediately detect that someone's cheating, because somebody tried to decrypt and it didn't work. So I'm like, okay, now I have to guess the Cs. And once I guess the Cs, a large fine will be imposed on you, right, even if I can decrypt successfully. And so, in many scenarios, unless it's a life-and-death situation, which most of the time it's not, this is a fairly good way to make sure the adversary is not cheating. Another way you can do it, if this really is a life-and-death situation, is to put a bound on the maximum number of guesses.
If you want to put a bound on the maximum number of guesses, you can just make the challenge space smaller, but this increases the proof size, because now you have to do repetition, right? The expected number of decryptions is still the same as before, but now there's an upper bound. Okay, you can do that. Also, a lot of these applications require CCA security, and you can easily adapt our scheme because you just use the Naor-Yung approach. And since we already have one encryption and a proof, you can add a second encryption at very little cost. So it's not a bad thing, because you already have the zero-knowledge proof. Okay, so there are a few open problems, right? The main problem I listed before is still kind of open: what more can we do with this? How can we do zero-knowledge proofs for lattices better? But I think an interesting open problem is, basically, is this lemma tight? Is this the best you could say? Maybe you could say something like: the probability that I need more than KQ guesses is less than, maybe, one over K squared, right? Our proof is very black-box; we only use the fact that there's a zero-knowledge proof. But maybe you can use some algebra, because it's not clear to us what exactly the attack of an adversary would be. Or, for some ring, could you come up with an attack that works really well, where the adversary could actually make this tight? For a non-black-box approach, I think you would have to use some algebraic properties of R, and that could actually be really, really interesting if you could decrease this. I'm not sure; I mean, it would be great if you could get this to be negligible.
But even getting something like one over K squared, or maybe just putting some constant in there, would still be quite interesting. It would maybe lead to some nice algebraic results. So thank you.