Okay. The next talk is about malleable proof systems and applications, and Sarah is giving the talk. So hi, I'm Sarah, and I'm going to be talking about a primitive called malleable proof systems and some of their applications. This is joint work with Melissa Chase, Markulf Kohlweiss, and Anna Lysyanskaya. All right. So I'm going to start out with a bit of motivation. In fact, what I'm first going to do is motivate the opposite of what I'm talking about, and that is non-malleable cryptography. Starting with its introduction by Dolev, Dwork, and Naor twenty years ago, we saw a really strong emphasis on non-malleable cryptography. So why is that? Let's imagine we have some user Bob, and Bob wants to talk to his bank. Bob, let's say, has $100 with the bank, but he owes Alice some money, so he wants to transfer $10 to Alice. One way we could think about doing this is that Bob encrypts this message under the bank's public key, so that only the bank can decrypt it, and then sends that ciphertext over to the bank. Potentially, as that ciphertext is in transit, Alice might jump on the wire and see this ciphertext going by. And Alice can say: well, okay, I don't actually know how to decrypt this ciphertext, I'm not the bank, but I do know that I have $0 right now, so whatever Bob is giving me, I want 100 times more. If this ciphertext is what we call malleable, then what Alice can do is maul it using this multiply-by-100 operation, and validly produce an encryption of the message "transfer $1,000 to Alice." That's what the bank receives: the bank decrypts it, carries out the transfer, and Bob is left completely in the dark. So from a security standpoint, non-malleable cryptography says essentially that this can't happen: no mauling of the ciphertext can take place.
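As a toy illustration of the attack just described (my own sketch, not from the talk): textbook, unpadded RSA is multiplicatively homomorphic, so multiplying a ciphertext by an encryption of 100 multiplies the hidden plaintext by 100. The tiny key and the "transfer amount" framing below are purely illustrative.

```python
# Toy demo: textbook (unpadded) RSA is malleable.
# Tiny key for illustration only -- never use textbook RSA in practice.
p, q = 61, 53
n = p * q                      # modulus
e = 17                         # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

# Bob encrypts "transfer $10".
c = enc(10)

# Alice mauls the ciphertext without ever learning d:
c_mauled = (c * enc(100)) % n  # Enc(10) * Enc(100) = Enc(1000)

assert dec(c_mauled) == 1000   # the bank now sees "transfer $1000"
```

A non-malleable (e.g., CCA-secure) scheme is designed precisely so that this kind of ciphertext manipulation produces garbage instead of a related plaintext.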
And that seems quite desirable from a security standpoint. Nevertheless, we've seen a real shift recently towards malleable cryptography. One prominent example is the recent line of work on fully homomorphic encryption initiated by Gentry. We've also seen examples looking at malleability as a feature for proofs, by Belenkiy et al., Dodis et al., and Fuchsbauer, and examples looking at malleability for signatures, by Boneh and Freeman and, very recently, Ahn et al. So if the security of non-malleable cryptography seems so great, why malleable? Here's one quick example. Imagine we have some cloud storage provider and a user Alice, and Alice has some huge chunk of data, some huge database. Rather than keep that around on her local machine, she can send it over to the cloud storage provider, and since maybe she doesn't completely trust them, she encrypts it before she stores it. Nevertheless, Alice might someday wonder: what's the average of all those values m_i? If she authorizes the cloud storage provider to compute this averaging function over the data without decrypting it, just computing the average over the ciphertexts, then it can send back a single ciphertext encrypting the average value, and that saves Alice a huge computational burden, because all she has to do is decrypt. This has applications in things like outsourcing computation, cloud storage, et cetera. So this demonstrates that although non-malleable cryptography has all these desirable security features, malleable cryptography is really desirable from a functional standpoint. Our contributions are in a middle area that we call controlled malleability, and what this does is compromise between the security features of non-malleable cryptography and the functionality of malleable cryptography.
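The cloud-averaging idea can be sketched with an additively homomorphic scheme such as Paillier: multiplying ciphertexts adds the underlying plaintexts, so the server can return a single encryption of the sum and Alice finishes with one decryption and a division. This is my own minimal sketch with toy parameters, not the construction from the talk.

```python
import math
import random

# Toy Paillier: additively homomorphic, so the server can sum
# encrypted values without decrypting them. Tiny primes, demo only.
p, q = 101, 113
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)           # valid simplification when g = n + 1

def enc(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Alice uploads encrypted values; the server multiplies the ciphertexts,
# which corresponds to adding the plaintexts.
values = [3, 5, 7]
cts = [enc(m) for m in values]
c_sum = 1
for c in cts:
    c_sum = (c_sum * c) % n2

total = dec(c_sum)             # server never saw 3, 5, or 7
average = total / len(values)  # Alice finishes with one division
```

The "controlled" question the talk raises is exactly about restricting the server to this one averaging transformation rather than arbitrary mauling.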
So two examples of work in this area include the notion of HCCA security for encryption by Prabhakaran and Rosulek, and also the recent notion of targeted malleability, again for encryption, by Boneh, Segev, and Waters. Relating these back to the examples: in the cloud example, if we can somehow guarantee that the only transformation the cloud storage provider can compute on our data is this averaging function, then it seems less risky to store all our data with them. Similarly, in the bank account example, if we know that by mauling the ciphertext all you can ever do is decrease the amount in it instead of increase it, then again we've prevented the attack that we saw. For our contributions in this work, we switch our attention to proofs, and we define the first notions of both uncontrolled and controlled malleability for proofs. We then give two applications. One is a strong notion of encryption security that we call CM-CCA security; this is similar to those two previous notions, except that, as we'll see, we obtain a really nice modular construction. The other application is shrinking the size of verifiable shuffles to get what we call a compactly verifiable shuffle. And then, just to make sure we can actually instantiate these proofs, we examine malleability within existing proof systems. As an outline for the rest of the talk: I'll start off with some definitions, I'll then give our construction of what we call a controlled-malleable NIZK, or cm-NIZK, I'll then give the two applications I mentioned, and finally I'll wrap things up. Diving in with the definitions: unfortunately, for the sake of time, I do have to assume some familiarity with notions like zero knowledge and proofs of knowledge, so I'll start right in with malleability. First, to give an example of what I even mean by malleability for proofs, let's imagine that I already have two proofs.
One is a proof pi_1 that a given value b_1 is a bit: that means I have a commitment to b_1, and I'm proving that it's either a zero or a one. And I have a similar proof pi_2 that a different value b_2 is also a bit. Logically, I know that b_1 times b_2 must also be a bit, because if I take any bit and multiply it by any other bit, I get a bit; so it's not really additional information. What I'd like is to be able to maul those existing proofs to get a new proof of that statement, without having to go ask the prover for another proof. More generally, we say that a proof is malleable with respect to a given transformation T if there exists some algorithm Eval that, given the transformation, the set of previous statements, and the set of previous proofs, can output a new proof pi for the transformed statement. Relating this back to the example: the transformation in question is multiplication, the previous statements are "b_i is a bit," and the transformed statement that I want the proof for is that the product of the b_i is a bit. One thing we need to be a little careful about, since we're dealing with zero-knowledge and witness-indistinguishable proofs, is that the proofs should be malleable only with respect to a certain class of transformations, and we call these the transformations under which the language is closed. As I was saying with the bits: since any two bits multiplied together yield a bit, we can say that the language of bits is closed under multiplication. It's not closed under something like addition, right? One plus one is not a bit. In particular, we can't have proofs that are malleable with respect to addition, because then this mauling process would completely violate zero knowledge and witness indistinguishability. So that's the basic definition of malleability for proofs.
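The closure condition can be made concrete with a tiny brute-force check (my own toy sketch; the helper name is hypothetical): the language of bits is closed under multiplication but not under addition.

```python
# Toy check (not from the talk): the language of bits {0, 1} is closed
# under multiplication but not under addition, so only multiplication
# is a candidate transformation for malleable proofs over this language.
BITS = {0, 1}

def closed_under(language, transform):
    """Brute-force closure check for a binary transformation."""
    return all(transform(a, b) in language for a in language for b in language)

assert closed_under(BITS, lambda a, b: a * b)      # bit * bit is always a bit
assert not closed_under(BITS, lambda a, b: a + b)  # 1 + 1 = 2 is not a bit
```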
And that's how to reconcile malleability with zero knowledge. We also want to reconcile it with strong notions of soundness, like a proof of knowledge. A proof of knowledge says that, given a proof, I can extract a witness. So it's not clear how to allow these mauling processes: you form a proof and you know the witness, but then you give it to me and I can transform it, and I don't know the witness. How do we reconcile those two? We build off of some existing notions. We first define an allowable set of transformations T; these are going to be the only transformations we want you to be able to use. We then look at a strong notion called simulation soundness. This says that even with access to a simulation oracle that can prove false statements, an adversary still can't prove false statements itself. We then look at a natural extension called simulation-sound extractability, introduced by Groth in 2006. This goes one step further: not only can the adversary not produce proofs of false statements, but in fact, from any proof the adversary does produce, we can extract a witness. Now, incorporating malleability, we say: well, if the adversary formed a proof fresh, then we can pull out a witness; but it's also possible that he took one of those simulated proofs and mauled it himself, and in that case he wouldn't know a witness per se. So at a high level, we're going to say that an extractor can pull out either a witness, or a previously queried statement to the simulator together with an allowable transformation from that statement to the new one the adversary is claiming a proof for. A bit more formally, we have a game. In the middle we have an adversary, and on the left we have a simulator and an extractor.
As a first step, the simulator and the extractor jointly generate some parameters: the CRS sigma, the simulation trapdoor tau_s, and the extraction trapdoor tau_e. The adversary then gets to see the CRS and the extraction trapdoor, and now it can start issuing its simulation queries. It queries the simulator on a statement x_i, and the simulator responds with some proof pi_i; the database Q here is just so the simulator can keep track of the queries. At some point, after however many interactions, the adversary outputs its own statement-proof pair (x, pi), and the extractor pulls out a tuple (w, x', T). The winning conditions really correspond to the intuition I gave. The third case is that the extractor completely failed to extract anything; that's obviously very bad, so we say the adversary wins if it can cause that. The first case is really just regular simulation-sound extractability: we pulled out a witness, but it turned out not to be a valid witness, so the adversary somehow proved the statement on his own. And the second case is where we really incorporate malleability. Here the extractor pulled out a previously queried statement and an allowable transformation, but actually either the statement wasn't previously queried, i.e., it's not in Q; or the transformation is not actually allowable; or the new statement wasn't actually derived from the old statement using that transformation, i.e., x is not T(x'). We then say that the proof is CM-SSE, short for controlled-malleable simulation-sound extractable, if any adversary A has at most negligible probability of winning this game.
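The three winning conditions just listed can be sketched as a small predicate. This is my own paraphrase of the definition, with a hypothetical toy instantiation (statements are numbers, a witness is a square root, and the one allowable transformation squares a statement):

```python
# Sketch (not the paper's formalism) of the CM-SSE winning conditions:
# the adversary wins if extraction fails outright, if the extracted
# witness is invalid, or if the extracted derivation story is bogus.
def adversary_wins(x, extraction, Q, allowable, valid_witness):
    if extraction is None:                     # case 3: extractor failed
        return True
    w, x_prime, T = extraction
    if w is not None:                          # case 1: witness extracted...
        return not valid_witness(x, w)         # ...but it isn't valid
    # case 2: (x', T) extracted, but x' not queried, T not allowable,
    # or x was not actually derived from x' via T
    return x_prime not in Q or T not in allowable or allowable[T](x_prime) != x

# Toy instantiation.
Q = {4}                                        # simulator was queried on 4
allowable = {"square": lambda s: s * s}
valid = lambda x, w: w * w == x

assert not adversary_wins(16, (None, 4, "square"), Q, allowable, valid)  # honest maul
assert adversary_wins(16, (5, None, None), Q, allowable, valid)          # bad witness
assert adversary_wins(25, (None, 4, "square"), Q, allowable, valid)      # 25 != 4^2
assert adversary_wins(16, None, Q, allowable, valid)                     # extraction failed
```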
And to reduce acronyms for the rest of the talk: if a proof is zero-knowledge, satisfies this notion CM-SSE, and satisfies something we call strong derivation privacy, which essentially means that the evaluation algorithm hides the transformation, then we call it a cm-NIZK, or controlled-malleable NIZK. I know this definition might be a lot to take in all at once, so don't worry, we're going to see it again in one second when I give the construction of the cm-NIZK. I'm going to give a generic construction first, by combining malleable witness-indistinguishable proofs of knowledge with unforgeable signatures. The construction is basically this: to give a cm-NIZK for an instance x and a witness w, we prove knowledge of a tuple (w, x', T, sigma) such that either w is a valid witness for x, or sigma is a valid signature on x', x was derived from x' by the transformation T, and the transformation that was used is allowable. Hopefully this sounds at least a little familiar from the definition. Just to make sure it satisfies the definition, let's go through it. We have our adversary and our simulator, and the trapdoor we're going to use is the signing key for the signature scheme. When the adversary queries on a statement x_i, the simulator signs that x_i. It's then going to say: my previous statement was x_i, and I'm deriving x_i from x_i using the identity transformation. Looking at these conditions, we can see that if the identity transformation is allowable, then all of the conditions are satisfied and this is in fact a perfectly valid witness. Now, when the adversary outputs its own statement-proof pair, we're going to use as the extractor the extractor for the underlying proof; because it's a proof of knowledge, we know this extractor really does exist.
For the game, we want the extractor to pull out the tuple (w, x', T); what it actually pulls out, of course, is that tuple with the addition of sigma, and now we can just make sure that none of the winning conditions can be met. Starting with the first case, where the w we pull out is the meaningful part but it's not actually valid: that just violates the underlying condition, and thus the existence of an extractor that works as it's supposed to. Similarly for a lot of the other cases: if x wasn't properly derived from x', we violate the underlying extractability; if T isn't allowable, we violate the underlying extractability; and of course, if the extractor just fails completely, we again violate the underlying extractability. The only case that's really left is when the statement wasn't actually previously queried. Now we incorporate the signature. There are two cases here: either the signature doesn't verify, in which case we've once again violated one of the underlying conditions, or the signature does verify, and then what we violate is the unforgeability of the signature scheme, because the simulator never saw x', it's not in its database Q, but nevertheless the adversary somehow produced a signature on that x'. So that's how it satisfies CM-SSE, and in the paper you can also see how it satisfies zero knowledge and strong derivation privacy, so it really is a cm-NIZK. In terms of instantiating this efficiently, for the witness-indistinguishable proofs we're going to use Groth-Sahai proofs. That means that for the signature we need to use something called a structure-preserving signature, so that we can integrate with Groth-Sahai proofs. In fact, if we use an instantiation of structure-preserving signatures due to Chase and Kohlweiss, then we can instantiate our entire cm-NIZK based solely on Decision Linear.
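The OR-relation at the heart of the generic construction can be sketched as a plain predicate. This is my own sketch: the "signature" here is a stub lookup standing in for a real unforgeable structure-preserving scheme, and all names are hypothetical.

```python
# Sketch (not the paper's code) of the relation proved in the generic
# cm-NIZK construction: the prover knows (w, x', T, sigma) such that
# EITHER w is a valid witness for x, OR sigma is a valid signature on x'
# and x was derived from x' via an allowable transformation T.
def relation_holds(x, w, x_prime, T, sigma, valid_witness, verify_sig, allowable):
    real_branch = w is not None and valid_witness(x, w)
    mauled_branch = (
        sigma is not None
        and verify_sig(x_prime, sigma)
        and T in allowable
        and allowable[T](x_prime) == x
    )
    return real_branch or mauled_branch

# Toy instantiation: witness = square root; "signatures" are a stub set
# standing in for the unforgeable structure-preserving scheme.
signed = {("stmt-4", "sig-4")}
verify = lambda x_prime, sigma: (x_prime, sigma) in signed
allowable = {"identity": lambda s: s}
valid = lambda x, w: isinstance(w, int) and w * w == x

assert relation_holds(16, 4, None, None, None, valid, verify, allowable)   # real witness
assert relation_holds("stmt-4", None, "stmt-4", "identity", "sig-4",
                      valid, verify, allowable)                            # simulator's branch
assert not relation_holds("stmt-9", None, "stmt-9", "identity", "sig-4",
                          valid, verify, allowable)                        # no signature on stmt-9
```

The simulator's strategy from the talk is visible in the second assertion: it signs the queried statement and takes the identity transformation, which satisfies the mauled branch without any real witness.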
In terms of efficiency, the efficiency of the scheme obviously hinges on the efficiency of the primitives. If we use the Chase-Kohlweiss structure-preserving signatures, it's maybe not so efficient: hundreds of group elements. If we're willing to make a stronger assumption, we can bring that down by an order of magnitude. And of course it also depends on the representation of the transformation, which completely depends on the transformation you're interested in. One other word on the transformations: we need the class of transformations to contain the identity, as we just saw, for the simulation strategy, and we also need it to be closed under composition. That's for compactness: if I maul a proof and then give it to you, and you maul it, and we pass it around, we don't want the proof to keep growing, and that's why we have this requirement. Finally, there's one thing that might seem to be missing here, which is of course malleability itself. In the paper we examine in great detail the many ways in which GS proofs are malleable, but I won't have time for that in the talk. So, zipping right along to the applications; again, I'll give the two that I mentioned earlier. The first is the strong notion of encryption security called CM-CCA security. This is heavily inspired by the previous notion of HCCA security by Prabhakaran and Rosulek, and also related to the notion of targeted malleability by Boneh, Segev, and Waters. Briefly, what does our security game look like? We have our adversary. On the left side, the adversary might be in the real world: he gets a normal public key, and then he can just interact at will with encryption and decryption oracles; we get rid of a challenge ciphertext completely. And on the right side, he might be in the simulated world: he gets some simulated public key, and then access to these slightly strange-looking oracles.
Essentially what's happening is that the simulated encryption oracle never even sees the message, so it's essentially just returning encryptions of garbage. But what's important is that those garbage encryptions can be tracked: the extraction oracle, given a ciphertext, can attempt to pull out a previously queried ciphertext and a transformation from that ciphertext to the new one. Hopefully that sounds similar to the notion in CM-SSE. And so even though c', and therefore c, might contain encryptions of complete garbage, we know what they're supposed to contain because of the database Q, and so we can return the appropriately transformed message. That was very hand-wavy; and of course we don't want the adversary to be able to tell which world he's in. As I said, we give a nice, modular, generic construction achieving encryption that satisfies this notion. In fact, it's really simple, I can say it right here: to encrypt a message, we encrypt it using an IND-CPA-secure scheme, and then we prove knowledge of the value inside that ciphertext using a cm-NIZK. If you're more interested, you can definitely see the paper. So, moving on to our final application, shuffles. We already saw plenty on this in the previous talk, so hopefully these slides will just be a brief reminder. What is a shuffle? We can imagine it in the context of a voting system. Voters, as they go to place their votes, encrypt those votes under some given public key, and then post the ciphertexts on a bulletin board somewhere; this gives a public set of ciphertexts. What happens next is that these are fed through a series of mix servers, which are responsible for shuffling the ciphertexts, meaning permuting and re-randomizing them.
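One mix-server step, permute plus re-randomize, can be sketched with ElGamal. This is my own toy illustration with tiny parameters, not the talk's instantiation:

```python
import random

# Toy ElGamal shuffle: permute a list of ciphertexts and re-randomize
# each one, so outputs are unlinkable to inputs. Tiny group, demo only.
p = 467                    # safe prime, p = 2*233 + 1
g = 4                      # generator of the order-233 subgroup
x = random.randrange(2, 233)
y = pow(g, x, p)           # public key

def enc(m):
    r = random.randrange(2, 233)
    return (pow(g, r, p), m * pow(y, r, p) % p)

def dec(ct):
    a, b = ct
    return b * pow(a, p - 1 - x, p) % p       # b / a^x

def shuffle(cts):
    out = []
    for a, b in random.sample(cts, len(cts)):  # random permutation
        s = random.randrange(2, 233)           # fresh randomness
        out.append((a * pow(g, s, p) % p,      # re-randomize: fold in an
                    b * pow(y, s, p) % p))     # encryption of 1
    return out

votes = [pow(g, i, p) for i in (1, 2, 3, 4)]   # messages in the subgroup
cts = [enc(m) for m in votes]
mixed = shuffle(cts)

# Same multiset of votes, but order and randomness are completely fresh.
assert sorted(dec(ct) for ct in mixed) == sorted(votes)
```

The hard part the talk addresses is not this step itself but proving, in zero knowledge, that the server really performed it.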
So the ciphertexts are fed in and fed back out in a completely random order, completely re-randomized as well, and this happens through a number of mix servers; the final outcome is this set of ciphertexts. Again, because each mix server permutes and re-randomizes the ciphertexts, this final set is going to be in a completely random order, and furthermore the ciphertexts are going to be completely independent of the input ciphertexts, because they've been re-randomized. In particular, performing the threshold decryption that we saw won't reveal whose vote is whose, so we can just decrypt and tally up the election. Again, one problem that we saw in the last talk as well: how do we actually know these mix servers are behaving honestly? We want them to be shuffling the ciphertexts, but we can't really guarantee that they are. Of course, we can guarantee it if we ask each of them, in addition to outputting its ciphertexts, to also output a proof: this proof pi is a zero-knowledge proof that says "I really did the shuffle honestly." So we can ask each mix server to output such a proof, and then we can say that the shuffle is what we call verifiable, because any user sitting at home can go check all these proofs and make sure that the election really did take place honestly. One new problem, though, is that there are K proofs up here, so the size of the overall proof that the user has to check is seemingly necessarily growing with the number of mix servers. As you may have guessed at this point in the talk, the way we're going to solve this problem is using malleability. Now we have the initial mix server output just a fresh proof pi, but in addition to passing along the ciphertexts, we also ask it to pass along its proof to the next mix server.
And these subsequent mix servers are going to have their own transformations: the permutation, the re-randomization, and also a kind of stamp of participation under the public key. What they do is maul the proof using the Eval algorithm, folding their own permutation and re-randomization into the proof from the mix server before them. This essentially just gets passed down the line, and we can now say that this shuffle is compactly verifiable, because the last proof, pi_K, suffices to verify the correctness of the entire multi-step shuffle. So if there are n ciphertexts and K mix servers, the previous solution with verifiable shuffles seems to require an overall proof size of O(nK), because we needed a separate proof for each mix server; but now, with just the one proof, we can get an overall proof size of O(n + K). I also want to mention that this isn't just a theoretical bound. In this paper we were constrained by our representation of the permutation as a matrix, so we ended up with a proof of size O(n^2 + K); but in a very recent result we used new methods for the cm-NIZK construction, and thus new methods for the representation of the permutation, to achieve an actual O(n + K). Okay, so that's about all I have to say. Just to quickly wrap things up: what have we done? We defined these notions of malleability for proofs, in a few different flavors, namely uncontrolled and controlled. We also saw two applications, CM-CCA security and compact shuffles. And, not so much in the talk but in the paper, we saw that Groth-Sahai proofs have these meaningful malleability properties, and we also did a lot more. So if you're interested, please check out the full version online. Thank you very much for listening, and I can take some quick questions.
We have time for questions. Okay. If I go back to your early example, where Alice had some values stored in the cloud and was able to ask for the mean of them: if she later decides she wants some other function, say the product or the standard deviation, can she then ask for that and authorize it as a new allowed transformation? Or is the set of allowed transformations fixed for all time? Yeah, it's fixed when the parameters are generated. But that's a great idea; maybe that's an open question, then, how to authorize new transformations. More questions? Okay, let's thank Sarah.