The last talk of the session is "Cryptography with Tamperable and Leaky Memory" by Yael Tauman Kalai, Bhavana Kanukurthi and Amit Sahai, and Bhavana will give the talk. Thanks, Evgeny. So you've had enough time to stare at the title slide, so I'm not going to repeat the names again. But before I start, a big thanks to all the speakers of the session for setting up so many of the concepts that I'm going to use. I'm just going to jump right into the talk. So as we've seen in these sessions, motivated by powerful physical attacks, cryptographers have done a lot of research in building primitives that are resilient to leakage, and by now we have some amazing results in this area. We know how to build cryptographic schemes that are secure against continual leakage. Basically, the adversary gets leakage about the secret key over and over again, and the amount of leakage that she gets could be unbounded; it's not bounded in terms of the length of the secret key. This was shown to be achievable in two recent papers that appeared at the last FOCS, the first by Dodis, Haralambiev, López-Alt and Wichs, and the second by Brakerski, Kalai, Katz and Vaikuntanathan. So, great results. But physical attacks are not just restricted to leakage attacks; they could also tamper with the memory, and this is something that needs to be addressed. As we've seen, there's a lot of work which shows that this could potentially be even more dangerous than just dealing with leakage. Okay, so that's going to be the focus of this talk. So let's see what's been done in the area of tamper-resilient cryptography. Surprisingly, in this area there have been fewer results. The work of Gennaro, Lysyanskaya, Malkin, Micali and Rabin achieves very strong guarantees: they show how you can actually deal with arbitrary tampering functions. But they rely on some user-specific memory that's actually non-tamperable.
So think of it as the user's public key being stored on non-tamperable memory. Then there's the work of Ishai, Prabhakaran, Sahai and Wagner, where they consider a much stronger model, where the adversary tampers not just with the memory but with all parts of the computation. The disadvantage here is that they consider only restricted tampering functions, ones that set or reset bits. Then there's work in the area of related-key attacks, for example by Bellare and Kohno and by Applebaum, Harnik and Ishai, and also work in the area of non-malleable codes by Dziembowski, Pietrzak and Wichs. But these works, again, while they're related, consider only restricted tampering functions. And that's the main challenge; that's what we'd like to address. In particular, our goals are as follows. We want to build leakage- and tamper-resilient schemes, so we want to handle both leakage and tampering, that always satisfy the following conditions. Any memory that's user-modifiable needs to be tamperable: we need to allow the adversary to tamper with it if the memory contains any information that's specific to the user. For instance, the user's public key and private key have to be stored on the user-modifiable memory, because they're specific to each user, so that's something we want to allow. Also, we don't want to consider restricted tampering and leakage functions; we want to allow the adversary to do any arbitrary tampering, any arbitrary leakage. And the good news is that we actually achieve this. But the flip side is that there's a small caveat: we have to assume non-tamperable public parameters. So think of these as the CRS. It would have been awesome to actually get rid of this assumption, but unfortunately we're not able to. But not all is lost, because the important point is that these public parameters do not depend on the user's information, the user's secret keys, in any way.
So you could think of the manufacturer actually hard-wiring these into the circuit itself, into the user's device itself; they don't contain any information that's specific to the user's keys. Okay, so they're completely independent. We also rely on a source of true randomness, which again seems to be inherently necessary. Okay. So this brings us to our results. In our first result, we present a general transformation that converts any scheme that's resilient to bounded leakage into one that's also resilient to continual tampering. So what do I mean by bounded leakage? You already heard it in the previous talk, but let me just give you a quick overview. In the bounded leakage model, the adversary gets a fixed amount of leakage about the secret key. For instance, 90% of the secret key could be leaked and that's it; that's the only leakage the adversary is allowed. So we want to take a scheme that's secure in this model and convert it into a scheme that's resilient to continual tampering. Basically, the adversary will tamper with the secret key as many times as she pleases and it should still be secure. And that's what we get. Our transformation is based on general primitives, but the specific instantiation that we have uses fully homomorphic encryption and NIZKs. In our second result, we construct encryption and signature schemes that are resilient to continual leakage and tampering, and this is based on linear assumptions over bilinear groups. The second result is what we view as our main result, and most of our paper has gone towards proving it, but due to time constraints, most of the talk will go towards showing you result one. Okay, so that's what we're going to focus on. So here's the model. We want to build a signature scheme in the continual tampering model. The challenger is going to pick his keys and send the public key across to the adversary.
The adversary gets to specify arbitrary tampering functions. The only constraint is that they have to be poly-time, but other than that, no other constraints on the tampering function. Okay, the challenger deletes the old key and replaces it by T of SK, and then the adversary gets to see signatures under the tampered secret key. Okay, and this process continues, and finally, to succeed, the adversary's goal is to succeed in a regular forgery attack: she needs to come up with a forgery that will verify with respect to the original verification key, the original public key. Okay, so that's the game. It's easy to see that this is actually impossible to achieve. Why is that? The adversary could just tamper with the secret key bit by bit and then simply use her signature queries to learn the entire secret key. So what's the adversary going to do? She's going to set the first bit to zero, sign a random message, and see if the signature scheme is signing correctly. If it's not, she knows that the first bit is one. So she's going to do this bit by bit and learn the entire secret key. Because of this, we need to assume that the circuit self-destructs. Okay, so there's something that's going to trigger the circuit to self-destruct and it blows up; what I mean is that the memory gets erased. And this has already been used in prior works on tamper-proof crypto. Okay, and I'll tell you about the conditions under which it's going to self-destruct in the next slide. Okay, but this is the model that we consider. So before we build this, I need to introduce non-interactive zero-knowledge proofs of knowledge. Most of you probably are already familiar with these, but just to make sure we're all on the same page: NIZK proofs have a prover and a verifier that share a CRS, and the prover's goal is to prove that some statement X is in a language L. Okay, that means that the prover knows the witness W.
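As a brief aside, the bit-by-bit attack described above can be made concrete with a tiny sketch. Everything here is a hypothetical toy model, not anything from the paper: a 16-bit key, a `tamper` hook, and a `check_signature` oracle standing in for "ask for a signature on a random message and verify it under the original public key". Crucially, this toy device has no self-destruct.

```python
import os

KEY_BITS = 16  # toy key length

def make_device():
    """A toy signing device with NO self-destruct: tamper() overwrites the
    stored key; check_signature() stands in for the adversary verifying a
    signature on a random message under the ORIGINAL public key."""
    true_sk = [os.urandom(1)[0] & 1 for _ in range(KEY_BITS)]
    state = {"sk": list(true_sk)}

    def tamper(t):
        state["sk"] = t(list(state["sk"]))

    def check_signature():
        return state["sk"] == true_sk  # signs correctly iff key is intact

    return true_sk, tamper, check_signature

def bit_by_bit_attack(tamper, check_signature):
    recovered = []
    for i in range(KEY_BITS):
        def zero_bit(sk, i=i):      # tampering function: force bit i to 0
            sk[i] = 0
            return sk
        tamper(zero_bit)
        bit = 0 if check_signature() else 1  # still signs? bit i was 0
        recovered.append(bit)
        if bit == 1:
            def restore_bit(sk, i=i):        # undo, so later queries work
                sk[i] = 1
                return sk
            tamper(restore_bit)
    return recovered

true_sk, tamper, check_signature = make_device()
assert bit_by_bit_attack(tamper, check_signature) == true_sk
```

With a self-destruct, the very first signing attempt on an invalidated key would erase the memory, so this loop would stop working at the first bit that happens to be one.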
And because it's non-interactive, she's going to send exactly one message, which she computes using the statement, the witness and the CRS, and the verifier is going to be able to use that to verify. We need all the standard guarantees: it has to be zero-knowledge, meaning it shouldn't tell you anything about the witness; you need it to be correct; and so on. But in addition, we also need a few extra properties. We want our proof system to be simulation-sound. What does this mean? Simulation soundness means that even if an adversary gets to see proofs of false statements, she still cannot compute a proof of a false statement herself. So that's simulation soundness. A proof of knowledge essentially says that if the adversary comes up with a valid proof, then the simulator can extract a witness out of it. And finally, we'll also need our proof to be short, meaning that the length of the proof should depend only on the length of the witness; it'll depend polynomially on the length of the witness. And we'll see where all of these are used in a bit. Okay, so this is our transformation. What do we start with? We have a scheme S that's resilient to leakage; when I say leakage, I mean bounded leakage for the next few slides. What would have been nice is a completely general transformation, but it's not fully general, in that we assume a specific property about the key generation algorithm of the signature scheme: the secret key is chosen uniformly at random from some space, and the public key is efficiently computable given the secret key. This property already holds for several bounded-leakage-resilient schemes. Okay, and we're going to use S to build S prime, which is going to be tamper-resilient. The main crucial difference between S and S prime, the change that we're going to make to S, is in the key generation algorithm.
So where before we had the secret key chosen uniformly at random, now it's going to be the output of a PRG. Okay, so SK is going to be PRG of R. And in addition, the key is going to have one more component, namely a NIZK proof. So the entire secret key is going to consist of the output of the PRG and a NIZK proof, where the proof is of the pseudorandomness of SK. Okay, and as I said before, we'll need the proof to be short, simulation-sound, a proof of knowledge, and so on. The signing algorithm is similar to the signing algorithm of the leakage-resilient scheme, except that before it issues signatures, it's first going to verify that the secret key is in a valid state. And if it's not valid, it's going to self-destruct. Also, there are some public parameters. I haven't told you what the public key is, but that should be clear on the next slide. The NIZK CRS becomes part of the public parameters. Okay, so this is our transformation. And this is the informal theorem that we get: if the original scheme that you start off with can tolerate at least |R| + |pi| bits of leakage, then the scheme that we build out of it is going to be resilient to continual tampering, where R here is the seed of the PRG and pi is the NIZK proof of pseudorandomness. Okay, why is this secure? It's a standard reduction. If there is a tampering adversary that can tamper and leak, then we'll build a leakage adversary that can leak and then break the security of the leakage-resilient scheme. So we have the leakage challenger C and the leakage adversary B, and B is going to use the tampering adversary A to help him out. The leakage challenger is going to pick SK and PK, generate his secret key and public key, and send the public key across. But it's important to note here that the secret key now is no longer the output of a PRG.
He's going to generate it uniformly at random, because that's the secret key of the scheme that we're trying to break. The leakage adversary is going to generate on his own a CRS with a trapdoor for the NIZK proofs that he's going to come up with, and he's going to send PK and the CRS across to the tampering adversary, because that's what A expects to see. Okay, so now the tampering adversary is going to ask two types of queries: tampering queries and signature queries. We'll put the tampering queries on hold and see how to handle signature queries. So let's assume nothing has been tampered with so far. A is going to send across a query saying, sign this message. And it's easy to see that if no tampering has taken place, then Sign and Sign-prime, where Sign is the signing algorithm of the original leakage-resilient scheme and Sign-prime is that of the tamper-resilient scheme, are actually equivalent. So B's life is easy: he's just going to forward the query, get back the signature, and send it back. He doesn't need to put in any effort. Okay, now what happens if A issues a tampering query? The only thing the leakage adversary can do, the only access that he has, is access to a leakage oracle, and that's what he's going to use. So A expects the tampering query T to be applied to the keys in the memory, but as far as A is concerned, the key consists of a secret key SK, a NIZK proof pi, and of course the public key. So B is going to ask a leakage query, but this leakage query is going to be applied to SK alone and not to (SK, pi), so B somehow needs to produce pi as well. What B is going to do is build a leakage circuit by hardwiring in the CRS and the trapdoor, and the leakage circuit is going to work as follows. First, it's going to compute a proof pi, a simulated proof of the statement "SK is pseudorandom".
This is a false statement, because SK was chosen uniformly at random, but because the proof system is zero-knowledge we can give a simulated proof and still be in good shape. So the circuit comes up with a proof, and now it has everything it needs to apply the tampering function. It's going to apply the tampering function to SK, pi and PK, and get back SK star, pi star and PK star. Now, if the proof pi star is valid (which it has to be, because otherwise the circuit blows up and the memory gets erased), then by simulation soundness we know that the adversary could not have come up with a proof of a false statement. So when A sent the tampering query, it had to be that SK star is actually the output of a PRG, because A doesn't know how to come up with a false proof. And because of the proof-of-knowledge property, we can actually extract R star, the seed of the PRG, in this situation. So essentially the leakage circuit extracts R star from (SK star, pi star) using the CRS and the trapdoor, and it outputs R star and pi star, basically the seed of the PRG and the NIZK proof. And the thing to observe is that once B gets this, he's got everything he needs to simulate the rest of A's queries, because he can use R star and pi star to get the current state of the secret key, which is (SK star, pi star). So he gets the entire current state of the secret key, and now he can simulate the rest of A's view completely on his own. And it's because of this leakage that we give, leakage of the seed and the NIZK proof, that we need our original leakage-resilient scheme to tolerate at least that many bits of leakage. Our scheme can also quite easily be shown to be leakage-resilient as well; it tolerates a certain amount of leakage. But since that's not the main focus, I'm not going to talk about that.
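To make the shape of the transformation concrete, here's a minimal sketch with toy stand-ins for every primitive, none of which are from the paper: a hash-based "PRG", a trivially extractable "proof" that is just the seed itself (of course not zero-knowledge; it only mimics the control flow of the real short, simulation-sound NIZK), and a hash in place of the bounded-leakage-resilient base scheme. Only the structure, keys of the form (PRG output, proof) and signing that verifies the key and otherwise self-destructs, is meant to match the talk.

```python
import hashlib, os

def prg(seed):
    """Toy stand-in for a PRG (a hash; illustrative only)."""
    return hashlib.sha256(b"prg" + seed).digest()

def nizk_prove(crs, sk, seed):
    """Toy 'proof' that sk is in the range of the PRG: just the seed.
    Trivially extractable, but NOT zero-knowledge; illustration only."""
    return seed

def nizk_verify(crs, sk, pi):
    return prg(pi) == sk  # accept iff sk really is a PRG output

def derive_pk(sk):
    """Assumed property of the base scheme: pk is computable from sk."""
    return hashlib.sha256(b"pk" + sk).digest()

def base_sign(sk, msg):
    """Stand-in for signing in the bounded-leakage-resilient scheme S."""
    return hashlib.sha256(sk + msg).digest()

def keygen(pp):
    seed = os.urandom(32)
    sk = prg(seed)                        # secret key is now pseudorandom
    pi = nizk_prove(pp["crs"], sk, seed)  # proof of pseudorandomness of sk
    return derive_pk(sk), {"sk": sk, "pi": pi, "alive": True}

def sign(pp, mem, msg):
    if not mem["alive"]:
        return None
    if not nizk_verify(pp["crs"], mem["sk"], mem["pi"]):
        mem["sk"], mem["pi"], mem["alive"] = None, None, False  # self-destruct
        return None
    return base_sign(mem["sk"], msg)

pp = {"crs": os.urandom(32)}   # non-tamperable public parameters (the CRS)
pk, mem = keygen(pp)
assert sign(pp, mem, b"hello") is not None
mem["sk"] = os.urandom(32)     # a tampering that invalidates the proof
assert sign(pp, mem, b"hello") is None and mem["alive"] is False
```

If the proof check were removed, the earlier bit-by-bit tampering attack would go through; it's the verify-then-sign step (erasing the memory otherwise) that forces any surviving tampered key to be a valid PRG output with an extractable seed.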
Okay, but this scheme is still secure only in the bounded leakage model; the adversary still gets only a bounded amount of leakage about the secret key. The main question we want to ask is: how do you actually secure against tampering and continual memory leakage? That's our second result. I'm not going to have time to go into the details, but let me try to give you a flavor of it. So in the continual tampering and continual memory leakage model, the challenger is going to pick a secret key and a public key. The adversary can adaptively specify her leakage functions and get a bounded amount of leakage in any one time period. Then she gets to specify a tampering function just as before, and she gets signatures. Okay, and at the end of the time period, the challenger is going to run a refresh procedure, an update procedure. The idea of the update procedure is to make sure that, even though the adversary has learned information about the secret key in the previous time period, once you refresh, that information becomes useless to the adversary. So the update procedure is crucial. And in our case, it's also important to note that the update procedure is actually applied not to the original secret key, but to the tampered secret key. The adversary gets to specify T completely on her own; it could be any arbitrary function. And still we somehow want to guarantee security of this update procedure. Okay, and once the update is done, there are going to be more leakage, signing and tampering queries. And I'd like to emphasize that this is the continual memory leakage model: even though in any one time period we're allowing the adversary to get only a bounded amount of leakage, over the entire lifetime of the secret key, once all the updates are done, the leakage is not bounded in terms of the length of the secret key.
So she can get an unbounded amount of leakage. And finally, to succeed, she's going to output a forgery. Now, the starting point for our work was the continual memory leakage scheme of BKKV. BKKV essentially have a scheme that's secure in this model, except that they don't consider tampering; there's no tampering in their case. Our hope was to show that the BKKV scheme itself would be secure in the model where tampering is also included. But the main challenge was that the updates were no longer secure, because the adversary could choose the tampering function on her own. Since she had full control, there was no way we could take their scheme and show that it was secure to do updates even with those tampered secret keys. So that was the main challenge, and we needed a different way of doing updates. This is our scheme; the slide by itself doesn't tell you much. Our next step was to see if we could use a proof technique similar to that of BKKV to show that our scheme was secure. But that also didn't work. For those of you who are familiar with BKKV, they have an algebraic lemma that I'm not going to tell you about, but their algebraic lemma doesn't apply in our case. So we had to come up with a new algebraic lemma, and that's the main part of our result. Hopefully this motivates you to see the paper for more details. To conclude, I gave you details of a generic transformation that converts a bounded-leakage-resilient scheme into a tamper-resilient one, and also a high-level overview of our number-theoretic construction of a scheme resilient to continual memory leakage and tampering. Thank you. Any questions for Bhavana? Let's thank our speakers for the session. Thank you.