The first talk is going to be on leakage-resilient zero knowledge, by Sanjam Garg, Abhishek Jain, and Amit Sahai. And Sanjam is going to give the talk. Hi. Thanks, Shafi, for the introduction. Today I'm going to talk about leakage-resilient zero knowledge. This is joint work with Abhishek Jain and Amit Sahai at UCLA. So traditionally, when we talk of cryptography, we allow the adversary access to devices only via a well-specified input-output behavior: it can interact with the device only according to this well-specified interface. But as it turns out, this is not always the case. An adversary can potentially, in certain cases, obtain additional information about the secrets possessed by a cryptographic system, which could ultimately lead to a total collapse of the security of the system. This is what is dealt with in the area of leakage-resilient cryptography, where we allow the adversary some kind of additional access to the cryptographic system, through which it can obtain additional information about the secrets held in the device. A lot of prior work has happened in this exciting area of leakage-resilient cryptography, but it has focused on leakage-resilient primitives and on tamper-resilient and leakage-resilient circuits. On the side of leakage-resilient interactive protocols, however, the work has been severely limited, especially in terms of the kind of leakage that is permitted in these protocol settings. In this work, we focus on the setting where we allow leakage on the entire state of the honest parties, at any time during the protocol execution; it can be arbitrary kinds of leakage, with no restriction. That is the focus of this work, and that is how it departs from previous work. Now, when we talk of protocols, zero knowledge is a very fundamental notion in interactive protocols, so it makes sense to study leakage resilience in the context of zero-knowledge proof protocols. So let me give you a brief overview of the setting.
So in a zero-knowledge proof system, you have a prover and a verifier, and the prover is trying to convince the verifier of the validity of a statement X. You can think of X as a graph, and the prover is trying to convince the verifier, let's say, that this graph is Hamiltonian. In this setting we require security against a cheating verifier, shown here in red: this cheating verifier should not learn anything beyond the validity of X. In particular, in the example I gave, it should not learn anything more than the fact that the graph X is Hamiltonian. This is formalized by saying that for every cheating verifier, there exists a simulator S that simulates the view of this cheating verifier. OK, so moving on to the setting of leakage, we want to give this cheating verifier, in addition to the ability to interact with the prover in this protocol, the ability to obtain certain leakage information from the prover. And the way he does that is that he is allowed to send leakage queries at any time during the protocol. You can think of such a query as a function, or a circuit, that the verifier sends to the prover. The prover evaluates this function or circuit on its entire state, in particular its input, that is, the witness corresponding to the statement, and all the random coins that have been used in the protocol so far, and sends the result back to the verifier. I have shown just one leakage query here, but you should think of an arbitrary number of leakage queries, which are adaptive and can happen at any time during the protocol execution. Now, in this setting, if we wanted to guarantee that the verifier cannot learn anything beyond the validity of the statement X, that seems unreasonable, because you can just take f to be the identity function, and then this function leaks the entire witness. So we cannot hope to achieve the standard notion of zero knowledge.
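The leakage model described above can be sketched in a few lines of code. This is a toy illustration only, with hypothetical names; the point is just that a leakage query is an arbitrary function evaluated on the prover's entire state, which is the witness plus all random coins used so far.

```python
import hashlib
import os


class Prover:
    """Toy prover state for illustration (hypothetical, not from the paper)."""

    def __init__(self, witness: bytes):
        self.witness = witness
        self.coins = b""  # record of all random coins used so far

    def flip_coins(self, n: int) -> bytes:
        r = os.urandom(n)
        self.coins += r  # every coin used in the protocol is recorded
        return r

    def answer_leakage(self, f):
        # The verifier's leakage query f is an arbitrary function of the
        # prover's ENTIRE state: the witness plus all coins used so far.
        return f(self.witness, self.coins)


p = Prover(witness=b"secret-hamiltonian-cycle")
p.flip_coins(16)

# A benign query: leak a single bit (parity of a hash of the state).
bit = p.answer_leakage(lambda w, r: hashlib.sha256(w + r).digest()[0] & 1)
assert bit in (0, 1)

# The identity function leaks the whole witness -- this is exactly why the
# standard notion of zero knowledge is unachievable in this model.
assert p.answer_leakage(lambda w, r: w) == b"secret-hamiltonian-cycle"
```

Since nothing stops the verifier from sending the identity function, the definition has to be relaxed, which is what the witness-oracle formulation below does.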
But before we move on to what can be achieved, or what we hope to achieve, there are some models that have been considered on which I want to briefly say a few words. One possible model you could consider is the only-computation-leaks-information model, where information is only leaked when some kind of computation is performed. But as it turns out, this model is often problematic in certain application scenarios, as has been shown earlier, so we do not want to consider this weakening. I also want to stress that even if you were to assume this model, we have an impossibility result showing that you cannot achieve the standard notion of zero knowledge. Another option could be to have some kind of pre-processing phase, where the prover can do some leak-free pre-processing. The point to note here is that this again limits applicability, and we do not want to limit ourselves by having a leak-free phase, whether before the protocol, during the protocol, or after the protocol. And again, just as in the only-computation-leaks-information model, if you were to assume only leak-free pre-processing, we can argue that it would still be impossible to achieve the standard notion of zero knowledge. So before I give the actual definition, let me summarize what we want. We want a setting where we can leak on the entire state of the prover at any time during the protocol; we do not want any leak-free phases; and we want a meaningful notion that is useful in application scenarios. As I mentioned earlier, we cannot achieve the standard zero-knowledge guarantee, simply because the simulator has no way of simulating queries that depend directly on the witness. So we will have to relax the definition in some sense. The goal is to be able to simulate these leakage queries, and to help the simulator achieve this, we are going to allow the simulator access to a witness oracle.
So, the witness oracle: the simulator has access to an oracle that is in possession of the witness, and the simulator can obtain responses to queries from this witness oracle. We are hoping that the simulator will be able to use this witness oracle to simulate the leakage queries. So we have a kind of real/ideal paradigm where the ideal world is also leaky; in particular, as I said, the simulator can obtain leakage on the witness. Now, of course, the function F with which the simulator queries the witness oracle could itself be the identity function, and the simulator could leak the entire witness, in which case it could trivially simulate the protocol, and we would not achieve what we desire. We want the simulator to use this witness oracle only for simulating the leakage queries, so we want to restrict the simulator's ability to use it. And how are we going to do that? We are going to limit how much leakage the simulator can obtain from the witness oracle in the ideal world. Of course, this limit has to be in correspondence with the amount of leakage that happens in the real world; it should be, in some sense, proportional. You could consider any function here; we consider a simple linear function, where lambda is the leakage parameter, and the total leakage in the ideal world is bounded by lambda times the total leakage that happens in the real world. You can see that if this parameter is close to one, then we can say that the verifier in the real world learns nothing beyond the validity of the statement X and the leaked information. So he does learn something more, but the protocol together with the leakage does not convey anything more than what the leakage alone conveys.
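The linear bound can be made concrete as a small accounting sketch. Everything here, class and method names included, is hypothetical bookkeeping for illustration; it just enforces that total ideal-world leakage stays within lambda times total real-world leakage.

```python
class LeakageBudget:
    """Hypothetical accounting of the simulator's witness-oracle budget."""

    def __init__(self, lam: float):
        self.lam = lam    # leakage parameter, lambda = 1 + epsilon
        self.real = 0     # bits leaked by the verifier in the real world
        self.ideal = 0    # bits the simulator got from the witness oracle

    def record_real(self, bits: int) -> None:
        self.real += bits

    def query_oracle(self, bits: int) -> None:
        # Ideal-world leakage must stay within lambda * real-world leakage.
        if self.ideal + bits > self.lam * self.real:
            raise RuntimeError("simulator would exceed its leakage budget")
        self.ideal += bits


budget = LeakageBudget(lam=1.1)   # lambda = 1 + epsilon, epsilon = 0.1
budget.record_real(100)           # verifier leaked 100 bits in the real world
budget.query_oracle(110)          # allowed: 110 <= 1.1 * 100
try:
    budget.query_oracle(1)        # one more bit would exceed the budget
except RuntimeError:
    pass
```

With lambda close to one, the simulator's oracle queries reveal barely more than what the real-world leakage already revealed, which is what makes the relaxed definition meaningful.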
This idea of a leaky ideal-world notion, where you have a witness oracle in the ideal world, is not completely new; it goes back to the idea of knowledge complexity introduced by Goldreich and Petrank in 1991. The crucial difference, however, is that in their setting the protocol itself inherently leaked information, and the witness oracle was there to help simulate a protocol that was inherently leaking. In our case, the leakage is due to side-channel attacks; our protocol itself does not leak any information. We need the witness oracle only to simulate the side-channel leakage: if no leakage happened in the protocol, we would not need any oracle in the simulation. At this point, I also want to stress the notion of leakage-oblivious simulation. I mentioned briefly that we are trying to restrict the simulator's use of the witness oracle so that it uses it only for simulating the leakage queries. We want to restrict it even further, in the sense that we do not want the simulator to actually see the answers to the leakage queries, only to forward the queries and the responses. So for every leakage query that happens in the real world, the simulator massages that query and sends it to the witness oracle, and the response is sent back directly to the cheating verifier; the simulator does not get to look at the answer. This kind of stronger property is needed for some applications; I am not going to say a lot about it here, but you can look at the paper. So now, getting to our results. The main result in our paper is a leakage-resilient zero-knowledge interactive proof system, based on general assumptions, where the leakage parameter lambda is one plus epsilon, for any small constant epsilon you want. In fact, we show that this parameter is optimal, in the sense that you cannot do better than one.
To the best of our knowledge, this is the first positive result on handling arbitrary leakage during protocol execution. The second result that we have is on leakage-resilient non-interactive zero-knowledge proofs, also under standard assumptions. I also want to briefly point you to the exciting concurrent work by Bitansky, Canetti, and Halevi. We also have some applications of our results. The first application is to universally composable secure multi-party computation. In the universally composable setting, we know that we need a trusted setup to achieve these kinds of composability guarantees, and there are known results based on tamper-proof hardware; but in all those works, the assumption is that the tokens used are completely tamper- and leakage-resilient. In this work, we relax that assumption and show that even if certain information could be leaked from the tokens, you can still achieve UC security. The second application is fully leakage-resilient signatures, in the bounded-leakage and continual-leakage models. Such results are not new; they were presented recently in several papers. However, the new point here is that our scheme is also secure in the noisy-leakage model, where the earlier schemes were not. So let me get back to the technical part, and touch on the core technical result, which is the one-plus-epsilon leakage-resilient zero-knowledge proof system. As I mentioned earlier, the goal is to be able to simulate the leakage queries that happen during the interaction, and for that we have the witness oracle to help the simulator. In doing so, the simulator must be consistent with its past actions. What I mean by that is: a leakage query might arrive after certain steps of the protocol have already happened, and the responses to these leakage queries must be consistent with those past actions.
Furthermore, the leakage together with the protocol itself should not reveal to the verifier that the simulator is cheating, or fooling the verifier, in the interaction. In particular, you can again think of this function as being the identity function. I love the identity function. In this case, it would leak the input, that is, the witness for the statement, and all the random coins. This might sound something like just corrupting the prover at that point, and this is actually the case: if you give the entire state to the cheating verifier, then the input and the random coins together should explain the actions of the simulator exactly as an honest prover strategy would. What I mean by that is, given this input and these random coins, if the prover followed the honest prover strategy, it should generate exactly the same messages as were generated in the protocol. This sounds very similar to adaptive security, and then the key question would be: can we use adaptive security to achieve leakage resilience? To say a little bit about that, let me recall adaptive security. In the adaptive security setting, the adversary can corrupt any party during the protocol execution, as it wishes. Whenever a party P is corrupted, the adversary learns its entire state, the input and the random coins of that party. The job of the simulator, who is simulating this honest party P in the interaction, is that whenever P gets corrupted, it must produce the input and the associated random coins. It receives the input at that point, and it must generate random coins that are consistent with the transcript: just like in the previous setting, given the input and the random coins, following the honest prover strategy must generate the same messages that were sent in the protocol. This can be achieved using standard techniques, for example equivocal commitments.
I'll get to equivocal commitments in a bit. So the question is: can adaptive security be used to get leakage resilience? Let's consider a simple example and see how it goes. Consider the standard protocol for graph Hamiltonicity, where both the prover and the verifier have a graph, and the prover is trying to convince the verifier that this graph is Hamiltonian. The prover takes the graph, randomly permutes it, and generates a commitment to the permuted graph, which he sends to the verifier. At this point, the verifier sends a bit b. If b is zero, the prover just opens all the commitments and shows that this is a random permutation of the original graph; if b is one, he opens only the cycle in the graph. This protocol can be argued to be a zero-knowledge protocol, with some modifications. Now, to make this protocol adaptively secure: if you used equivocal commitments in the protocol, it would be adaptively secure. Let's see how. The simulator would behave in the same way: it would just randomly permute the graph, generate equivocal commitments, and send them to the verifier. The property of equivocal commitments is that the simulator can open any commitment to any value it wishes, at any point; it can open a value to zero or to one as it wishes in the protocol. So the simulator sends these equivocal commitments to the verifier, and as you would think, because the simulator has this ability to magically open anything to anything it wants, the protocol would be adaptively secure. But the problem with leakage resilience is: what if a leakage query happens before the challenge bit b is actually sent?
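One honest round of the Hamiltonicity protocol just described can be sketched as follows. This is a toy sketch under loud assumptions: the hash-based commitments stand in for whatever commitment scheme the actual protocol uses (they are hiding only heuristically), the graph is a tiny hard-coded 4-cycle, and all names are made up for illustration.

```python
import hashlib
import os
import random


def commit(bit: int, rand: bytes) -> bytes:
    # Toy hash-based bit commitment: hiding only heuristically, for illustration.
    return hashlib.sha256(bytes([bit]) + rand).digest()


def permute(adj, pi):
    # Relabel vertices of adjacency matrix adj by permutation pi.
    n = len(adj)
    return [[adj[pi[i]][pi[j]] for j in range(n)] for i in range(n)]


# A 4-cycle graph; its Hamiltonian cycle 0-1-2-3-0 is the prover's witness.
n = 4
adj = [[0] * n for _ in range(n)]
cycle = [0, 1, 2, 3]
for k in range(n):
    u, v = cycle[k], cycle[(k + 1) % n]
    adj[u][v] = adj[v][u] = 1

# Prover: pick a random permutation pi, commit entry-wise to the permuted graph.
pi = list(range(n))
random.shuffle(pi)
perm_adj = permute(adj, pi)
rands = [[os.urandom(16) for _ in range(n)] for _ in range(n)]
coms = [[commit(perm_adj[i][j], rands[i][j]) for j in range(n)] for i in range(n)]

# Verifier: send a random challenge bit b.
b = random.randrange(2)

if b == 0:
    # Prover opens everything and reveals pi; verifier checks every opening
    # and that the opened matrix really is adj relabeled by pi.
    assert all(commit(perm_adj[i][j], rands[i][j]) == coms[i][j]
               for i in range(n) for j in range(n))
    assert perm_adj == permute(adj, pi)
else:
    # Prover opens only the commitments on the (permuted) cycle edges;
    # verifier checks each opening and that each opened entry is an edge.
    inv = [0] * n
    for i, v in enumerate(pi):
        inv[v] = i
    for k in range(n):
        i, j = inv[cycle[k]], inv[cycle[(k + 1) % n]]
        assert commit(perm_adj[i][j], rands[i][j]) == coms[i][j]
        assert perm_adj[i][j] == 1
```

The leakage problem is visible in this sketch: the openings `rands` exist in the prover's state before `b` arrives, so a leakage query sent before the challenge can already constrain how the commitments must later be opened.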
In this setting, the leakage query could leak certain information about the openings, and the simulator must remain consistent between its response to this leakage query and the responses it sends later, without actual knowledge of the bit b; this turns out to be problematic and hard to achieve in our setting. So, as it turns out, for these reasons adaptive security does not imply leakage resilience. Just to recap why that is the case: in adaptive security, there is no need to simulate a party after the corruption happens. After a party P is corrupted, I just provide the random coins and the input, and my job is done. But in the case of leakage resilience, even after a party is, in some sense, partially corrupted, that is, some information about its randomness and state has leaked, I must still continue to simulate its future actions, without knowledge of what was leaked; and there could be future leakage as well, and the future messages must be consistent with the previous leakage, and the future leakage must also be consistent with the previous messages. So the key idea that we have is to give the simulator two ways to cheat. One way is to allow the simulator to cheat in the protocol messages; the other is to allow it to cheat in the leakage queries. There is a delicate balance in how the two techniques interact, and we need to make sure the simulator does not step on its own feet in using the two techniques together. The second method of cheating that we use is not new: we just extract the verifier's challenge for simulation. The interesting part is that we need to use both techniques simultaneously in our protocol to achieve leakage resilience.
Further, as I mentioned, we wanted a precise bound on the amount of leakage in the ideal world with respect to the amount of leakage that occurs in the real world, and for that we use ideas from Micali and Pass on precise simulation, which allow us to get a tight bound on the amount of leakage that happens. The second result that we have is on leakage-resilient non-interactive zero-knowledge proofs. As I said, the problem with leakage-resilient interactive proofs was that I had to keep simulating even after a leakage query happened; but in the case of non-interactive proofs this problem does not arise, because by definition these proofs are non-interactive, and therefore the obstacle that prevented adaptive security from implying leakage resilience does not come up: here, adaptive security does imply leakage resilience. So you can pick any off-the-shelf adaptively secure NIZK proof system, like GOS, and it will automatically be leakage resilient. And I have four seconds left, thank you. Well, while the next speaker is setting up, we have time for a quick question. All right, so let's thank Sanjam again.