OK, so thank you for the introduction.

Consider IO access to a device. There is a device D with some secret s stored in its memory, and there is a user. The user can send some input x and get a reply D(x), and this interaction can happen over many runs.

However, we all know the implementation of the device may not be perfectly secure. There are the well-known side-channel attacks initiated by Kocher, and there is a celebrated result by Halderman et al. showing that a cold boot attack can reveal many parts of your secret s. We capture this by the following attack: the adversary can specify a function g and obtain the leakage g(s). In the continual model this can happen over many runs, so he can send another g' and get g'(s), and so on. In another type of attack, initiated by Biham and Shamir, the adversary may tamper with the memory. We capture this by letting the adversary send a function f and replace the secret s with f(s); again, in the continual attack this can happen many times.

So it is very natural to consider both attacks together, and this was considered by Kalai, Kanukurthi, and Sahai last year. In this model, the adversary can make IO queries, get leakage, and also tamper with the memory, and all of this can happen over many runs in arbitrary order. One remark: in this model, the computation or update happens between the attacks. This is unlike the model of Micali and Reyzin, who consider leakage during the computation. In this talk we work in our model, where we assume computation does not leak; with other techniques we can achieve further results that also deal with computation leakage, but that is not in this talk.

Now consider previous work. If only tampering is allowed, then starting with Gennaro et al. there is a series of positive results. If only leakage is allowed, there are even more positive results in various models. However, in the combined attack model, we showed that if the device has no access to randomness, then it is impossible to achieve any security, even in very restricted attack models. If there is fresh randomness on each update, then it is possible to construct encryption and signatures. But where do we get randomness while under attack? As the previous talk also motivated, randomness can be a very precious resource, so we do not want to assume randomness in this setting.

Our goal, then, is an architecture that tolerates both attacks at the same time without assuming on-device randomness. Stated this way, the goal is impossible; we cannot set a goal that contradicts our own impossibility result. So we need a reasonable restriction on the adversary's leakage and tampering power.

Let me state our main result. We give a generic compiler that, given any device D, produces a secure version of D. Both devices have identical IO behavior, and security is leakage and tamper resilient in the split-state model with a common reference string.

Now let me define this split-state model. Instead of storing your secret in one place, you store it in two places that are physically far apart, and the two parts are attacked separately. This model was not invented by us; it was considered before by Dziembowski, Pietrzak, and Wichs, by Dodis, Lewko, Waters, and Wichs, by Halevi and Lin, in the study of two-source extractors, and in many other places. So, more concretely, the attack interface looks as follows.
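Here is a minimal sketch of that interface; the class and method names are illustrative, not from the paper. The secret is held as two states M1 and M2, and leakage and tampering each act on the two halves separately:

```python
# Sketch of the split-state attack model: the device holds two states
# M1, M2, and the adversary can interleave IO queries, leakage queries,
# and tampering queries over many runs, in arbitrary order.
class SplitStateDevice:
    def __init__(self, M1, M2, functionality):
        self.M1, self.M2 = M1, M2           # the two physically separated states
        self.functionality = functionality  # the underlying device D

    def query(self, x):
        # Normal IO access: the user sends x and gets D's reply.
        return self.functionality(self.M1, self.M2, x)

    def leak(self, g1, g2):
        # Leakage attack: the adversary learns g1(M1) and g2(M2),
        # each function applied to one half only.
        return g1(self.M1), g2(self.M2)

    def tamper(self, f1, f2):
        # Tampering attack: the halves are replaced by f1(M1), f2(M2).
        self.M1, self.M2 = f1(self.M1), f2(self.M2)
```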
In this model, the adversary specifies two leakage functions g1, g2 and gets g1(M1) and g2(M2). Similarly, for tampering he specifies two tampering functions f1, f2, and the two halves are replaced with f1(M1) and f2(M2).

Previously, Dziembowski, Pietrzak, and Wichs identified and constructed a very powerful primitive called a non-malleable code, and showed how to use non-malleable codes to protect against tampering attacks. The idea is to encode a string s into a codeword c — in the split-state model, c is the pair (M1, M2) — such that tampering with the codeword is useless. The high-level idea of non-malleability is that mauling the codeword does not reveal anything about the encoded secret. Consider the following experiment: you have a string s that gets encoded into some codeword with some structure, and you tamper with the codeword. Only two outcomes are allowed: either the tampering does not really hit the codeword, leaving it unchanged, or it completely destroys the information about s, resulting in something totally unrelated.

Formally, we say that for every function f in the class F and all input strings s, s', the following tampering experiments are indistinguishable. Tamper_f(s) samples c <- Enc(s); if f(c) = c, it outputs the special symbol `same`, which captures the first outcome, where the tampering function changes nothing; otherwise it outputs Dec(f(c)). Non-malleability requires Tamper_f(s) to be indistinguishable from Tamper_f(s'): whenever f changes c, the decoded result should look the same regardless of the encoded string.

We know this is impossible for the class of all functions in general. However, DPW showed how to construct such codes in the split-state model with the help of a random oracle, and their result holds for all unbounded functions. We construct them in the split-state model with the help of a common reference string, for all polynomial-size functions. We claim this is a great improvement in terms of randomness efficiency, because we do not want randomness: a random oracle requires a lot of randomness, and a common reference string is definitely better than a random oracle. What do we mean by polynomial-size functions? The class F consists of all polynomial-size functions, and in the common-reference-string model the tampering experiment has access to the CRS, as do the encoding and decoding algorithms.

So how can a non-malleable code protect against tampering attacks? We show this by a simulation paradigm: the adversary's view in the real attack can be simulated in a world where no secret is involved. Consider the two experiments; by the non-malleability of the code, they are indistinguishable.

Now we come to our construction, which is very simple. To encode a string s, we sample a public key and a secret key. We store the secret key on the left-hand side; on the right-hand side we store the public key, an encryption of the string s, and a non-interactive zero-knowledge proof — and this is why we need a CRS. We require the encryption scheme to be leakage-resilient, and we require a unique pk for each sk and a unique sk for each pk; this is not precise, but just think of pk and sk as being in one-to-one correspondence. We also require the proof to be a non-malleable, or robust, NIZK proof of knowledge of sk and of the decryption of c.
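To show the shape of this encoding, here is a structural sketch with insecure toy stand-ins for the real primitives: the "encryption" below is not leakage-resilient and the "proof" is just a hash, not a robust NIZK; they only mark the slots in Enc(s) = (M1, M2) = (sk, (pk, c, pi)) and the checks Dec performs.

```python
import os, hashlib

CRS = os.urandom(32)  # stand-in for the common reference string

def keygen():
    # Toy 1-1 key pair: pk is determined by sk, modeling the requirement
    # of a unique pk per sk (and vice versa).
    sk = os.urandom(32)
    return hashlib.sha256(b"pk" + sk).digest(), sk

def toy_encrypt(sk, s):
    # Insecure placeholder: a pad derived from sk. The real scheme is a
    # leakage-resilient public-key encryption of s under pk.
    pad = hashlib.sha256(b"pad" + sk).digest()
    return bytes(a ^ b for a, b in zip(s.ljust(32, b"\x00"), pad))

def toy_decrypt(sk, c):
    pad = hashlib.sha256(b"pad" + sk).digest()
    return bytes(a ^ b for a, b in zip(c, pad)).rstrip(b"\x00")

def toy_proof(pk, c):
    # Placeholder for the robust NIZK proof of knowledge of sk and of
    # the plaintext of c; a hash is NOT a NIZK, it only marks the slot.
    return hashlib.sha256(CRS + pk + c).digest()

def encode(s):
    pk, sk = keygen()
    c = toy_encrypt(sk, s)
    return sk, (pk, c, toy_proof(pk, c))   # M1 = sk, M2 = (pk, c, pi)

def decode(M1, M2):
    sk, (pk, c, pi) = M1, M2
    if pk != hashlib.sha256(b"pk" + sk).digest():  # sk must match pk
        return None
    if pi != toy_proof(pk, c):                     # proof must verify
        return None
    return toy_decrypt(sk, c)
```

For instance, `decode(*encode(b"my secret"))` returns `b"my secret"`, while tampering that changes sk alone already fails the pk check in decode.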
Robust NIZK means that even with access to the simulator, the adversary can only produce proofs whose witnesses can be extracted. This captures the requirement that he can only produce proofs for which he knows the witness.

Let us try to prove security. We have a reduction talking to the challenger of the leakage-resilient encryption scheme. The challenger sends some pk, and the reduction samples the CRS. The adversary announces: I am going to attack the tampering experiment with tampering function f and strings s1, s2. The reduction asks for leakage, and the challenger replies. Then the reduction says: s1, s2 are the strings I am going to attack, and the challenger encrypts one of them. Now the reduction needs to simulate the tampering experiment for the adversary. The adversary outputs a guess i', and the reduction answers: I think it is s_{i'}.

So the reduction's job is to compute the tampering experiment. Remember, the experiment must output either `same` or Dec(f(c)). The reduction first needs to prepare a codeword, but one part of the codeword is M1 = sk, and the reduction does not have sk, so hopefully we will not need this piece of information. The reduction also needs to provide a proof pi without knowing the witness; but because the reduction simulates the CRS, it can use the zero-knowledge simulator to produce pi.

Then we consider the tampering function f = (f1, f2). Consider f2 on input M2, and suppose it really modifies M2, producing some (pk', c', pi'). There are the following cases. If pi' is an invalid proof, the tampering experiment should output null, which is not a big deal. Otherwise, we can extract the witness s' and sk' from the proof pi' — this is because of the robust NIZK — and it seems the output of the experiment should be s'. However, this is only true if sk' equals f1(M1); otherwise it should be null, and this piece of information the reduction does not know. On the other branch, where f2 does not change M2, we need to consider whether f1 changes M1: if it does not, the experiment should output `same`; if it does, the tampered sk no longer matches pk, so the experiment should output null. (I will sketch this case analysis in code below.)

The high-level message is that there is some information the reduction cannot compute by itself, and this is where the leakage queries come in: the reduction makes leakage queries to figure out which case it is in. I do not have time to explain exactly which leakage query the reduction needs to make; it is a little bit tricky, and I will leave it to you. In addition to non-malleability against tampering, our code is also leakage-resilient: if the adversary queries some leakage function g, the reduction can forward that query to the challenger, and the proof still goes through.

Now let me tell you, very briefly, how we use the non-malleable code to achieve leakage-plus-tampering resilience. Recall that our goal is to design a compiler. First, we consider a randomized construction, meaning the compiler outputs a randomized device. The compiler takes the original device D and encodes the secret s into some (M1, M2), stored in the two halves of memory. On input x, the compiled device first decodes, then refreshes the encoding of its secret, and then outputs D(x). The refreshing step is where we use randomness.
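As promised, here is a sketch of that case analysis, matching the codeword shape from before. Here `verify` and `extract_witness` are hypothetical stand-ins for NIZK verification and the robust-NIZK knowledge extractor; the real reduction cannot evaluate all of these branches itself, which is exactly why it needs the leakage queries.

```python
SAME, NULL = "same", None

# Outcome of the tampering experiment for f = (f1, f2) applied to a
# codeword (M1, M2) = (sk, (pk, c, pi)), following the cases above.
def tamper_outcome(M1, M2, f1, f2, verify, extract_witness):
    new_M2 = f2(M2)
    if new_M2 != M2:                       # f2 really modified the right half
        pk2, c2, pi2 = new_M2
        if not verify(pi2, pk2, c2):
            return NULL                    # invalid proof: output null
        s2, sk2 = extract_witness(pi2)     # robust NIZK: witness extractable
        return s2 if sk2 == f1(M1) else NULL  # keys must still match
    else:                                  # f2 left M2 unchanged
        return SAME if f1(M1) == M1 else NULL  # changed sk fails the pk check
```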
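And here is a minimal sketch of the randomized compiled device; the class name is mine, and `encode`/`decode` stand for the non-malleable code's algorithms (for example, the toy ones sketched earlier).

```python
class CompiledDevice:
    """Sketch of the compiler's output for an original device d(s, x)."""
    def __init__(self, d, s, encode, decode):
        self.d, self.encode, self.decode = d, encode, decode
        self.M1, self.M2 = encode(s)       # split-state encoding of the secret

    def run(self, x):
        s = self.decode(self.M1, self.M2)  # 1. decode the secret
        self.M1, self.M2 = self.encode(s)  # 2. refresh: fresh random encoding
        return self.d(s, x)                # 3. answer exactly as D would
```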
In the second construction, we show how to derandomize the previous one. Instead of storing only the encoding of your secret, you additionally store a seed. On each invocation, the device first decodes, then plugs the seed into a PRG; now we have a lot of pseudorandom bits, so we can refresh the encoding, together with a new seed, using these pseudorandom coins, and then output D(x). In the full version of the paper we show this is secure, and we do not run into trouble with circular security.

Let me conclude. We trade off perfect randomness for the split-state model, and we get leakage and tamper resilience for every functionality. We also achieve after-the-fact leakage and tampering, a problem identified by Halevi and Lin last year, and we achieve very strong simulation-based security. Finally, our new non-malleable code may be of independent interest. OK, thank you.