So I'd like to tell you how to construct non-malleable codes against bounded polynomial time tampering. This is joint work with Dana Dachman-Soled and Mukul Kulkarni, who are at UMD, Huijia (Rachel) Lin, who is at the University of Washington now, and Tal Malkin, who is my advisor. Non-malleable codes were introduced a number of years ago by Dziembowski, Pietrzak, and Wichs, with the following application in mind: you're concerned about someone tampering with memory. You want to encode some information so that it's resilient to tampering attacks. The example I like to keep in mind is related-key attacks: you have some value and you're worried about someone, say, adding one to it, flipping a bit, or setting a bit to zero, and you want to prevent these sorts of things from happening. And we're in the public setting; you don't want to protect your key with another key or something like that, okay? So what sort of guarantees, informally, do we want a non-malleable code to have? Correctness, of course: if no tampering occurs, we should recover the original value. And some notion of security. Security here is going to mean that if tampering does occur, you either recover exactly what you started with, or something that's completely unrelated, and which of these two cases you're in shouldn't depend on the information you started with either. So how is this formalized? It's formalized in a real/ideal paradigm, where for any tampering function f we should be able to simulate the tampering. The simulator just flips some coins and outputs something: either a message x', or a special symbol "same".
And to argue security, what we want to say is: for any message x, if we wrap the simulator so that the "same" symbol is mapped to x, then the real experiment and this simulated experiment are indistinguishable. So the simulator is independent of the message, but the security should hold for all messages. Most of the time this notion of indistinguishability is taken to be statistical, but computational notions have also been considered, and that's what we're going to focus on here, at least later on. The goal for non-malleable codes, as in all of coding theory, is explicit constructions, where you can encode and decode efficiently, for robust tampering classes. You can't handle arbitrary tampering. Some classes that have been considered are split-state tampering, which you'll hear about later today, small-depth circuit tampering, space-bounded tampering, and a variety of others. But as cryptographers, what would we like? We would like to handle any efficient tampering procedure. Is this possible? Unfortunately no, if we additionally require that encoding and decoding are efficient. There's a simple attack: the tampering function can always decode the message, add one to it (or whatever you want), and then re-encode it. If encoding and decoding are efficient, then this tampering function is also efficient. So we can't handle arbitrary polynomial-time tampering with a fixed polynomial-time code; this is very different from the usual situation in cryptography. But if we bound the tampering function, then we can hope to make progress, simply by letting encode and decode take more time than the tampering we're protecting against. So we fix some constant c, and we're now concerned with tampering functions f computable in time n^c. Is this possible?
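To make the generic attack concrete, here is a minimal Python sketch against a toy code. The repetition "code" and the names encode/decode/tamper are purely illustrative stand-ins, not anything from the talk; the point is only that whenever encode and decode are efficient, "decode, modify, re-encode" is an efficient tampering function whose output is related to the message.

```python
# Toy illustration (NOT a real non-malleable code): any code with efficient
# encode/decode is broken by the generic decode-modify-re-encode tampering.

def encode(m: int) -> list[int]:
    # hypothetical toy encoding: 3-way repetition of the message
    return [m, m, m]

def decode(c: list[int]) -> int:
    # trivial decoding for the toy code
    return c[0]

def tamper(c: list[int]) -> list[int]:
    # runs in essentially the same time as encode/decode, so it is
    # "efficient" whenever the code itself is efficient
    return encode(decode(c) + 1)

m = 41
# tampered codeword decodes to m + 1: clearly related to m, so the
# simulator-based security definition cannot be satisfied
assert decode(tamper(encode(m))) == m + 1
```

This is exactly why the talk restricts to tampering in time n^c while allowing encode and decode to run longer.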
So yes: in the original paper, Dziembowski, Pietrzak, and Wichs showed that such codes exist using the probabilistic method, but that construction is not efficient. In later work, four years later, Cheraghchi and Guruswami gave an efficient probabilistic construction: they essentially derandomized the probabilistic argument and showed that the code can also be encoded and decoded efficiently. It's an algorithm that flips coins and outputs a code, and with high probability this code is good. Faust et al., around the same time, gave another construction in the CRS model. Before I continue, a word about the CRS: most of the time we think of a CRS as a benign thing, and in fact their construction is highly non-trivial, but unfortunately the CRS is an untamperable component of their scheme, and it is long. The CRS is larger than the circuits we're protecting against, so the tampering circuit can't even read the entire CRS. On the other hand, these results are quite powerful: they hold not just against uniform bounded polynomial-time tampering but against non-uniform tampering procedures. And there's not a lot of hope of doing anything explicit in that setting without at least some assumptions, because it would basically immediately imply very strong circuit lower bounds. But we can still ask: is it possible to construct a non-trivial code for bounded polynomial-time tampering without a CRS? We answer this in the affirmative, conditioned on some assumptions. So let's go through the assumptions. The first is an assumption from the derandomization literature that I'll elaborate on in a moment. Second, we assume trapdoor permutations, sub-exponentially secure, which I assume most of you are familiar with.
And third, we assume something called P-certificates, which perhaps fewer of you are familiar with. Assuming all of these things, with sub-exponentially secure instantiations of these objects, we get the theorem: an explicit, efficient non-malleable code that holds against any uniform n^c-time tampering, with inverse polynomial indistinguishability. What does that last part mean? It means that for any non-uniform poly-size distinguisher, the distinguishing gap is inverse polynomial, not negligible, unfortunately. It would be great if we could do better, but this is what we have, and we already think it's very exciting. Okay, so P-certificates. What is a P-certificate? I'm assuming not all of you know. It's a non-interactive argument system for any statement in P, such that the running time of the verifier and the proof length are bounded by some fixed polynomial, independent of the language you're proving statements about. CS proofs imply P-certificates. The key thing, though, is that P-certificates are a falsifiable assumption. The other sort of assumption, maybe less familiar to this audience, is the derandomization assumption. What do we mean by this, exactly? We mean assumptions of the form: E, the class of languages that can be decided in exponential time, does not have X-type circuits of size 2^(beta n), where beta is some constant and X is some circuit type. There are various assumptions of this form, depending on how you fill in the X. A brief history of derandomization: in the 80s, Yao showed that cryptographic PRGs are sufficient not just for privacy purposes, but for deterministically simulating randomized algorithms. Nisan and Wigderson observed that cryptographic PRGs are too strong for this purpose.
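Written out, the derandomization assumptions in question all have the following schematic shape; this is my rendering of what's on the slide, with $\mathsf{SIZE}_X$ standing for size of $X$-type circuits:

```latex
\[
  \mathsf{E} \;=\; \mathsf{DTIME}\!\big(2^{O(n)}\big)
  \;\not\subseteq\;
  \mathsf{SIZE}_X\!\big(2^{\beta n}\big)
  \qquad \text{for some constant } \beta > 0,
\]
% where X specifies the circuit type being filled in
% (standard circuits, nondeterministic circuits, NP circuits, ...).
```

Different choices of $X$ give the different assumptions used in the results that follow.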
Cryptographic PRGs fool arbitrary poly-time algorithms and have very strong indistinguishability guarantees (negligible distinguishing advantage), and you can relax both of these things. Nisan and Wigderson showed that if you do relax both, then a function that is merely hard on average for circuits suffices to derandomize. Later it was shown that you don't actually need average-case hardness; you can start from a worst-case assumption, in particular the assumption at the top where X is just your standard circuits. This suffices to derandomize BPP. Assumptions of this form appear in lots of work, derandomizing all sorts of things. Before we continue: the prior non-malleable codes I was just discussing can be viewed as partial derandomizations of a randomized construction, and, as I said, an explicit code would imply circuit lower bounds. So if we assume circuit lower bounds, can we hope to make progress? Unfortunately, we don't know: derandomizing these randomized code constructions is, at least to our knowledge, very different from derandomizing languages. You could view this work as a partially positive answer, but I'd like to reiterate that it's very different in that we're considering uniform tampering functions, and it's also different from the prior work in that our guarantees are computational and non-negligible. Returning to derandomization assumptions: these assumptions have applications beyond simply derandomizing things. Barak, Ong, and Vadhan showed that if you instantiate X with co-nondeterministic circuits, then together with trapdoor permutations you get one-message witness-indistinguishable proofs for NP, and if you combine the assumption with one-way functions, you get non-interactive bit commitment.
Applebaum et al. showed that if you instantiate X with nondeterministic circuits, you can construct poly-time computable incompressible functions for the class of n^c-size circuits. What is an incompressible function? For the purposes of this talk, we'll use this definition: psi is incompressible if the output of any efficient procedure that shrinks the input by half is statistically uncorrelated with psi. This notion was introduced by Dubrov and Ishai. We're actually going to use all of the results on this slide in our result. In this work, we use the assumption that E does not have exponential-size NP circuits, where an NP circuit is a circuit with SAT gates, i.e. access to a SAT oracle. And along the way we show that if you combine this with sub-exponentially secure one-way functions, you get what we call non-interactive quasi non-malleable bit commitments. I've already introduced way too many notions for ten minutes, but roughly, you can think of this object as a non-malleable code that cannot be decoded efficiently, in a very strong sense: it's a commitment, so it has a hiding property, and you can't extract any information efficiently. For those in the know: "quasi" here means it's a standard non-interactive non-malleable commitment, except that we restrict the adversary in a way that's consistent with what I've been talking about: the man-in-the-middle is less powerful than the committer and the receiver. And why are we considering this relaxed notion? Basically because we already have too many assumptions; we want to avoid time-lock puzzles or one-way functions with amplifiable hardness.
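Spelled out a bit more formally, and with the parameters hedged since this is my paraphrase of the notion as used in the talk, incompressibility of $\psi$ against $n^c$-size circuits says roughly:

```latex
\[
  \forall\, C \text{ of size } n^c,\; C : \{0,1\}^n \to \{0,1\}^{n/2},
  \quad \forall\, \text{(unbounded) } D:
\]
\[
  \Pr_{x \leftarrow \{0,1\}^n}\big[\, D(C(x)) = \psi(x) \,\big]
  \;\le\; \tfrac{1}{2} + \epsilon(n),
\]
% i.e. no efficient half-length compression of x retains noticeable
% correlation with psi(x), even with unbounded post-processing.
```

The statistical flavor of this guarantee (the decompressor $D$ is unbounded) is exactly what gets exploited later in the proof.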
Okay, but returning to our main result; if I have time I'll say more about the commitments, but I probably won't. How do we prove this theorem? Our starting point is a framework of ours from a year ago at Eurocrypt, where we showed how to take average-case hardness for some complexity class, combine it with some crypto, and get out a non-malleable code for that same complexity class, with a CRS. How does this work? We use the Naor-Yung paradigm. Say we want to encode a bit b. We give two encodings. One is a random input x to psi, our hard function for the class, such that psi(x) = b, the bit we want to encode. The other, c, is a public-key encryption of the bit b under some public key; we also need the property that decryption with the secret key can be done within the complexity class. And finally, as in Naor-Yung, we attach a proof that c and x are encodings of the same value; this proof should also have very efficient verification. So our CRS is this public key together with the NIZK CRS. And remember, even though we're using crypto, there are no secrets: it's a code, everything is public. To decode, we simply verify the proof using the CRS, and then evaluate psi on x. So how do we prove that this is a non-malleable code? We actually prove something slightly stronger than standard non-malleability, but ignore that for now. Let me sketch the hybrid argument. On the left we have an encoding of zero and on the right an encoding of one, or rather the whole tampering experiments.
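Structurally, the Naor-Yung-style encoding looks as follows. This is a toy Python sketch: psi here is parity (emphatically NOT a hard function), the "encryption" and "proof" are inert stubs, and every name is illustrative, so only the three-part shape of the codeword is meaningful.

```python
import secrets

# Toy stand-ins, for structure only. A real instantiation needs a hard
# (later: incompressible) psi, semantically secure encryption, and a NIZK.
def psi(x: list[int]) -> int:
    return sum(x) % 2          # parity: a placeholder for the hard function

def sample_preimage(b: int, n: int = 8) -> list[int]:
    # rejection-sample a random x with psi(x) = b
    while True:
        x = [secrets.randbelow(2) for _ in range(n)]
        if psi(x) == b:
            return x

def encode(b: int):
    x = sample_preimage(b)                     # first component: psi(x) = b
    c = ("Enc", b)                             # stub for Enc_pk(b)
    pi = ("proof", "x and c encode same bit")  # stub for the NIZK
    return (x, c, pi)

def decode(codeword):
    x, c, pi = codeword
    assert pi[0] == "proof"    # stub for NIZK verification against the CRS
    return psi(x)              # decoding is public: no secret key used

for b in (0, 1):
    assert decode(encode(b)) == b
```

Note that the honest decoder only ever touches x and the proof; the ciphertext c exists for the trapdoor decoding used inside the security proof.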
First we switch to simulated proofs using the zero-knowledge property of the NIZK, then we use semantic security to switch to dummy encryptions. Then we apply simulation soundness to switch to a special alternate form of decoding, which has very low complexity: using the secret key, you can decode very efficiently by simply decrypting the ciphertext. Simulation soundness guarantees that this is okay, that it won't change the output distribution. And now, looking at these experiments, we can define a class of circuits taking the input x that sits inside the low complexity class, and because of that we can deduce from the hardness of psi that the two experiments must be indistinguishable. Great. But there's still a CRS here, which is not ideal, and both elements of the CRS seem very integral. First, the public key: we need it for the special trapdoor decoding, and there can't be a way to simply get rid of it. Second, the NIZK: NIZKs without a CRS are strictly impossible. So what are we going to do? Let's focus on the first piece. Recall that we need some sort of trapdoor decoding. The idea is to suppose that psi is not just hard on average but incompressible, as I said before: if you try to compress the input to, say, half its length, then what you get is statistically uncorrelated (not completely, but statistically) with the correct value of psi. So if our ciphertext is very short, we can hope that it's infeasible for any efficient procedure, given x, to output any correlated ciphertext.
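To summarize, the hybrid sequence as I've described it is roughly the following (my schematic rendering of the slide, with the same names as above):

```latex
\begin{align*}
  H_0 &: \text{real experiment with an encoding } (x, c, \pi) \text{ of } 0 \\
  H_1 &: \text{simulated proofs (zero knowledge of the NIZK)} \\
  H_2 &: \text{dummy encryptions (semantic security)} \\
  H_3 &: \text{alternate decoding via } \mathsf{Dec}_{sk}(\tilde{c})
         \text{ (simulation soundness)} \\
  H_4, \dots &: \text{the same hybrids in reverse, for an encoding of } 1,
\end{align*}
% with the two middle hybrids indistinguishable by the average-case
% hardness of psi, since the whole experiment there lives in the
% low complexity class.
```

The key point is that after $H_3$ the experiment no longer evaluates $\psi$ honestly, so it drops into the class against which $\psi$ is hard.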
So the first thing we do is, rather than use public-key encryption, switch to just a statistically binding commitment. Then, if we look at the penultimate hybrid in the middle of the slide and blow it up, instead of the alternate decoding procedure from before, we have a two-phase alternate decoding. In the first phase we verify the proof and simply output the tampered commitment we received; the second phase extracts from this tampered commitment. The first phase, if we can switch the experiment to using just that part of the decoding, has low complexity, and it takes in the long input x and outputs a short commitment, which is great, because that's exactly the situation incompressibility addresses. The second phase is inefficient, but because the incompressibility guarantees are statistical, we don't care; it's enough to argue indistinguishability already at the first phase, just from incompressibility. So great, we've taken care of one piece, but we still have the NIZK CRS. NIZKs without a CRS are impossible, but that's for non-uniform provers. Because we're considering the uniform tampering setting, there's some hope. In fact, an older paper of Barak and Pass considered exactly this notion. They showed how to construct what they called one-message zero-knowledge proofs: a NIZK without a CRS, with guarantees against uniform adversaries. And it's not too difficult to deduce, using another result I mentioned before and the FLS framework, that our assumptions one, two, and three are enough to instantiate these one-message zero-knowledge proofs. The problem is, if you recall, we don't need just zero knowledge. We still need simulation soundness.
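The two-phase split can be sketched like this in Python; verify and extract are hypothetical stand-ins (in the real argument, extraction is brute-force opening of the statistically binding commitment and need not be efficient), so only the division of labor is meaningful.

```python
# Two-phase alternate decoding from the hybrid argument (illustrative only).

def phase1(tampered_codeword, verify):
    """Efficient phase: check the proof, then forward only the SHORT commitment.

    Long input x in, short output c out: exactly the compression shape
    that incompressibility of psi rules out for correlated outputs.
    """
    x, c, pi = tampered_codeword
    if not verify(x, c, pi):
        return None                    # invalid tampered codeword
    return c

def phase2(c, extract):
    """Inefficient phase: open the commitment.

    Allowed to be inefficient because the remaining incompressibility
    guarantee is statistical.
    """
    return None if c is None else extract(c)

# Toy stand-ins so the sketch runs:
toy_verify = lambda x, c, pi: pi == "ok"
toy_extract = lambda c: c[1]           # commitment stub of the form ("com", bit)

codeword = ([0, 1, 1, 0], ("com", 1), "ok")
assert phase2(phase1(codeword, toy_verify), toy_extract) == 1
```

The design point is that indistinguishability only needs to be argued at the output of phase1, where the experiment is both efficient and compressing.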
But that's not entirely true, because we're in this very specific setting where the tampering function is relatively efficient. So our solution is, first of all, to replace the commitment with a non-malleable commitment; this basically gives us the same guarantees that simulation soundness was giving us before. And second, we observe that we don't need the really strong notion of simulation soundness: we don't care about arbitrary attacks, because all of our attacks have a very specific form, where, given the simulated proof, you maul it very efficiently, in n^c time, to get another proof. So the notion of quasi non-malleable commitment is in fact sufficient for our purposes. So just to back up: we show that assuming E is hard for exponential-size NP circuits, assuming also sub-exponentially secure trapdoor permutations and P-certificates, we can construct efficient non-malleable codes for bounded polynomial-time tampering. And along the way, we construct non-interactive quasi non-malleable commitments and show some other connections to complexity theory that I hope you'll find interesting. And I'll end there.

[Audience question.] Yes, yes: the problem is that detecting whether a code is good or not is hard. That's the big difference. For a language you can take majority or something like that; here you can't, exactly. Any other questions? [Audience:] Cheraghchi and Guruswami, you mentioned, had a result that did not use a CRS and gave a non-malleable code in the same model. This was shown on one of your early slides? [Speaker:] On the initial slide, yes.
Yeah, but again, it's not so different, because it's not a fully explicit construction. All they show is a Monte Carlo style procedure that with high probability outputs a good code, but you can't really put your finger on whether a given code is good or not. With overwhelming probability it's good, but you still don't know. And you can view the CRS as a similar setting: once you fix a CRS, it will be a good code. Any other questions? If not, I'll ask one: what's the rate of your code? [Speaker:] Some fixed polynomial. [Audience:] You said you work with uniform, bounded polynomial-time tampering and inverse polynomial error. Do you think you can get negligible error, or go to non-uniform tampering? [Speaker:] That's a very good question. [Audience:] Do you think it's doable, or is there some barrier? [Speaker:] The barriers, I think, are, for example, the incompressibility. Benny can probably speak more to this, but the reason the error is only inverse polynomial is that the incompressibility only gives you polynomial distance guarantees. So at least following this framework, you would probably need to improve that first. [Audience:] Do you think you can prove it's impossible? [Speaker:] Honestly, I don't know. It's a good question. Yes, yes, that's a very good point; you may be correct. Okay, if there are no more questions, let's thank the speaker again.