Okay, so I want to tell you about this note on perfect correctness by derandomization. This is joint work with Vinod. This talk is really about randomness; that's the main theme. We all know that randomness is crucial, that it's necessary for cryptography. It's the air that we breathe; without it, we can't do anything. But it also comes with the usual price of randomized algorithms, which is that they make errors. For example, public-key encryption schemes, which are randomized, often make decryption errors, and this will be the running example for this talk.

So why should we care about errors? After all, they typically occur only with negligible probability. First, it's a natural question from an aesthetic perspective: why should we have to tolerate errors if we want security? But beyond that, and more significantly, errors often lead to insecurity; they very often lead to attacks. One simple example is when you try to construct a commitment scheme from public-key encryption: if you have decryption errors, you are able to equivocate. Another quite well-known example is chosen-ciphertext attacks, and these are attacks not just on specific schemes, but also against our methods of immunizing schemes against such attacks.

So what can we do about it? Can we avoid such errors? This is a question that has been studied quite extensively in the context of algorithms and complexity, and there the answer is very clean: derandomization. We can derandomize probabilistic algorithms and get rid of errors. In fact, under mild hardness assumptions, we know how to take any randomized algorithm and switch its coins to pseudo-random coins, Nisan-Wigderson coins; I'll talk about those soon. But the point is that we cannot hope to do this for crypto. We can't completely derandomize a cryptographic scheme, because, again, deterministic cryptographic schemes are insecure.

So what do we know how to do? We do know how to at least partially solve this problem in certain specific cases, in particular for public-key encryption. The reason I say we only know how to solve it partially there is that what we basically know how to do is shift the errors to key generation, meaning that for most keys we won't have decryption errors. In general, though, this is a problem we don't know how to solve.

So let me tell you what we do in this work. We show a simple transformation that can eliminate errors from a large class of cryptographic schemes. Roughly speaking, we do this by combining two types of pseudo-randomness: the Nisan-Wigderson type of pseudo-randomness from complexity, together with the cryptographic pseudo-randomness that we all know and have been using for a long time, ever since it was introduced by Blum-Micali and Yao.

OK, so let me be a bit more precise about what we do. We show that you can take a cryptographic scheme that, for any given input, may err on some random coins, but is correct most of the time, for most random coins, and turn it into one that is perfectly correct. But we cannot do this for every cryptographic scheme.
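To pin the target down, here is one way to write the two correctness notions for public-key encryption; this is a hedged formalization of the informal statement above, and the exact quantifiers and parameters in the paper may differ:

```latex
% Starting point (statistical correctness): for every message m, decryption
% errors occur only with small probability over the coins r_g of key
% generation and r_e of encryption:
\Pr_{r_g,\, r_e}\Big[\, \mathrm{Dec}_{sk}\big(\mathrm{Enc}_{pk}(m;\, r_e)\big) \neq m
  \;:\; (pk, sk) \leftarrow \mathrm{Gen}(1^{\lambda};\, r_g) \,\Big] \;\le\; \varepsilon(\lambda).
% Goal (perfect correctness): the same probability is 0, i.e., decryption
% succeeds for every choice of the coins r_g and r_e.
```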
We do it for a certain class of cryptographic schemes, those that are secure under parallel repetition, which roughly means that if you run the scheme many times, self-composed, it remains secure. This is automatic for public-key encryption, for obfuscation, and in many other cases, but it's still a feature you have to require: if you look at multi-party computation, sometimes it will hold and sometimes it won't. So this is a condition that we'll need.

And what are the underlying assumptions? From the cryptographic side, a simple assumption: one-way functions. And we also have an assumption from the complexity side, of the Nisan-Wigderson derandomization type, which is basically that there are computations you can solve in uniform exponential time, but that sub-exponential-size circuits, even nondeterministic ones, cannot solve. It basically says that nondeterminism and non-uniformity don't significantly speed up computation. It's a mild complexity assumption; it's essentially the assumption used to derandomize AM, the randomized analogue of NP.

I want to mention one corollary of this transformation. In certain settings, for example public-key encryption and obfuscation, we can even deal with a situation where there are errors on many of the inputs, not just over the random coins. Known transformations for this usually result in schemes that are correct for any input but still err over the randomness of the scheme; we can now get rid of that and obtain perfectly correct public-key encryption or obfuscation.

OK, so for the rest of this talk, what I'd like to do is tell you a little about the basic idea behind this transformation. For that, I need to tell you something about the Nisan-Wigderson type of pseudo-random generator that I mentioned. Here the goal is the following: you have a randomized algorithm, and you want to make it deterministic. The way you do this is by using a specialized pseudo-random generator, a Nisan-Wigderson pseudo-random generator, which has the very nice property that its seed is extremely short. How short? Logarithmic. And if you have a logarithmic seed, then the way you derandomize is simply to enumerate over all seeds. This is basically how these generators work.

Now, of course, as I've described it, this doesn't make sense against algorithms that run in arbitrary time. It's very different from the way we think about pseudo-randomness in the cryptographic setting, where we want a single fixed pseudo-random generator to fool arbitrary polynomial-size adversaries. Here that doesn't make sense: the adversary might be able to run the pseudo-random generator itself, because the seed is so short. So the right picture is this: in this setting, we think of a fixed-time algorithm whose running time we know ahead of time, and we design the pseudo-random generator accordingly. In particular, the pseudo-random generator may run for longer than the adversary; it swallows the adversary's running time. We also allow some slack in the indistinguishability: it doesn't have to be negligible; you can think of 3/4 for the sake of this talk. So these are Nisan-Wigderson pseudo-random generators. They can be constructed even under worst-case assumptions of the type I just mentioned, and they've been used to derandomize algorithms in general.
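As an illustration of this enumerate-and-vote paradigm, here is a minimal sketch in Python. The generator is a toy stand-in (SHA-256 used as a deterministic expander) purely so the sketch runs; a real Nisan-Wigderson generator is built from a hard function, and `toy_nw_prg`, `derandomize`, and all parameters here are assumptions of this sketch, not anything from the talk or paper.

```python
import hashlib
from collections import Counter

SEED_BITS = 8  # a "logarithmic" seed: only 2**8 = 256 strings to enumerate

def toy_nw_prg(seed: int, out_bytes: int) -> bytes:
    """Toy stand-in for a Nisan-Wigderson generator (NOT a real one).
    A real NW PRG is built from a hard function; SHA-256 is used here
    only so the sketch runs."""
    out = b""
    block = 0
    while len(out) < out_bytes:
        out += hashlib.sha256(bytes([seed, block])).digest()
        block += 1
    return out[:out_bytes]

def derandomize(randomized_alg, x, coin_bytes):
    """The classic paradigm: run the algorithm on every pseudo-random
    coin string and output the majority answer."""
    votes = Counter(randomized_alg(x, toy_nw_prg(s, coin_bytes))
                    for s in range(2 ** SEED_BITS))
    return votes.most_common(1)[0][0]

# Example: an algorithm that errs whenever its first coin byte is 0,
# i.e. on roughly a 1/256 fraction of coin strings; the vote fixes it.
if __name__ == "__main__":
    noisy = lambda x, coins: x if coins[0] != 0 else 1 - x
    print(derandomize(noisy, 1, 16))  # prints 1
```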
But they've also been used in cryptography, not for correcting errors, but for removing interaction. Barak, Ong, and Vadhan showed that in certain cases, like ZAPs or commitments, you can use these objects to shave one message off the protocol. These applications actually need a slightly stronger assumption than usual, which is also the assumption that we use: the generator has to fool not only deterministic algorithms, but also non-deterministic ones, which are allowed a non-deterministic guess. We'll see later exactly where this comes in.

OK, so we have this object, and we'd like to use it to correct errors in cryptographic schemes. Let's again think of our running example, public-key encryption. Here we have the key generation and encryption algorithms, which are randomized; each has its own randomness. You generate keys, you encrypt a message, which we'll think of as the input, and then you decrypt. Now we can view this entire pipeline as one randomized algorithm over the randomness of key generation and encryption; it runs in fixed time once you fix the size of the message. So if it makes decryption errors, maybe we can just derandomize it, and then it would always be correct. But, repeating myself, while this would be correct, it won't be great in terms of security; it would be insecure.

So how are we going to use this? The basic idea is the following: we're going to decouple security and correctness. We'll show that you can split the randomness of this randomized algorithm into two parts; security is going to come from one part of the random string, correctness from the other, and we're simply going to XOR them together. Once we have that, we'll generate the randomness for the second part using the Nisan-Wigderson pseudo-random generator, and the part in charge of security we'll generate using cryptography.

Before I tell you exactly how we do this, here's the high-level picture. Once we have a scheme where we've managed to separate the randomness in this way, we follow the usual paradigm of derandomization: we enumerate over all of the Nisan-Wigderson pseudo-random strings, looking at each and every possible seed. Remember, there are only polynomially many, because the seed is logarithmic; this should already take care of correctness. And for the second part, the cryptographic part, we independently generate fresh cryptographic random strings in each of these instances. Then at the end, of course, you just take the majority; this is the usual thing you do in derandomization. Most of these runs will be correct, and we'll get the correct result. Notice that because we're applying the scheme over and over again, as many times as there are seeds, this is where security under parallel repetition is needed; again, for encryption, this is basically for free. Here's a schematic sketch of this pipeline before we look at the parts more closely.
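A schematic sketch of the pipeline just described, under the same caveats: all names and parameters here are mine, both generators are passed in as placeholders, and the real transformation builds the enumeration into the scheme's own algorithms rather than wrapping the whole pipeline.

```python
import os
from collections import Counter

NW_SEED_BITS = 8        # logarithmic seed, so enumeration is polynomial
CRYPTO_SEED_BYTES = 16  # seed length for the cryptographic PRG

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def corrected_run(run_scheme, message, coin_bytes, nw_prg, crypto_prg):
    """run_scheme(message, coins) is assumed to do keygen + encrypt +
    decrypt with all of its coins taken from `coins`, returning the
    decrypted message; nw_prg and crypto_prg are the two generators."""
    votes = Counter()
    for nw_seed in range(2 ** NW_SEED_BITS):
        # Correctness part: one NW string per enumerated seed.
        shift = nw_prg(nw_seed, coin_bytes)
        # Security part: a fresh cryptographic seed for every instance.
        crypto = crypto_prg(os.urandom(CRYPTO_SEED_BYTES), coin_bytes)
        votes[run_scheme(message, xor(crypto, shift))] += 1
    # Majority vote over all NW seeds, as in classic derandomization.
    return votes.most_common(1)[0][0]
```

The fresh cryptographic seed drawn in every iteration is exactly why security under parallel repetition is needed: the adversary effectively sees polynomially many independent instances of the scheme.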
OK, so let's look a bit more closely and try to understand the different parts. What do we want from these strings? From the string in charge of correctness, what we require is that most of the time, for most choices of this string, we get perfect correctness: regardless of how you choose the cryptographic string, as long as you choose it properly, we're promised perfect correctness. That's what we want from the correctness string. What do we want from the security string? We want that for any fixed correctness string (we know exactly how it's generated; it's just a Nisan-Wigderson string), we have security when we sample R2 at random according to the cryptographic process that I still need to describe. So this is what we want from the two strings; let me tell you how we get them.

Let's start with security, because it's simple, at least as far as the transformation itself. To generate the string in charge of security, we generate randomness for each of the algorithms separately; here, the encryption algorithm and the key generation algorithm. But we're not just going to sample these at random; you can convince yourself that if you did that, you wouldn't be able to ensure correctness. Instead, each of the strings for the cryptographic algorithms is generated using a cryptographic pseudo-random generator, the usual one, the one you're used to thinking about. All we'll need is that its image is sparse enough, that is, that it's expanding enough; we'll see in a second where that comes into the picture. At least in terms of security, it's clear that this is OK: pseudo-random strings are as good as truly random strings, so our cryptographic schemes are using kosher randomness, and everything is secure, as we expect.

So let's now try to understand where correctness comes from: why is it that for most strings R1 we get perfect correctness? Here's what we want to do. Think about the randomness space of the scheme we care about, in this case the public-key encryption scheme. We know it has a set of bad randomness, and we said it's not too large, less than half. Actually, I'm going to assume that it's really tiny, negligibly small; you can think of it as exponentially small. This is without loss of generality: we can amplify here by repetition. So we have this tiny set of bad randomness, the random strings that would lead to decryption errors.

Now, if we look at the first part of our string, the one generated with the cryptographic pseudo-random generator, it already seems like we're getting somewhere, in the sense that the image of the pseudo-random generator is also pretty small: the generator is very expanding, so its image is very sparse. So we can at least hope that the two sets can live in the same space without colliding. But of course, in the worst case, these pseudo-random strings could intersect our bad random strings. So what we do is simply shift them at random. This is the second part of the string, the part in charge of correctness, and all it does is shift the pseudo-random strings (equivalently, you can think of it as shifting the bad set) by a random offset.
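In symbols, the union bound behind this shift looks as follows (my notation: B is the bad set, G the cryptographic PRG with seed length d and n-bit outputs, z the uniformly random shift):

```latex
\Pr_{z \leftarrow \{0,1\}^{n}}\Big[\, \big(\mathrm{Im}(G) \oplus z\big) \cap B \neq \emptyset \,\Big]
  \;\le\; \sum_{y \in \mathrm{Im}(G)} \Pr_{z}\big[\, y \oplus z \in B \,\big]
  \;\le\; 2^{d} \cdot \frac{|B|}{2^{n}} .
```

So once the error is amplified so that the density of the bad set is below, say, 2 to the minus (d plus lambda), all but a 2 to the minus lambda fraction of shifts miss the bad set entirely, and any such shift gives perfect correctness no matter which pseudo-random string the cryptographic part produces.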
So, by this union bound, with very high probability the shifted set does not intersect the bad set of randomness at all, and we get perfect correctness for most shift strings. So this is the basic idea. It's actually a known idea, known as reverse randomization: this trick of shifting your randomness space. It goes back to Lautemann's proof that BPP is in the polynomial hierarchy, and it has since had many cryptographic applications. You might notice that these applications have something in common: this is a trick that, for some reason, Moni Naor used again and again and again. It almost felt like we needed to call him and ask if it's OK to use the trick here.

OK, so that was one remark: this is a pretty useful idea. The second remark is that I didn't yet tell you where we use pseudo-randomness against non-deterministic algorithms. We said that we need our Nisan-Wigderson pseudo-random generator to also fool non-deterministic algorithms, and the reason is the following. Testing that the string in charge of correctness is actually good, namely that the shifted pseudo-random strings don't intersect the bad set, is something we can do non-deterministically but not deterministically: to witness an intersection you need a seed for the cryptographic pseudo-random generator, and that seed is something you can guess, but not find efficiently and deterministically. That's where the non-determinism comes from; there's a toy sketch of this test below, after the questions.

OK, so let me end with a recap of the construction, which is pretty simple. At the end of the day, you take your random string and split it into two parts. You generate one part with cryptographic pseudo-randomness and the other with the Nisan-Wigderson pseudo-randomness from complexity-theoretic derandomization, you enumerate over the seeds, and you take the majority. That's the entire transformation. So that's it. Thank you.

OK, thanks for the talk. We have some time for questions. Can you explain why you can't use Naor's trick directly, just using a short seed for the PRG, and assuming that you will probably miss all the bad sets? What about the random string there?

So if you use Naor's trick, what you're stuck with, you can think of it as an intermediate model: you have a scheme where, if you can sample one random string, this shift, that gives you perfect correctness, then you're fine. But now, where does this random string come from? One thing you can do is fix it non-uniformly, and you'll get a non-uniform scheme. What we do is de-randomize exactly that.
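Here is the toy sketch of the badness test promised above; the names are mine, and the point is only the existential quantifier over seeds:

```python
from itertools import product

def all_seeds(bits: int):
    """Every `bits`-bit seed as a tuple of bits; there are 2**bits of them."""
    return product((0, 1), repeat=bits)

def shift_is_bad(z, crypto_prg, seed_bits, bad_set):
    # The shift z is bad iff SOME seed s makes crypto_prg(s) XOR z land
    # in the bad set: an NP-style statement whose witness is the seed s.
    # A nondeterministic distinguisher just guesses s; a deterministic
    # tester is stuck with this exponential loop.
    return any(
        tuple(a ^ b for a, b in zip(crypto_prg(s), z)) in bad_set
        for s in all_seeds(seed_bits)
    )
```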