So I'm going to talk about simultaneous amplification, the case of non-interactive zero knowledge. And this is joint work with Vipul and Amit. So in this work, as you can guess from the title, we consider a very basic question. The question is: suppose I give you a non-interactive zero-knowledge scheme which is not fully secure, by which I mean it doesn't have the full soundness and full zero-knowledge properties. Then we construct a NIZK argument system which satisfies full security. So let me define the problem formally and show you how we do it. Let's recall what non-interactive zero knowledge is. In a non-interactive zero-knowledge argument system, there is a trusted party which outputs a common reference string, or a random string if you will. Then any prover can take an instance and a witness for that instance, along with the random string, and output a proof to convince a verifier of this fact. And it has two security notions associated with it. The first is the soundness notion, which is associated with the following experiment. The trusted party gives out a randomly sampled CRS. Then there is a malicious prover, think of it as a polynomial-time malicious prover, who wants to convince the verifier of a statement which is not true, with a proof that verifies with respect to that CRS. We say that a NIZK argument system is delta_s-sound if the maximum probability with which this malicious prover can convince the verifier with such a false proof is bounded by delta_s. I will refer to this delta_s throughout the talk as the soundness error. Likewise, there is also the zero-knowledge property, in which there is a simulator. I will denote it with Mysterio. What it does is take an instance x, and it can fake a CRS and a proof, give them out to the verifier, and the verifier can then verify this fake CRS and proof.
The zero-knowledge property says, roughly, that for any polynomial-time cheating verifier, the probability with which it can distinguish between a fake CRS and proof generated by the simulator and an honestly generated CRS and proof is bounded by delta_z. I will refer to this delta_z as the zero-knowledge error. Okay, so now let me discuss the relation between these two parameters. If you look at standard NIZK arguments, we have this property satisfied: take your favorite NIZK argument that you know, from LWE or bilinear maps or even trapdoor permutations, and it holds that delta_s plus delta_z is negligible, because they are fully secure. On the other extreme, there's a setting which you can construct trivially, and that is when delta_s plus delta_z is equal to one. I can give you two particular examples. First, take delta_s zero and delta_z one. This you can easily construct by having the prover always output the witness in the clear. Because it outputs the witness in the clear, there is no soundness loss, and you can't expect zero knowledge because the witness is revealed. And there's a complementary construction where delta_s is one and delta_z is zero. In this case it's also very easy to construct a NIZK argument, by just having the prover output zero and having the verifier always accept zero as a valid proof. All right, so what we do in this work is show that if you take a setting which is only very slightly non-trivial, where delta_s plus delta_z is a little bit less than one, say one minus epsilon for an arbitrarily small constant epsilon, and if you're willing to assume public-key encryption with subexponential security, then such a NIZK is enough to imply a fully secure NIZK. Along the way, we also introduce a new object, which we call secret sharing for NP instances, and I will talk about it later in the talk.
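The two trivial extremes mentioned above can be written out concretely. Below is a minimal sketch (my own toy interfaces, not code from the talk) of the two degenerate schemes: one that reveals the witness (delta_s = 0, delta_z = 1), and one that accepts everything (delta_s = 1, delta_z = 0).

```python
# Degenerate scheme A: the prover outputs the witness in the clear.
# Perfectly sound (delta_s = 0), but zero knowledge fails completely
# (delta_z = 1) because the witness is revealed.
def prove_reveal_witness(crs, x, w):
    return ("witness", w)

def verify_reveal_witness(crs, x, proof, relation):
    tag, w = proof
    return tag == "witness" and relation(x, w)

# Degenerate scheme B: the verifier accepts the fixed string "0" for every
# statement. Perfectly zero knowledge (delta_z = 0; a simulator just outputs
# "0" too), but soundness fails completely (delta_s = 1).
def prove_accept_all(crs, x, w):
    return "0"

def verify_accept_all(crs, x, proof):
    return proof == "0"
```

In both cases delta_s + delta_z = 1, which is why the regime delta_s + delta_z < 1 is the first non-trivial setting.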
So first I will describe the overall approach of this work. What we do is define two transformations, and then use those transformations one after another to go from a weak NIZK to a fully secure NIZK. The first transformation is a soundness-amplifying transformation. The goal of this transformation is to improve the soundness of the NIZK. It takes a (delta_s, delta_z) NIZK, assuming subexponentially secure public-key encryption, and it improves the soundness from delta_s to delta_s^n, where n is any polynomial parameter of your choice. So it improves the soundness; however, what we show is that it doesn't kill the zero-knowledge property completely. We show that the zero-knowledge error goes from delta_z to 1 - (1 - delta_z)^n. Don't worry about this at the moment, but this is what it does. The second transformation is the zero-knowledge-amplifying transformation. Its job is to improve the zero-knowledge property, and we show that, analogously to the soundness-amplifying transformation, it improves the zero-knowledge error from delta_z to 2·delta_z^n. So it exponentially improves the zero knowledge. And we also show that it doesn't kill the soundness completely: the soundness error goes from delta_s to 1 - (1 - delta_s)^n. So now let's see why these two transformations are enough. The reason is that you can use them one after another to construct something fully secure. Let me not bore you with the general theorem; let me just give you a concrete example. Say you have delta_s = 0.3 and delta_z = 0.6, so the soundness error plus the zero-knowledge error is actually quite large, it's 0.9. In this setting, what you can do is first apply the soundness amplification with parameter log lambda, and what you get is something like this. Don't worry about the calculations; these parameters are correct, but you get something.
And then you take this NIZK and apply the zero-knowledge amplification with this parameter, and what you get is something constant and negligible. Finally, you can do the soundness amplification again to get something fully secure. So this is why these two transformations are enough for our purpose. All right, so let's now look at these two transformations. The first is soundness amplification, and as you already know, how do you increase soundness? You just repeat in parallel, and this has been studied in many, many works historically. So what we do is take some parameter n and let the CRS generation just sample n independent copies of the CRS of the underlying NIZK. Then, if you want to prove a statement x, what you do is use each copy of the CRS to generate an independent proof for the same statement x. And that's it; that's your proof. We know, time and again, that this improves soundness. The latest version of soundness amplification that directly applies to this kind of parallel repetition is due to Canetti, Halevi, and Steiner, and we also re-prove it in our paper. What they showed is: suppose you have a NIZK argument system which is delta_s-sound, where the soundness holds against adversaries of size s. Then when you repeat it in parallel with n repetitions, what you get at the end is soundness error delta_s^n plus a small additive term n·epsilon, and it holds against adversaries of slightly smaller size, which is Õ(s·epsilon). I am hiding some factors in the Õ, but they are not important. So let me repeat: it takes a delta_s-sound argument, which holds against adversaries of size s. Ideally we would expect delta_s^n, but we don't quite get that.
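To make the two-transformation strategy concrete, here is a small sketch that tracks how the pair (delta_s, delta_z) evolves under the two parameter maps from the talk, starting from the talk's example (0.3, 0.6). The repetition counts are my own illustrative choices (the talk uses log-lambda-style parameters), and the small additive terms and adversary-size loss are deliberately ignored.

```python
# Parameter maps for the two transformations, ignoring the additive
# n*epsilon terms and the loss in adversary size mentioned in the talk.

def soundness_amp(ds, dz, n):
    # soundness: ds -> ds^n ; zero knowledge: dz -> 1 - (1 - dz)^n
    return ds ** n, 1.0 - (1.0 - dz) ** n

def zk_amp(ds, dz, n):
    # soundness: ds -> 1 - (1 - ds)^n ; zero knowledge: dz -> 2 * dz^n
    return 1.0 - (1.0 - ds) ** n, 2.0 * dz ** n

# Concrete example from the talk: delta_s = 0.3, delta_z = 0.6.
ds, dz = 0.3, 0.6
ds, dz = soundness_amp(ds, dz, 20)      # soundness error collapses; ZK error grows toward 1
ds, dz = zk_amp(ds, dz, 1_800_000_000)  # ZK error collapses; soundness error grows a little
ds, dz = soundness_amp(ds, dz, 20)      # final pass drives the soundness error down again
```

After the three passes, both errors end up tiny, which is the point of alternating the transformations: each one repairs the damage the other did, faster than it was inflicted, as long as delta_s + delta_z started below 1.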
What we get is delta_s^n plus an additive term n·epsilon, which also factors in the reduction in the size of the class of adversaries against which the resulting system is sound. Further, in this work we show that if you are willing to assume public-key encryption, then even adaptive soundness is preserved. So this was soundness amplification, and we already know how to deal with it. Let us now focus on the zero-knowledge amplification part; this is new to our work. For zero knowledge, the first idea that comes to mind, which I will show is wrong, is the following. If you want to improve the zero-knowledge property, you would think: let's just compose one proof with another. So let the CRS be two independently sampled CRSs of the underlying NIZK. Prove x ∈ L using CRS1, producing a proof pi_1, and then use CRS2 to prove that this proof pi_1 is valid, and output pi_2. That's the idea. You would expect this to improve zero knowledge, because if the underlying NIZK is delta_z-secure, then on composition you would think you should get something like delta_z squared. And what we show is that this intuition is completely flawed. In fact, there's a very simple counterexample, and it is the following. Let's start with a secure NIZK and build the counterexample as follows. The CRS generation for the counterexample is just the CRS generation of the underlying secure NIZK. Then, in order to prove a statement: if your statement has a specific structure, say it is bot (⊥) appended with some string, then you just output the witness in the clear along with the statement. Otherwise, you sample a bit b. If b comes up 1, then you output a proof generated honestly using the underlying NIZK argument.
Otherwise, if b equals 0, you format the statement with ⊥ and output it along with the witness. You don't have to worry too much about the details, but the point I'm trying to make is that if you use such an argument, there's no hope of amplifying zero knowledge beyond one half, because with probability one half the proof clearly leaks the witness. So we need to develop new technology to handle this issue. What we do is define something called secret sharing for NP instances. Here the goal is that if you have an NP instance and a witness, you secret-share the instance-witness pair (x, w) into shares (x_1, w_1) through (x_n, w_n), such that two security properties are satisfied. The first property is that if x_1 through x_n are an adversarial sharing of x, and they all happen to be in the language, then x should also be in the language. So this holds with respect to adversarial sharings of (x, w). The second property is that if x_1 through x_n were honestly generated shares, and I happen to leak out n minus one of the witnesses, then those witnesses should not leak any information about the membership of x. We can actually construct this in a not very difficult manner. There is this technique called MPC-in-the-head, which was proposed by Ishai, Kushilevitz, Ostrovsky, and Sahai, and we can instantiate it with semi-honest MPC protocols with perfect correctness. This, in a fairly direct manner, gives you the notion, and you can use public-key encryption to implement the commitments it needs. I refer you to the paper for the construction. So now, once you have the sharing, the zero-knowledge amplification construction is very simple. You just have n independent copies of the CRS as part of the CRS, and put this public key for generating commitments into the setup.
Then, to generate a proof, you secret-share your instance x and witness w to generate shares, and for each x_i you use CRS_i to generate a proof. And that's it; that's the output of the compiled argument in the zero-knowledge amplification. So now let's look at this transformation. In the zero-knowledge amplification case, we can prove the following about soundness: the soundness is not completely killed. We show that if we have a delta_s-sound argument system against adversaries of size s, then when you use this transformation, what you get is 1 - (1 - delta_s)^n + O(epsilon) soundness against adversaries of slightly smaller size, namely about s·epsilon. The same phenomenon happens here: we started with delta_s and would expect 1 - (1 - delta_s)^n, but we don't quite get that; we get an additive term, which also factors in the reduction in the size of the adversaries. So this is the main theorem for zero-knowledge amplification: if we started with delta_z zero knowledge, then assuming subexponentially secure public-key encryption, what we get is 2·delta_z^n + O(n·epsilon) zero knowledge against circuits of slightly smaller size. The factor of two comes in because we first argue witness indistinguishability and then argue zero knowledge, so there is a factor-of-two loss there. All right, so let's now look at how you argue the soundness theorems in both cases. The idea was present even before our work: this is the notion of weakly verifiable puzzles. I won't talk about it in detail; you can look at our paper or the Canetti-Halevi-Steiner paper. Let's now look at how to argue the zero-knowledge theorems. For arguing the zero-knowledge theorem, we actually need to rely on the machinery of hardcore sets.
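The compiled prover and verifier described above can be sketched as follows. This is my own skeleton, with the underlying NIZK and the secret sharing for NP instances abstracted as function arguments (`share`, `prove`, `verify`, `check_sharing` are hypothetical interfaces; the real construction additionally uses commitments and is in the paper).

```python
def amplified_prove(crs_list, x, w, share, prove):
    # share(x, w, n) -> [(x_1, w_1), ..., (x_n, w_n)] is the (assumed)
    # secret sharing for NP instances from the talk.
    shares = share(x, w, len(crs_list))
    xs = [xi for xi, _ in shares]
    # One underlying-NIZK proof per share, each under its own CRS_i.
    proofs = [prove(crs_i, xi, wi) for crs_i, (xi, wi) in zip(crs_list, shares)]
    return xs, proofs  # the instance shares travel along with the proofs

def amplified_verify(crs_list, x, xs, proofs, check_sharing, verify):
    # check_sharing enforces the first property of the sharing: if every
    # (possibly adversarial) share x_i is in the language, then x is too.
    if not check_sharing(x, xs):
        return False
    return all(verify(c, xi, p) for c, xi, p in zip(crs_list, xs, proofs))
```

The point of the design is that an honest proof only ever exposes the n witness shares, and by the second property of the sharing, even n - 1 of them reveal nothing about membership of x, which is what the hybrid argument later exploits.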
We rely on a very beautiful theorem by Maurer and Tessaro, who proved an indistinguishability version of the hardcore set theorem, and it's on the next slide. It looks scary, but what it says is actually quite simple. Suppose I have two functions E and F which are gamma-indistinguishable, by which I mean that the distinguishing probability is at most gamma for any adversary of size s, when the inputs x and y are sampled randomly. So suppose E(x) and F(y) are gamma-indistinguishable. Then what you can do is find sets S_0 and S_1, which have high enough density, namely at least 1 minus gamma, such that when you sample the randomness from S_0 and S_1, E and F become epsilon-indistinguishable for adversaries of slightly smaller size, which is about s·epsilon². Let me repeat. The theorem says that if you have two functions E and F which are gamma-indistinguishable against adversaries of size s, where x and y are randomly chosen, then there exist two sets S_0 and S_1, of density at least 1 minus gamma, such that when you sample from them, E and F suddenly become epsilon-indistinguishable, where this indistinguishability holds against adversaries of slightly smaller size. By choosing this epsilon appropriately, you can set the advantage as needed. This is a very powerful theorem, and it is used in almost all amplification results. So here is our zero-knowledge amplification strategy. Unfortunately, I will not have time to talk about the complete details, but here is the first idea. The basic idea is to consider a mental experiment where you are sampling a uniform string. As I told you, since the NIZK is delta_z zero-knowledge, it has a hardcore set, a good set, of density 1 minus delta_z. So a uniform string will lie in the hardcore set with probability 1 minus delta_z, and it will lie in the complement otherwise. We can use this property to construct a hybrid like this.
What you do is sample a bit such that 1 is sampled with probability 1 minus delta_z and 0 is sampled with probability delta_z. Now, when you get a 1, you generate CRS_1 and proof pi_1 using randomness from the hardcore set; otherwise, if you get a 0, you generate them from the complement of the hardcore set. Remember, when you are in the hardcore set you can switch this part to a simulated proof, and your goal would be to somehow simulate at least one of the indices. Then you can get rid of the witness for one of the x_i's, the hybrid will become independent of the witness, and that will allow you to argue security. However, this is a simplified approach; as is, it cannot work, and the reason is the following. If you look at these hardcore sets, they are extremely inefficient to sample from and they have very complex descriptions, so if you sample from them, you can actually completely break the security of the secret sharing scheme. So it seems like we are in some trouble. But fortunately, what we have at our disposal is that the sets have very high density: if a set is 1 minus delta_z dense, where delta_z is like 1 over a polynomial, then it has high density, and then it does not leak too much information about the secret sharing scheme. We can formalize this argument using an idea which was also there in my work with Prabhanjan and Amit on iO, and we use those ideas here to formalize a way to efficiently sample from hardcore sets; I refer you to the paper for details. And with this, finally, let me conclude with some open questions. First, is it possible to get rid of public-key encryption and get an unconditional result? Second, can we construct this amplification with better parameters? Right now our compiler is not very efficient.
Also, there is another annoying thing: we can only handle the case when delta_s plus delta_z is 1 minus some constant. Can we go all the way to 1 minus 1 over the security parameter? And finally, another interesting direction: can we study simultaneous amplification for other cryptographic primitives? With this I would like to conclude, and please ask me questions. Thank you, Aayush. Do we have any questions? If not, let's thank Aayush again.