 that the power of a real-world adversary is no more than the power of an ideal-world adversary, a polynomial-time ideal-world adversary. That means that whatever the adversary learns in the real world, the ideal-world adversary, via this simulator, learns just as much. In the new security definition, we are saying: no, we are not restricting the ideal-world adversary to be polynomial time. Instead, whatever a real-world adversary learns, a super-polynomial ideal-world adversary learns just as much. And consider a situation where, for some functionality, the honest party's input is information-theoretically hidden in the ideal world. In such a scenario the ideal-world adversary learns nothing at all, even though it is running in super-polynomial time, and so the real-world adversary also learns nothing. So this is just one motivating example, but even for other cryptographic functionalities, we might be willing to accept some security loss compared to what we get with polynomial-time simulation. That is what we are looking at. The other point is that we already regard reductions to sub-exponential assumptions as being pretty standard, so this is just the analogue, moving from the reduction world to the simulation world. And another motivating point is that, as you will see in this talk and might have seen earlier, this lets us overcome several barriers that we face when we restrict ourselves to just polynomial-time simulation. So, I hope I have convinced you of the meaningfulness of this model. Before we jump ahead to the results, let me briefly recall what a witness-indistinguishable argument is. It is again similar to a general argument system: we have a prover and a verifier, but the security requirement against a malicious verifier is slightly different.
So, the verifier first picks some statement x in the language and two witnesses, w0 and w1, each of which can individually, independently certify that x is in fact in the language, and gives the whole tuple to the prover. The prover picks one bit b, and then just runs the argument using that particular witness, either w0 or w1. The security goal is that at the end of this protocol, the verifier should not learn which of the two witnesses was used; which witness was used is hidden. So, just to recap: in a zero-knowledge argument system, the verifier learns nothing at all about the witness, whereas in a WI argument, the verifier just doesn't learn which particular witness was used. An important property to note here is that there is no simulation at all; it is a purely indistinguishability-based notion. So, now I'll move on to our results. In the first result we show that we can, in fact, achieve two-round SPS zero-knowledge from any of these sub-exponentially secure assumptions: DDH, quadratic residuosity, or N-th residuosity. As you can see, these assumptions are quite general and standard, and previously such protocols were only known from either trapdoor permutations or bilinear maps. As a corollary, you can see that we also get two-round witness indistinguishability in the standard model from the same set of assumptions, precisely because there is no notion of simulation at all in WI. Once again, this was previously only known from trapdoor permutations or bilinear maps. At this point, I'd like to mention that, independently, another work by Jain and others achieved the same result for WI, but they had some other results which are not similar to ours. So, this is the first result. And now, let's move on to the more general setting of secure computation for all functionalities.
And here, we achieve a similar result: two-round two-party computation, where only one party gets the output, from the same sub-exponential assumptions. And of course, if you want both parties to get the output, this directly extends to a three-round protocol. As a corollary, you can also note that our protocol in fact achieves concurrent two-party computation in the fixed-role setting, where the adversary has to be either the receiver in all sessions or the sender in all sessions. I'm not going to get into the details of this; very briefly, several executions of the same protocol can happen concurrently in this way. As another contribution, we achieve two-round oblivious transfer: we give a simpler construction of it from the N-th residuosity assumption, and we also give a new construction of two-round oblivious transfer from DDH. Looking ahead, this two-round OT will be the central tool in all our protocols, so we just give new ways of achieving it. One caveat is that, compared to the standard notion of two-round OT, there is some weakening of the security requirement, which I'll get into later. In fact, our two-round OT protocol was subsequently used, along with several other techniques, to construct two-round non-malleable commitments in a recent breakthrough. So let me give you a brief insight into how we achieve our results. I'm just going to focus on the zero-knowledge protocol in this talk. Let's first recall Blum's Hamiltonicity protocol. Say there is a prover, with some statement x and witness w, and a verifier. Initially the prover sends some string a; it's not important exactly what this string is, but if you recall, it is just a set of commitments. The verifier picks some challenge bit b, and the prover opens some of these commitments in some way.
So it's not important to us what exactly these commitments are and how they are opened; just think of it as an abstract three-round protocol, any standard sigma protocol where the prover initially sends some string a, the verifier picks a challenge bit b, and the prover responds with one of two possible answers, c0 or c1. As you recall, such a three-round protocol only gives you a constant soundness error, but that is not our main concern right now. The natural question is: this protocol is, of course, three rounds, so how do we squash it down to just two rounds? And also, recall that it only has constant soundness error, so we need to boost the soundness as well. Let's ignore the soundness issue for now and just think about how to squash this protocol into a two-round protocol. Interestingly, what do we want? We want the first-round message a to not be sent on its own by the prover; if we could magically eliminate the first round, we'd be done. Looking ahead, this is where the notion of oblivious transfer comes into the picture. So let's recall the notion of oblivious transfer. There is a sender who has two inputs, m0 and m1, and a receiver who has a bit b. We want a two-round oblivious transfer: the receiver sends some message, some function of his bit b, the sender sends back some message, and at the end the receiver learns the output m_b. The security requirement is that the sender should not learn the bit b, and the receiver should not learn the other message, m_(1-b). Again, the weakening I was telling you about is that instead of requiring a simulation-based security notion against a malicious sender, we just require that the sender cannot distinguish whether the receiver's bit is 0 or 1. This is not important for the rest of the talk; you can just imagine that this is a standard two-round oblivious transfer.
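As a concrete illustration, here is a minimal sketch of a two-round OT in the spirit of the classic Bellare-Micali Diffie-Hellman-based construction. This is a hypothetical toy stand-in over a tiny group, not the construction from the paper: the receiver's message is a single group element that statistically hides b, and the sender replies with two ElGamal ciphertexts.

```python
# Minimal 2-round OT sketch (Bellare-Micali style, toy parameters).
# Receiver input: bit b.  Sender inputs: m0, m1.  Receiver learns only m_b.
import random

P, Q, G = 1019, 509, 4      # safe prime P = 2Q+1, generator G of order Q

# Public CRS element whose discrete log nobody is assumed to know.
C = pow(G, random.randrange(1, Q), P)

def receiver_msg(b):
    k = random.randrange(1, Q)
    pk_b = pow(G, k, P)                     # key the receiver can decrypt under
    pk0 = pk_b if b == 0 else C * pow(pk_b, -1, P) % P
    return pk0, k                           # pk0 is sent; k stays private

def sender_msg(m0, m1, pk0):
    pk1 = C * pow(pk0, -1, P) % P           # pk0 * pk1 = C, so the receiver
    cts = []                                # knows at most one secret key
    for m, pk in ((m0, pk0), (m1, pk1)):
        r = random.randrange(1, Q)
        cts.append((pow(G, r, P), (m + 1) * pow(pk, r, P) % P))
    return cts

def receiver_output(cts, b, k):
    u, v = cts[b]
    return v * pow(u, -k, P) % P - 1        # ElGamal decryption

b = 1
pk0, k = receiver_msg(b)
cts = sender_msg(7, 42, pk0)
assert receiver_output(cts, b, k) == 42
```

Here pk0 is uniform regardless of b, so the sender learns nothing about the bit, while the receiver can only decrypt the ciphertext under the key whose discrete log it chose.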
And this primitive can be constructed based on exactly the assumptions that we eventually base our protocols on, that is, DDH, quadratic residuosity and N-th residuosity; each gives a two-round oblivious transfer protocol. So now we have a two-round oblivious transfer protocol, and we have the three-round zero-knowledge protocol that we initially wanted to start with. So how do we get rid of the first round? The idea is that the verifier is not going to send his bit b in the clear. Instead, this bit is going to be encrypted, or rather hidden, via the oblivious transfer. What I mean is that the verifier, as before, picks a random challenge bit b, but instead of sending b in the clear, he sends it via the OT protocol. So intuitively, you can think of this bit as now being hidden from the prover's point of view. So now what does this mean? The prover can now sample the original first-round message a after receiving the verifier's message, and send it across along with c0 and c1, as before. But instead of sending c0 and c1 in the clear, because he only wants to send one of them and he doesn't know which one to send, he just again hides them via the oblivious transfer. In addition, in order to make things work in the proof, we are also going to make the prover send commitments to each of these two strings, c0 and c1. This is a standard non-interactive commitment, or you can think of it as a two-round commitment. So let's step back and see how completeness would work in this setting. The verifier is first going to compute the value c_b by running the OT protocol. Then, once he knows c_b, he verifies that this is in fact what is committed inside the corresponding commitment, and then runs the original Blum Hamiltonicity verification. Now you might stop and wonder: how does the verifier verify the commitment? Think of the committed message and the randomness as being part of the OT messages.
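To make the squashing concrete, here is a toy end-to-end sketch. A Schnorr-style sigma protocol for knowledge of a discrete log, with a one-bit challenge, stands in for Blum's protocol, and the same Bellare-Micali-style toy OT stands in for the paper's OT; both are hypothetical stand-ins, and the extra commitments to c0 and c1 from the talk are omitted for brevity.

```python
# Toy sketch: squashing a 3-round sigma protocol into 2 rounds via 2-round OT.
# Stand-ins: a Schnorr-style protocol with a 1-bit challenge (soundness error
# 1/2) for Blum's protocol, and a Bellare-Micali-style OT over a tiny group.
import random

P, Q, G = 1019, 509, 4          # safe prime P = 2Q+1, generator G of order Q
C = pow(G, random.randrange(1, Q), P)   # CRS element, discrete log unknown

def ot_round1(b):
    """Verifier hides its challenge bit b inside the OT receiver message."""
    k = random.randrange(1, Q)
    pk_b = pow(G, k, P)
    pk0 = pk_b if b == 0 else C * pow(pk_b, -1, P) % P
    return pk0, k

def ot_round2(m0, m1, pk0):
    """Prover ElGamal-encrypts each candidate response under the right key."""
    pk1 = C * pow(pk0, -1, P) % P
    cts = []
    for m, pk in ((m0, pk0), (m1, pk1)):
        r = random.randrange(1, Q)
        cts.append((pow(G, r, P), (m + 1) * pow(pk, r, P) % P))
    return cts

def ot_output(cts, b, k):
    u, v = cts[b]
    return v * pow(u, -k, P) % P - 1

# Prover knows w with X = G^w (the statement).
w = 123
X = pow(G, w, P)

# Round 1 (verifier): a single OT message hiding the challenge bit.
b = random.randrange(2)
ot_msg, k = ot_round1(b)

# Round 2 (prover): sample the sigma-protocol first message `a` only now,
# prepare BOTH possible responses c0, c1, and return them via the OT answer.
r = random.randrange(Q)
a = pow(G, r, P)
c0, c1 = r, (r + w) % Q
cts = ot_round2(c0, c1, ot_msg)

# Verifier: recover c_b via the OT and run the usual sigma-protocol check.
cb = ot_output(cts, b, k)
assert pow(G, cb, P) == a * pow(X, b, P) % P
```

Note how the prover is free to choose a after seeing the verifier's message, which is exactly what eliminates the extra round: the first and third sigma-protocol messages travel together in the prover's single reply.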
So the prover sends both the message being committed to and the randomness used for the commitment via the OT; I'm just going to skip that detail. So completeness seems to be guaranteed; to get soundness and zero-knowledge, let's go ahead and see. Before we get to the proof, note that we have two primitives here, the oblivious transfer and the commitment. Let's say that the receiver's oblivious transfer message is breakable in time T1, where T1 is some parameter, and secure against all adversaries running in time less than T1. That means that, just given the OT message, no adversary can recover the bit b inside it in time less than T1, but it can be broken in time T1. Similarly, let's say that the commitment is secure against all adversaries running in time less than T and breakable in time T. Now, what are these values T and T1? Let's go ahead and see what we should set them to in the security proof. First, let's see how the soundness proof works. We have a malicious prover, and the verifier wants to ensure that he cannot be convinced of a false statement. Intuitively, we want to rely on the hiding of the verifier's message, which is the OT message. So at some point we reduce to the security of the OT receiver's message: say there is a challenger that gives an OT message, hiding some bit b*, to our reduction, and the goal of the reduction, interacting with the challenger, is to find this bit b*. The reduction just forwards this message directly to the malicious prover and gets back the prover's second-round message, and now the claim is: if the prover convinces the verifier of a false statement, then the reduction can recover b*, and this would give us a contradiction. So what does the reduction do? It breaks these two commitments by a brute-force attack; recall that it is allowed to run in time T.
So it gets c0 and c1 by breaking the commitments, and then it checks which of the two transcripts verifies. Recall that since the statement x is not in the language, it cannot be the case that both transcripts verify simultaneously; that would mean the prover can answer both challenges, which is impossible for a false statement. It can only be the case that the prover magically guessed what b* was and said, hey, the verifier is going to ask me either challenge 0 or challenge 1, and I'm going to program my values c0 and c1 such that I only care about c_(b*); I don't care about the other value, because the verifier is never going to see it. So exactly one of the two transcripts verifies, and that gives the reduction a way to learn b*. So if the prover did manage to cheat, the reduction recovers b*. In order to arrive at a contradiction, notice that the running time of the reduction, which is essentially T, the time to break the commitments, should be less than T1, the security of the OT receiver's message. So let's see what happens when we try to argue zero-knowledge. Recall that the goal is to construct a simulator, a super-polynomial simulator, that does not know the witness and still should be able to simulate the view. How does the simulator work? It's very similar to how we argued soundness. The simulator, on receiving the adversarial verifier's message, just breaks the OT message by a brute-force attack. He learns the bit b, and now he knows whether he has to simulate c0 or c1. He computes that one correctly, which he can do without knowing the witness once the challenge is known, and he picks the other one at random, because he knows that the verifier can never learn the other one anyway. Now our goal is to show that the real-world view is indistinguishable from the ideal-world view. Notice that we would go through a series of hybrids, and at some point in our security proof we would have a reduction to the hiding of this commitment.
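Going back to the soundness reduction: its bit-extraction step can be sketched as follows. A sha256 commitment with tiny randomness plays the role of the brute-forceable commitment, and a Schnorr-style check with a one-bit challenge plays the role of the sigma-protocol verification; all parameters and the cheating strategy are hypothetical toy stand-ins.

```python
# Toy sketch of the soundness reduction: brute-force both commitments to
# learn c0, c1, then see which transcript verifies, which reveals the
# challenge bit b* that the cheating prover guessed.
import hashlib, random

P, Q, G = 1019, 509, 4          # safe prime, subgroup order, generator
RAND_BITS = 8                   # tiny randomness space => breakable in "time T"

def commit(m, r):
    return hashlib.sha256(f"{m}|{r}".encode()).hexdigest()

def brute_open(com):
    """The 'time T' attack: enumerate all (message, randomness) pairs."""
    for m in range(Q):
        for r in range(2 ** RAND_BITS):
            if commit(m, r) == com:
                return m
    return None

# A false statement: X is a uniform group element with X != 1; the prover
# knows no discrete log of it.
X = pow(G, random.randrange(1, Q), P)

# A cheating prover who magically guessed the challenge bit b_star:
# pick z first, then set a = G^z * X^(-b_star), so only the transcript
# for challenge b_star verifies.
b_star = random.randrange(2)
z = random.randrange(Q)
a = pow(G, z, P) * pow(X, -b_star, P) % P
responses = [z, z]              # only the slot for b_star will verify
coms = [commit(m, random.randrange(2 ** RAND_BITS)) for m in responses]

# Reduction: break both commitments, check which transcript verifies.
c0, c1 = brute_open(coms[0]), brute_open(coms[1])
ok = [pow(G, c0, P) == a,            # transcript (a, 0, c0)
      pow(G, c1, P) == a * X % P]    # transcript (a, 1, c1)
extracted = ok.index(True)           # exactly one side verifies
assert extracted == b_star
```

Since exactly one transcript can verify for a false statement, seeing which one does hands the reduction the bit b* that was supposed to be hidden inside the OT message.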
So in the real world, c_(1-b) is committed honestly, whereas in the ideal world this value c_(1-b) is picked as a uniformly random string. So we need this commitment to hide what c_(1-b) is in order for security to go through: at some point in the proof we want to rely on the hiding of this commitment. How would that work? The reduction would receive a message from the adversarial verifier and break it; this takes time T1. Then it would talk to an external challenger and say: here is an honest c_(1-b) and here is a random string; give me a commitment to one of them. It gets back the commitment from the challenger, embeds it, and runs the rest of the protocol as before. And the rest of the reduction is quite simple: whatever the distinguisher guesses, whether the commitment is to the honest value or the random one, the reduction makes the same guess. So why is this a contradiction? Notice that this is only a contradiction under two conditions: first, the distinguisher should guess correctly, and second, the running time of the reduction should be less than the time against which the commitment is secure. That means that the running time of the reduction, which is at least T1 just to break the OT message, should be less than T, the value which denotes the security of the commitment. Now, recall that these two seem to give us a contradictory pair of requirements, because for soundness we needed T less than T1, whereas for zero-knowledge we need T greater than T1, and this seems to be a contradiction. I don't have too much time to get into how exactly we solve this, but here is a very brief sneak peek into the proof. We are not going to follow this kind of proof strategy to achieve zero-knowledge, because clearly we cannot have both T less than T1 and T greater than T1.
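The clash between the two reductions can be summarized in one line, writing T for the time needed to break the commitment and T1 for the time needed to break the OT receiver's message, as above:

```latex
% T  : time to brute-force the commitment
% T1 : time to brute-force the OT receiver's message
\begin{align*}
\text{soundness reduction (runs in time } \approx T \text{, attacks the OT):} \quad & T < T_1,\\
\text{naive ZK reduction (runs in time } \approx T_1 \text{, attacks the commitment):} \quad & T_1 < T,
\end{align*}
```

and these two constraints obviously cannot hold simultaneously.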
So we're just going to keep the soundness proof as before, and we use a new proof strategy to get zero-knowledge. The idea is that we now have a non-uniform reduction to the commitment scheme. What do I mean? Recall that we said the simulator is going to break this OT message to learn the bit b. Instead, in the reduction, we are going to assume that the reduction magically gets this value b: it can be given as an auxiliary input, as non-uniform advice, while running the reduction. The rest of the proof goes through as before. And now notice that the reduction's runtime is only polynomial, because it does not have to break the OT message; the bit b is magically given to it, and that can be justified by a non-uniform fixing argument which I'm not going to get into. So this is just a polynomial-time reduction, and we only need the commitment scheme to be secure against polynomial-time adversaries with this advice; we just choose the value T to be slightly larger than any polynomial. Some of the other challenges that we face: how do you boost soundness? Typically we know that for zero-knowledge, parallel repetition does not work, but it turns out that in the case of SPS zero-knowledge, parallel repetition does work. More specifically, I also sort of skipped a detail in the proof of soundness, where we assumed that we have a standard oblivious transfer protocol; recall that we only have an oblivious transfer with indistinguishability-based security, a weak notion of OT, and that creates some issues in the proof of soundness. I encourage you to look at the paper for the proof of soundness for the full protocol. These are the main points that I have. You said that your result is incomparable to Jain et al.'s result? Yes, they are not comparable. Could you explain briefly why?
So the answer is that they focused on the new notion of distinguisher-dependent simulation: they get three-round weak zero-knowledge in the distinguisher-dependent simulation setting, which is again incomparable to SPS, and as a corollary they also get the two-round witness indistinguishability result I mentioned, but their main focus is three-round weak zero-knowledge. So weak zero-knowledge with distinguisher-dependent simulation and super-polynomial simulation, I think, are incomparable. Achieving protocols in the distinguisher-dependent simulation setting is, I think, more difficult, but as you can see, our protocol is much simpler, so maybe it is more efficient; in terms of security, though, I think the notions are probably incomparable. Thank you. Any other comments and questions? We have time. If there are no questions or comments, then it is time for the next speaker.