Okay, so I'll quickly start with secure computation. Let's say we have two parties with secret inputs who want to compute a joint function of those inputs. Then they can run a secure two-party computation protocol, at the end of which both parties get the output. Security says that a corrupted party learns no more than the output. This notion of not learning anything beyond the output is captured by the real world / ideal world simulation-based definition. In the ideal world, the parties do not talk to each other; instead, there is a trusted party, and both parties send their inputs to this trusted party, who computes the output and gives it to both of them. It is clear that in this model any corrupted party cannot learn anything beyond the output, and this is what we want to capture in the real world. So we say that for any real-world adversary attacking the protocol, there exists a corresponding adversary in the ideal world which can launch the same attack. This ideal-world adversary is called the simulator. So what do we know about this setting? We know that every function can be securely computed under standard assumptions. But this setting is too restrictive: it only covers the standalone setting, in which a single execution of the protocol runs in isolation. In reality, the situation is much more complex. There are many, many parties running many, many executions of the protocol, interleaved arbitrarily. To make matters worse, any subset of these parties can be corrupt, and we want to ensure security in this setting. So can we actually extend the beautiful results of the single-execution setting to this highly complex concurrent setting? There has been a long line of work to understand and answer this fundamental question, in various different settings. So before I go any further, it is important to tell you exactly what model we will be working in.
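To make the ideal world concrete, here is a minimal Python sketch (the names and the example function are mine, not from the talk): both parties hand their private inputs to a trusted party, and each side sees only its own output of f, never the other party's input.

```python
def ideal_world(f, x, y):
    """Trusted-party evaluation: each party learns only its own
    output of f; the inputs never cross between the parties."""
    out_a, out_b = f(x, y)
    return out_a, out_b  # party A sees out_a, party B sees out_b

# Toy example: the millionaires' problem -- "is A richer than B?"
richer = lambda x, y: (x > y, x > y)
out_a, out_b = ideal_world(richer, 7, 5)
```

Whatever a corrupted party does in this world, all it ever receives is its own output value, which is exactly the leakage the simulation-based definition permits in the real world.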
So we will focus on concurrent security in the plain model, that is, without any trust assumptions like a CRS, random oracles, et cetera, and using the standard simulation-based security definition. At this point, I would like to point out that there is a long line of work which relaxes either of these two conditions. That is, they either work with trust assumptions such as a CRS, a public-key infrastructure, tamper-proof hardware, et cetera, or they relax the security notion to something weaker, such as input-indistinguishable security, super-polynomial-time simulation, or the multiple ideal query model. I would like to emphasize that we do not work in any of these models; we focus on the plain model with the standard simulation-based security definition. Okay, so what do we know about this model? To begin with, I would like to point out a long list of impossibility results which rule out a large class of functionalities in this concurrent setting. These results are in fact very strong: they exhibit explicit attacks which an adversary can launch in the real world but which are impossible to carry out in the ideal world. So what do we know on the positive side? We only know of positive results for very special functionalities, such as zero knowledge and the single-input setting, and that's it, or for restricted classes such as bounded concurrency. I would like to point out that these positive results are very limited, and moreover, all of them can be obtained using just black-box simulation. So the central goal of this work is to develop new techniques to expand the class of special functionalities which can be realized in the concurrent setting. Towards this goal, let me begin by looking at the special functionality of concurrent blind signatures. Whether this functionality can be realized in the concurrent setting has been open so far.
And a long time back, Lindell showed a black-box impossibility for this functionality, which said that no black-box simulation technique can give you concurrently secure blind signatures. This was cited as a major motivation for relaxed notions of security, such as game-based security, and also for using trust assumptions. It is clear from this impossibility result that any approach to concurrently secure blind signatures has to rely on some kind of non-black-box simulation technique. So the next question is obvious: do we know of any such non-black-box simulation technique in the fully concurrent setting? Until recently, the answer was no. Then in 2013, Goyal gave a new protocol for concurrent zero knowledge with a non-black-box, straight-line simulator. So in this work, we ask the following question: does there exist a non-black-box approach to secure concurrent computation? And since I'm giving this talk, it's obvious that the answer is yes. So the next question is, how powerful is this approach? What all can we do with it? Can we expand the class of realizable functionalities? Can we actually get concurrent blind signatures? And can we get better protocols for the functionalities we already know how to realize? The main technical contribution of our work is the first protocol for concurrent secure computation which has a straight-line, non-black-box simulator, and this technique gives us a host of new results. So let me go back to the case of concurrent blind signatures: for this, we give the first protocol which realizes it in the plain model. Contrast this with the black-box impossibility result which I just told you about. So this, together with the previous result, gives us the first natural example of a functionality which can be realized using non-black-box simulation but is impossible with black-box simulation.
These blind signatures are a very special case of our general main result, which is as follows. Goyal gave what is known as the bounded pseudo-entropy conjecture, and we resolve this conjecture in the affirmative. More precisely, we give a protocol for all functionalities which satisfy the bounded pseudo-entropy condition. What this condition is, I will tell you in a bit, but let's move on. The round complexity of our protocol is just a polynomial in the security parameter, and we assume the existence of collision-resistant hash functions and constant-round semi-honest OT. Moreover, we subsume all the existing positive results for concurrent computation, namely zero knowledge and the single-input setting. Finally, the last application of our technique is to improve the round complexity of existing protocols. The protocols given in Goyal '12 had round complexity polynomial in the security parameter as well as a parameter D which depended on the input length. We improve this to just a polynomial in the security parameter, independent of the length of the honest party's inputs. Moreover, it was known that any such improvement in the round complexity, that is, making it independent of the input length, has to employ some non-black-box simulation technique. So let me now describe the bounded pseudo-entropy condition for which we obtain our positive result. At a very high level, here we try to understand the information which the adversary gets to learn in the ideal world. Very roughly, it says that the total computational entropy of the information learned by the adversary across all the concurrent sessions, via outputs from the trusted functionality, is a priori bounded. Let me begin with an oversimplified version of this condition, which I will fix later. Okay, so in the ideal world for any functionality, let's say you have some input vector for the adversary.
Then what this says is, regardless of the honest party's inputs, the total number of possible output vectors which the adversary can get from the trusted functionality is bounded. Let me explain this further. So let's say we have these two parties, the honest party and the adversary, and I is the input vector of the adversary. Then what this says is, for this input vector I, there exists a fixed set S(I) of bounded size such that, no matter what the honest party's input vector is, the corresponding output vector will lie in this set, and this is true for all possible honest party input vectors. Now let me tell you why this is not sufficient and needs to be fixed. Let us consider the example of pseudorandom functions. Here the honest party holds a PRF key K, and the adversary, in each session, queries this PRF on different inputs of its choice. That is, in the ideal world, it gets to learn the output of the PRF on inputs of its choice. It seems like, since this key has bounded size, that is λ bits, there can be at most 2^λ possible output vectors. So this satisfies the bounded pseudo-entropy condition which I told you before. But this functionality is impossible to realize in the plain model. So we need to make the condition stronger, and the way we make it stronger is to add the requirement that, along with the output vectors being bounded in number, we also want them to be efficiently testable. What does this mean? It means that the set S has bounded size, and there exists an efficient algorithm T which accepts everything inside the set and rejects everything else. These two conditions together, at a high level, can be thought of as the adversary learning only a bounded amount of pseudo-entropy in the ideal world. To make things clearer, let me tell you a few examples which satisfy this bounded pseudo-entropy condition. Let me begin with the simplest case of concurrent zero knowledge.
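The PRF counterexample can be made concrete with a toy sketch (all names and the tiny parameters here are mine, and SHA-256 truncation merely stands in for a PRF): the number of possible output vectors is indeed bounded by the number of keys, yet the only obvious tester must brute-force the whole keyspace, which costs 2^λ time, so the counting condition alone wrongly admits this impossible-to-realize functionality.

```python
import hashlib

LAM = 12  # toy security parameter; a real λ makes 2**LAM infeasible

def prf(key: int, x: int) -> str:
    # toy PRF stand-in (truncated SHA-256; illustration only)
    return hashlib.sha256(f"{key}|{x}".encode()).hexdigest()[:8]

queries = [3, 1, 4, 1, 5]  # the adversary's chosen PRF queries

def brute_force_tester(outputs):
    """There are at most 2**LAM output vectors (one per key), so the
    COUNT is bounded -- but this generic tester must search the whole
    keyspace, taking 2**LAM time, which is not efficient."""
    return any(all(prf(k, x) == o for x, o in zip(queries, outputs))
               for k in range(2 ** LAM))

honest_outputs = [prf(42, x) for x in queries]
```

With LAM = 12 this brute force finishes quickly, but at a realistic λ it is exponential, which is exactly why the efficient-testability requirement has to be added on top of boundedness.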
In this, the adversary is a verifier and holds a list of NP statements, and the prover holds the NP statements along with their witnesses. The ideal functionality just takes the witnesses and tells the output to the adversary. So since the prover is honest, in each session the adversary just learns the bit 1. So here, intuitively, the adversary is not learning much. It seems to satisfy the bounded pseudo-entropy condition, and in fact it does: the adversary can learn a unique output vector, which is just the all-ones vector, and this is also efficiently testable. Moving on, a slightly more complicated example is that of bounded concurrency. In this, there is an a priori bound on the number of sessions which the adversary can run in the ideal world. So let's say the input vector of the adversary is y_1 up to y_n, where n is bounded, and it can learn all the outputs in these sessions. Since the adversary learns only a bounded number of outputs, it is clear that it can only learn a bounded amount of information. So the possible output vectors are all vectors of this bounded length n, and this is clearly efficiently testable. Both of these examples, zero knowledge and bounded concurrency, could also be achieved before, but let's now look at a new example, concurrent blind signatures, which we did not know how to achieve before. In this, we have a signer and a user. The signer holds the signing key and the user has a message, and in the ideal world, the ideal functionality just computes a signature and gives it to the user; the signer has no output. It is clear that this satisfies blindness, since the signer does not see the message to be signed, and if the underlying scheme is unforgeable, it also has the unforgeability property, that is, the user cannot forge any signatures. Okay, so in the concurrent setting, the user can learn signatures on many messages of its choice.
So here, clearly, the adversary can learn an unbounded amount of information, because the number of sessions is unbounded and hence it gets to learn signatures on an unbounded number of messages, and it is not clear why the bounded pseudo-entropy condition should be satisfied. What we show is that if the underlying signature scheme is unique, then the bounded pseudo-entropy condition, or BPC, is satisfied. By uniqueness, I mean that given the public key, for any message there is just a unique signature which verifies. So though the adversary gets to learn an unbounded number of outputs, each output is information-theoretically fixed by the input, and these output vectors are also testable, by just using the verification algorithm of the signature scheme. This shows why concurrent blind signatures satisfies the bounded pseudo-entropy condition, and, speaking at a high level, any functionality which does not add too much pseudo-entropy into the outputs in the ideal world should be realizable under our condition. Here's a summary of our results. We give a new non-black-box technique for concurrent secure computation which lets us obtain a positive result for all functionalities which satisfy the bounded pseudo-entropy condition, and this has many applications, such as realizing new functionalities which were not known before. Blind signatures is one such functionality, which I just told you about; another is verifiable random functions, and there are some more. The other application is to improve the parameters of existing functionalities, such as the round complexity for the functionalities in the single-input setting. In the rest of the talk, I will just focus on two-party secure computation in the concurrent setting, where the adversary can run an unbounded number of sessions with the honest party and also controls the scheduling of the messages. So the first question is... okay, sorry, the roadmap first.
I'll begin by describing the challenges in concurrent simulation, followed by how we resolve these challenges using our bounded pseudo-entropy condition. So the first question is, how do we design a concurrently secure protocol? A standard approach is to apply the GMW paradigm: you start with some semi-honest secure protocol, compile it with an appropriate kind of concurrent zero knowledge alongside each message, and prove its security. To prove security of such a protocol, you in fact need what is known as non-malleable concurrent ZK, or simulation-sound concurrent ZK. So what is this? It says that the proofs given by the adversary should remain sound even when it is given many different copies of simulated proofs by the simulator. In our work, what we need is simulation-sound concurrent ZK with a straight-line simulation technique. And the only known candidate which works in the fully concurrent setting is the concurrent zero knowledge of Goyal. So the first challenge is to construct a simulation-sound version of Goyal's concurrent zero knowledge. In our work, we show that if we combine the non-black-box technique of Goyal with robust concurrent non-malleable commitments, we can get simulation-sound concurrent ZK with non-black-box, straight-line simulation, which is critical for the rest of our work. I will not go into the further details of this construction; you can look at the paper for that. Let me move on. So going back to the GMW paradigm, we wanted to compile a semi-honest protocol with the ZK protocol. Earlier, all the known protocols for the simulation-sound concurrent zero knowledge which we need were black-box and had a rewinding-based simulator, and this rewinding was shown to be the major bottleneck to getting concurrent MPC. Now what we have is simulation-sound concurrent ZK with straight-line simulation; that is, there is just no rewinding. So are we already done?
That is, can we get concurrent MPC for all functionalities? But this is clearly impossible, from the long list of impossibility results which we already know. So what are we missing? To understand this, we need to go deeper into how these non-black-box simulation techniques work. Speaking at a very high level, this is what happens. We have a simulator talking to the adversary, where the adversary is proving some statement to the simulator. There is also the ideal functionality, which the simulator talks to in order to get outputs and other things. The simulator begins by sending the adversary a commitment to a machine, followed by other messages of other sessions. In between, it talks to the ideal functionality to get some outputs and continues the simulation, at the end of which the adversary sends a long random challenge R. The goal of the simulator is to commit, at the very beginning, to a machine M which is able to regenerate this whole transcript, or can predict R. So how do we do this? Does committing to the code of the simulator as well as the adversary suffice? The answer is no. Because to be able to regenerate the transcript, you also need the information learned from the ideal functionality. But the code of the ideal functionality is not available, so the simulator cannot commit to that code. Note that this is not a problem in the case of zero knowledge, because there is no ideal functionality, and hence the simulator can complete the simulation by just committing to the code of itself and the adversary. So this is the problem: how do we commit to a code which can regenerate the transcript? The idea is that we need to somehow communicate the information learned from the ideal functionality through inputs to the machine M, okay? But now we have another problem: the number of sessions is unbounded, since we are in the fully concurrent setting.
So the length of the input which needs to be passed in is unbounded. Can we allow inputs of unbounded length? The answer is no. We cannot allow arbitrary inputs of unbounded length, as this would break soundness. Why is that? An adversarial prover could later encode the random challenge R from an honest verifier into the input to the machine M and thereby break soundness. So what do we do? Here our bounded pseudo-entropy condition comes to our rescue: it says that the outputs learned by the adversary have bounded pseudo-entropy. So though this output vector might be of unbounded length, since the number of sessions is unbounded, the total amount of information, or the entropy, in this output vector is bounded. So what we allow are inputs of unbounded length but only of bounded pseudo-entropy, which is ensured by the testability condition I told you about earlier. And this preserves soundness, because now an adversarial prover cannot encode the long random challenge, which has very high entropy, into something which has only bounded pseudo-entropy. So at a high level, the idea is that during simulation, we need to communicate the information learned from the trusted functionality to the machine M via inputs. The bounded pseudo-entropy condition tells us that this information can only have bounded pseudo-entropy, and hence we can pass in an output vector of unbounded length while still preserving soundness. We need some more ideas to make this whole thing work, such as the oracle techniques from Deng, Goyal and Sahai, and you can look at the paper for more details. Finally, the conclusion. In this work we show a new non-black-box technique for secure concurrent computation, and we achieve a positive result for all functionalities which satisfy the bounded pseudo-entropy condition. This lets us realize new functionalities and also improve the parameters of existing functionalities. Thank you.