OK. So this is joint work with Sanjam Garg, Vipul Goyal, and Amit Sahai. Since this work is about concurrently secure computation, let's start by reviewing the definition of secure computation. The basic idea is as follows. We have two parties holding private inputs, x and y, and they want to jointly compute some function over these inputs. So they can run a protocol pi and jointly compute this function. The security guarantee is that even if one of these parties is corrupted, it does not learn anything more than the function output. What this means, essentially, is that the parties could have simply sent their inputs to a trusted oracle who computes the function output and then sends it back to them. Secure computation, as we know, is a great primitive in cryptography and has found various applications. One point that will be crucial for this talk is that the original definition of secure computation only guarantees security when we run the protocol in isolation; that is, it only guarantees standalone security of protocols. In reality, things are different. Today, we run protocols in network environments, such as the internet. In such a scenario, various parties across different protocol executions may be corrupted, and they may be able to mount coordinated attacks on the honest parties. So what we really want is security under concurrent composition: even when we run protocols concurrently with other protocols, they should remain secure. To be more concrete, when we talk about security of multiple executions of the same protocol, we say security under concurrent self-composition. On the other hand, when we talk about security of arbitrary protocols being executed concurrently, we call it concurrent general composition. And the UC framework of Canetti is a specific formulation for capturing security under concurrent general composition.
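To make the trusted-oracle picture concrete, here is a minimal sketch in Python. The function and the inputs are purely illustrative; a real protocol of course replaces the oracle with interaction between the parties.

```python
# Ideal world for two-party computation: both parties hand their private
# inputs to a trusted oracle, which reveals only the function output.

def ideal_world(f, x, y):
    """Trusted oracle: receives both private inputs, returns only f(x, y)."""
    out = f(x, y)
    return out, out  # each party learns the output and nothing else

# Illustrative example: the millionaires' comparison x > y.
alice_out, bob_out = ideal_world(lambda x, y: x > y, 5_000_000, 3_000_000)
```

Security of a real protocol pi then says: whatever a corrupted party learns from running pi, it could also have learned in this ideal world.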
So the key point is that standalone security does not imply security under concurrent composition. OK, so let's start by reviewing some of the prior work in the area of concurrent security, because that is the focus of this talk. Unfortunately, when it comes to designing concurrently secure protocols, things are not that great. If we are willing to make some global trust assumptions, for example, trusting a party to issue an honestly chosen common reference string, then we can get general positive results, and there is a long line of work dealing with this. However, do we really want to trust parties? Indeed, a driving goal in cryptography is to eliminate the need to trust other parties. Therefore, ideally, we would like to design protocols in the plain model, without trusting other parties. But when it comes to designing protocols in the plain model, by and large, most of the results in this area have been negative. Indeed, starting with the impossibility of achieving UC security, there has been a long line of impossibility results, ruling out general composition and even security under weaker notions of concurrent self-composition in various settings, such as with adaptive inputs, and even with fixed inputs, and so on. So things might look pretty hopeless and bleak. But let me try to convince you that things are not as bad as they may seem. In fact, we can achieve relaxed notions of security in the plain model that are quite meaningful in various application scenarios. One such popular notion, which we will be interested in in this work, is the notion of super-polynomial time simulation. The name kind of already gives away what it means, but I'll discuss this notion in a bit more detail shortly.
Another notion that has been studied is that of input-indistinguishable computation, which, very crudely speaking, you can think of as an analog of witness indistinguishability in the context of secure computation. In this work, we build new techniques for obtaining interesting results under both of these frameworks. But for concreteness and simplicity, in this talk I'll only focus on security under super-polynomial time simulation. So let's try to understand what security under super-polynomial time simulation means. The motivation is as follows. For various real-world applications, for example, voting, privacy-preserving data mining, et cetera, the ideal-world security for these applications is actually statistical or even information-theoretic. So in these cases, the running time of the simulator does not really matter. The idea, then, is that unlike the standard definition of secure computation, where the simulator must run in polynomial time, in the case of SPS security the simulator is allowed to run in super-polynomial time. The effective security guarantee we get is that any real-world attack can be translated to the ideal world, but in super-polynomial time. Of course, ideally we would like to get polynomial-time security, but as I mentioned earlier, in general this is impossible. To better motivate the exact question that we are interested in in this work, let me review some of the prior work in this area. Perhaps unsurprisingly, the initial works in this area relied on security assumptions against super-polynomial time adversaries in order to obtain their constructions. But more recently, in a beautiful work, Canetti, Lin, and Pass showed that, surprisingly, such assumptions are not necessary, and in fact we can use only standard polynomial-time assumptions to construct protocols achieving security with respect to super-polynomial simulation.
The main drawback of their result, however, is that it requires a large polynomial number of rounds. And if we look closely, to some extent, the large round complexity seems somewhat inherent to their approach. So the question that we ask in this work is whether we can construct constant-round protocols that achieve SPS security by relying only on standard polynomial-time assumptions. And we, of course, answer this question in the affirmative. So let me summarize our results. Our main contribution is a new black-box simulation technique that allows us to get constant-round SPS-secure protocols, in fact secure in the UC framework, relying only on standard polynomial-time assumptions. We also give a new simulation-based definition of input indistinguishability that is cleaner than the original definition of Micali, Pass, and Rosen, and that captures more cases; in particular, it allows us to capture randomized functionalities, which was not the case earlier. And as another application of our black-box simulation technique, we are able to show that our SPS-secure protocol is also secure with respect to our new definition of input indistinguishability. OK, so in this talk, I will only focus on the black-box simulation technique. Let's start by trying to understand why the initial works required security assumptions against super-polynomial time adversaries. So let's look at kind of a protocol template that these works used. The idea is that, at some point during the protocol, there will be a phase, a so-called trapdoor phase, where the simulator can run in super-polynomial time to get a trapdoor. And once it has this trapdoor, it can simulate the remainder of the protocol in a straight-line manner, without any complications. So how do we prove security? Well, let's try to construct a hybrid argument.
The idea is that, in the beginning, we start by running in super-polynomial time to get this trapdoor. And once we have the trapdoor, we can go on and make various changes in the protocol until we finally arrive at the simulator. But note that as soon as we start running in super-polynomial time in the initial hybrid, we cannot rely on polynomial-time assumptions in the later hybrids anymore, because these hybrids are inefficient; they are already running in super-polynomial time. So we can trivially break polynomial-time assumptions, and we don't get any contradiction. Indeed, this is why the prior works required super-polynomial time assumptions for their constructions. So let me now describe a basic high-level approach that lets us move from using super-polynomial time assumptions to using only standard polynomial-time assumptions. The idea is as follows. Instead of constructing the hybrid argument in this manner, we take the initial step where we run in super-polynomial time and move it, essentially, to the bottom. That is, we will run in super-polynomial time only at the very end of the hybrid experiment, and this last step will be our simulator. So now everything before this step is polynomial time. But this seems a little bizarre, because in order to execute the previous set of hybrids, we would need the trapdoor. So how do we get the trapdoor? We turn to a good friend: rewinding. So now, in the beginning, we will do rewinding to get this trapdoor, and this rewinding will be done in polynomial time. Once we have the trapdoor, we can execute this set of hybrids in the same manner as before. But the crucial difference is that now all these hybrids run in polynomial time. So in these hybrids, we can rely on polynomial-time assumptions.
And at the very end, in the final step only, we run in super-polynomial time. In the final step, we do not do any rewinding; we just run in super-polynomial time to get the trapdoor, the same trapdoor that we were getting earlier by rewinding. So this is the basic approach. Now, it makes sense to question the feasibility of this approach. We know from experience that our good friend rewinding is not really such a good friend when we go to the concurrent setting. Indeed, we know that rewinding in the concurrent setting can be quite complex, and the work of Canetti, Lin, and Pass uses pretty sophisticated rewinding techniques to prove security of their protocol. We will also use rewinding, but in a somewhat different way that will help us get constant rounds, unlike the previous result. But it makes sense to, again, stop for a second and look at this approach. Does it even make sense? If we are using rewinding, and if the rewinding works, which it must for the proof to go through, then why do we need the super-polynomial time step at the very end? Why not just get rid of it, stop at the penultimate hybrid, and be done? That could be the final simulator. So let me try, again, to drive home the point that the approach does make sense. The main point is that we are doing rewinding only in the intermediate hybrids. The final simulator, which runs in super-polynomial time, does not do any rewinding. And in fact, it had better not do any rewinding, because we are trying to get UC security, where rewinding is not allowed. So the hope is that because we do rewinding only in this intermediate set of hybrids, we can somehow leverage this fact to get security in constant rounds, which is typically not possible in the concurrent setting. So at this point, let's try to review some of the main challenges, or the vagaries, of rewinding in the concurrent setting.
To be concrete, let's think of concurrent executions of a protocol between a simulator and the adversary. Remember that in the concurrent setting, the adversary is the one who controls the scheduling of messages. So let's look at a specific scheduling. Here, blue arrows denote the outer session, and the inner session is denoted by orange. For simplicity, I'm just thinking of concurrent self-composition; it suffices to drive home the challenges. Now at some point, the rewinding simulator may try to rewind the outer session. When it rewinds the outer session, the inner session may also get rewound. Furthermore, in order to complete this rewound execution thread, the simulator may need to do recursive rewinding in the inner session. So essentially, the simulator may have to keep on rewinding. Typically, recursive rewinding is very problematic and requires a large number of rounds. In fact, if you think of the case of black-box concurrent zero-knowledge, there is a lower bound of logarithmic rounds. So somehow we need to get past this, because we want constant rounds. Another related problem that comes up is that the adversary may start using different inputs in different instances of a protocol. When we are running this orange protocol on the main execution thread, it may use some input y, and when we do rewinding, it may suddenly use some different input y prime. The simulator will then have to fetch the outputs corresponding to both of these inputs from the ideal functionality. So essentially, the simulator may end up needing multiple queries to the ideal functionality, which is obviously not allowed by the security definition. So let me now describe our key insights, how we are able to get past these problems. The starting point is that, again, we are doing rewinding only for extraction purposes and not for general simulation.
If we look at the sequence of hybrids that we discussed earlier, which forms our proof approach, the only difference between the final rewinding-based hybrid, hybrid 4 here, and our final simulator, hybrid 5 here, is the manner in which we do extraction. In every other way, they are essentially the same. It's just that in hybrid 4, we do rewinding to extract some trapdoor, and in hybrid 5, we run in super-polynomial time to do the extraction. So now we only care that the extraction somehow works. Of course, the SPS simulator always extracts, by definition. So we just need to make sure that the rewinding-based extraction also works, and we want to do so in constant rounds. The key idea, in some sense, is that since we are doing rewinding only in the hybrid experiments, we actually have access to the honest parties' inputs, and we will use these inputs on the rewound execution threads. In general, this is not allowed, because the simulator does not have the honest parties' inputs. But once again, since we are doing rewinding only in the hybrids, we can use them, and this is perfectly legitimate because our final simulator does not rewind. Since we only care about the indistinguishability of the outputs of these experiments, and we only output the main thread, it is perfectly fine to use the honest parties' inputs when we rewind. And this pretty much takes care of the problems I mentioned earlier. Because we have the honest parties' inputs during the rewindings, we no longer need to query the ideal functionality multiple times; we can just compute the function outputs on our own. Moreover, we can now simply behave honestly on the rewound execution threads, so we will not need to do any recursive rewinding either. So pretty much we can take any simple three-round extraction protocol and use that to do the rewinding.
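As a toy illustration of what rewinding-based extraction from a three-round protocol looks like (this is not the actual protocol from the paper; the arithmetic and all names below are invented for the sketch): the extractor fixes the prover's first message, obtains responses to two distinct challenges by rewinding, and solves for the secret, in the style of special soundness of a Sigma protocol.

```python
import random

P = 2**61 - 1  # a prime modulus for the toy arithmetic

class Prover:
    """Toy 3-round protocol: response to challenge c is a + c*s mod P."""
    def __init__(self, secret):
        self.secret = secret
        self.a = random.randrange(P)  # per-execution randomness
    def first_message(self):
        return pow(3, self.a, P)      # a stand-in "commitment" to a
    def respond(self, challenge):
        return (self.a + challenge * self.secret) % P

def extract_by_rewinding(prover):
    """Fix the first message, then rewind to get responses to two challenges."""
    prover.first_message()
    c1, c2 = 1, 2                    # main-thread and rewound-thread challenges
    r1 = prover.respond(c1)          # main thread
    r2 = prover.respond(c2)          # rewound thread: same a, new challenge
    # Special soundness: s = (r2 - r1) / (c2 - c1) mod P.
    return ((r2 - r1) * pow(c2 - c1, -1, P)) % P

s = extract_by_rewinding(Prover(secret=123456789))
```

The point of the talk's approach is that this kind of simple, non-recursive rewinding suffices, because the rewound thread can be completed honestly using the honest parties' inputs.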
And this helps us get constant rounds. So that's really the key idea here. When we try to implement it, some issues come up, so in the last few minutes, let me try to explain those issues. What is our strategy again? We want to use the honest parties' inputs during the rewindings. But note that, at the same time, on the main execution thread, we do not want to use the honest parties' inputs. So it's like: when I rewind, I use the honest parties' inputs, but on the main thread, I don't. This may be problematic, because typically a rewound thread shares some prefix of messages with the main thread. If we have already started cheating on the main thread and then rewind, we may not be able to suddenly behave honestly; we may already be committed to cheating, and will have to continue cheating. And if we continue cheating, maybe we'll have to do recursive rewinding again. Let me try to explain this with an example. Say we have a simple protocol where, at some point, the honest party commits to, let's say, 0, and then gives a zero-knowledge proof that it actually committed to 0. Now we are trying to construct a rewinding simulator. What happens is that maybe the simulator, for the purposes of simulation, has to commit to 1. And now if it does rewinding, there is a problem, because the simulator will need to cheat in the zero-knowledge proof. It already committed to 1, so since it is already cheating, it cannot suddenly behave honestly in the zero-knowledge proof, because the statement is false. It has to cheat. But in order to cheat, it may need to do recursive rewinding, and as I mentioned earlier, that is the one main thing we want to avoid, because we want constant rounds. So our solution is essentially as follows.
Very roughly, we want to design a protocol that allows a rewinding simulator, or a rewinding hybrid, to extract a trapdoor from the adversary before it starts to cheat in any session. In particular, if we think of a session, say session i, that appears somewhere in the protocol, then the idea is that if we have already started cheating in session i when it appeared on the main thread of execution, it means that we have already extracted a trapdoor. And now if we do any further rewinding, then since we already have the trapdoor, we can continue to cheat using the trapdoor in a straight-line manner; that is, no recursive rewinding. On the other hand, if we were behaving honestly so far in session i, then if we rewind in the future, we can simply use the honest party's inputs to behave honestly, and again, no recursive rewinding. So that's really how we design the protocol, and we are able to implement our strategy, where we cheat on the main thread but still sometimes behave honestly on the rewound threads, and prove security. The actual proof strategy is a bit more involved, as you may expect, but I'll skip the details. So let me conclude with some open problems. In this work, we only consider the standard, original security definition of super-polynomial time simulation. There is a stronger notion called angel-based security, which provides better composition guarantees and was introduced by Prabhakaran and Sahai. Our solution does not seem to extend immediately to this framework, so it would be interesting to see whether we can get constant-round protocols using only standard assumptions in this model as well. It would also be very interesting to explore other notions of concurrent security, because if you really think about it, the existing definitions of concurrent security do not seem to precisely capture what information must be leaked in the concurrent setting.
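The invariant just described can be caricatured in a few lines of Python; the class and its fields are hypothetical, purely to show the case analysis that avoids recursive rewinding.

```python
class Session:
    """Toy model of the invariant: a session may start cheating only
    after a trapdoor has been extracted (all names are illustrative)."""
    def __init__(self, honest_input):
        self.honest_input = honest_input
        self.trapdoor = None
        self.cheating = False
    def extract_trapdoor(self, td):
        self.trapdoor = td
    def start_cheating(self):
        # Invariant: cheating implies a trapdoor was already extracted.
        assert self.trapdoor is not None, "must extract before cheating"
        self.cheating = True
    def next_message_on_rewound_thread(self):
        if self.cheating:
            # Case (b): continue cheating straight-line with the trapdoor.
            return ("cheat", self.trapdoor)
        # Case (a): still honest, so replay honestly with the real input.
        return ("honest", self.honest_input)

s = Session(honest_input="x")
m1 = s.next_message_on_rewound_thread()   # honest so far
s.extract_trapdoor("td")
s.start_cheating()
m2 = s.next_message_on_rewound_thread()   # straight-line cheating
```

In either branch, completing a rewound thread needs no further rewinding, which is exactly what keeps the round complexity constant.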
And so at least I think that perhaps the right definition of concurrent security is still out there to be discovered. So with that, I'll conclude. Thank you. Any questions? OK, so let's thank the speaker again.