Thanks, Anjum. So this is joint work with Ran Canetti and Vipul Goyal. Since there have already been two talks on MPC and on concurrent security, I'm just going to skip all the basics of those things and get right into it. Our setting is that of protocol composition. That is, we ask the question: what if a protocol is executed concurrently with many copies of the same protocol, or with arbitrary other protocols, and what if all these executions are controlled by a central adversary, an adversary in the middle? And we ask, in this demanding setting, is the protocol's security still preserved? That is, can we achieve protocol composition? For the focus of this talk, we are just going to worry about protocol self-composition. That is, we will consider the scenario where unboundedly many copies of the same protocol are being executed, all controlled by the same central adversary. We'll focus on the two-party setting between parties P1 and P2, where the adversary in the middle can run an unbounded number of executions of this protocol, playing the roles of P1 and P2 interchangeably. And again, we ask the same question: can we achieve security in this demanding setting? While this is weaker than the standard notion of general composition, it is still quite meaningful in many client-server settings. And indeed, many examples of this arise. For our interest, we will be concerned with concurrent multi-party computation, but the same problem in fact arises in many weaker scenarios as well: for example, concurrent zero-knowledge, non-malleable encryption and commitments, and password-authenticated key exchange. So this is a very general problem, and the question is, can we achieve concurrent self-composition securely?
And unfortunately, the answer is no: there are broad impossibility results showing that there exists a large class of functionalities that cannot be securely realized under concurrent self-composition. As we saw in the previous talk, there are some special cases in which this impossibility can be bypassed, but if we are concerned with generality, that is, achieving security for all functions, then these impossibility results hold. So what can we do to circumvent them? Two lines of research have emerged. The first concerns the use of trusted setup assumptions: we assume that there is some trusted party who can generate for us, for example, a common random string, or unclonable functions, or tamper-proof hardware, and so on. These works then use such trust assumptions to realize secure protocols, and in fact, not only do they achieve concurrent self-composition, they can also achieve UC security, the gold standard. The second line of research, which is what we are interested in, is essentially that of relaxing the security definition. Instead of demanding the standard real/ideal security for MPC, we relax the security requirements, usually by allowing some additional leeway to the simulator, or by requiring indistinguishability instead of simulation, and so on. One of the most common definitions in this line of work is super-polynomial-time simulation, where instead of requiring the simulator to be polynomial time, we require it, as the name suggests, only to be super-polynomial time. This clearly relaxes the security, but it is still quite meaningful, and there has been a long line of work in this direction. But really the main question that we ask when it comes to relaxing security definitions is: what security are we actually losing? When we relax the definition, can we precisely quantify the security we lose to concurrent attacks?
And if you think of definitions like super-polynomial-time simulation, it's not always clear how to precisely quantify this loss. So what we'll be studying is the multiple ideal query (MIQ) model, which was introduced in joint work with Vipul Goyal and Rafail Ostrovsky a few years ago at Crypto. The definition looks like the standard real/ideal paradigm, where there is a simulator in the ideal world who can make output queries to the trusted party, except that now the simulator has a reset button. The simulator can reset the ideal functionality and make more than one output query on different inputs of its choice, and all of the outputs it gets from the trusted party are with respect to the same fixed input of the honest party. When I say fixed input, it's fixed for a particular session, and the same experiment can be repeated for each session. Now we can parameterize this definition by a parameter K, and we say that a protocol has query complexity K if we can construct a simulator that makes at most K output queries to the ideal functionality in each session. It's easy to see that this definition gives us a very easy way to quantify the security loss: if we have query complexity K, it means we are leaking at most K outputs, and therefore the security loss is at most K outputs in each session. To see the meaningfulness of this definition, consider the function f with the honest party's input x fixed in a session; the remaining security in that session is essentially the level of unlearnability of f(x, ·) after K queries, where K is the query complexity. So this is quite meaningful: even when K is polynomial, this makes sense for unlearnable functions, but of course our goal will be to minimize K as much as possible, and we will discuss this later on.
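To make the reset-button experiment concrete, here is a minimal Python sketch of the MIQ ideal world. The class name, the explicit budget K, and the toy equality functionality are all my own illustration, not part of the formal definition; the point is only that the honest input stays fixed while the simulator spends at most K queries per session.

```python
class ResettableIdealFunctionality:
    """Toy MIQ ideal world: the honest party's input x is fixed for the
    session, but the simulator may 'reset' and query the trusted party
    on up to K adversarial inputs of its choice."""

    def __init__(self, f, honest_input, K):
        self.f = f
        self.x = honest_input
        self.budget = K  # query complexity bound for this session

    def query(self, adversarial_input):
        if self.budget <= 0:
            raise RuntimeError("query budget K exhausted for this session")
        self.budget -= 1
        return self.f(self.x, adversarial_input)


# Example: an equality test, the core of password-based key exchange.
# With query complexity K, the simulator learns at most K outputs,
# i.e. at most K password guesses against the fixed honest input.
session = ResettableIdealFunctionality(lambda x, y: x == y, "hunter2", K=3)
results = [session.query(g) for g in ["12345", "letmein", "hunter2"]]
# results == [False, False, True]; a fourth query would raise an error.
```

This also previews why constant K matters: for the equality function, K queries leak exactly K guesses, no more.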
And as a special case of this notion, when K is constant, we get password-based key exchange as per the definition of Goldreich and Lindell. I'll come back to this later when we discuss our results. Okay, so as should be obvious, the main goal in this line of work is to minimize the query complexity: the smaller the K, the smaller the security loss. So let's look at the state of the art. Let's first discuss lower bounds. In this context, it was shown by Goyal and myself at EUROCRYPT a couple of years ago that it is impossible to achieve constant query complexity. That is, there exist functionalities that cannot be securely computed in the MIQ model with constant query complexity. One point I want to remark here is that this result rules out query complexity of a universal constant. That is, the constant must be fixed a priori, before the protocol is designed, and then we require that any simulator for this protocol makes at most these fixed K queries. So it rules out a universal constant. In terms of positive results, there have been two prior works, the first with Vipul Goyal and Rafail Ostrovsky and the second with Vipul Goyal and Divya Gupta, where we constructed secure protocols, assuming oblivious transfer, where the average query complexity in each session was a constant. Let me clarify what I mean by average. The broad guarantee here was that if there are n sessions, where n could be any arbitrary polynomial in the security parameter, then the total number of output queries made by the simulator in the ideal world would be a constant times this number of sessions n.
So what this broad guarantee gives you is that on average, in any session, the number of queries will be a constant, but in the worst case, the number of queries in a particular session could be an arbitrary polynomial, in particular proportional to the number of sessions. And this is really bad, because it means that in some sessions we may completely lose all security. So what we want is a worst-case guarantee. That is, we ask: what is the best possible worst-case query complexity that we can achieve in this model? And what we show is that, assuming oblivious transfer, we can construct secure protocols where the worst-case query complexity in every session is only a constant. Now, this may sound funny in light of the negative result I mentioned earlier, and the way we circumvent the lower bound is by making this constant adversary-dependent. What I mean is that if the number of sessions is, let's say, n to the c, where n is the security parameter and c is some constant, then the number of ideal queries in every session would be roughly two to the c, which is still a constant. And this matches the lower bound from EUROCRYPT 2013, where adversary-independent, that is universal, constants were ruled out. So this essentially settles the main problem in this area, and a nice feature is that the protocol is actually the same as in previous works; what we do is essentially give a new analysis of these protocols. And as a corollary of this main result, we get a concurrent secure password-authenticated key exchange protocol in the plain model, that is, the standard model: no setup assumptions, no random oracles, nothing. And this is the first such protocol that is known.
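As a quick sanity check on what an adversary-dependent constant buys for the PAKE corollary, here is an illustrative back-of-the-envelope calculation. The function name and the framing of 2^c / |D| as the adversary's "advantage" are my shorthand for the informal bound in the talk, not a formal statement.

```python
import math

def pake_guesses(num_sessions, n, dict_size):
    """Illustrative only: with n^c concurrent sessions, the worst-case
    query complexity per session is roughly 2^c, i.e. roughly 2^c
    effective password guesses, so the adversary's advantage against a
    dictionary D is about 2^c / |D|.  Rounding keeps the toy
    calculation exact for integer c."""
    c = round(math.log(num_sessions, n))
    guesses = 2 ** c
    return guesses, guesses / dict_size

# e.g. security parameter n = 128, 128^2 = 16384 sessions,
# and a dictionary of one million passwords:
g, adv = pake_guesses(16384, 128, 10**6)  # 4 guesses, advantage 4e-06
```

The design point is simply that the loss grows with the adversary's session count n^c as 2^c, not with the number of sessions itself.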
And to compare with the definition of Goldreich and Lindell from 2001, what we essentially get is that for n to the c sessions and a password dictionary D, the security of the exchanged keys, as per their definition, is essentially two to the c divided by the size of the dictionary. This can be seen as the number of password guesses that the adversary can make. Okay, so let's jump into the technical details. The first question we should ask is: why is concurrent security even hard? Can we just apply the GMW paradigm in the concurrent setting? We saw this kind of slide in the previous talk, but it will be slightly different here. So let's say we start with a semi-honest protocol, and since we are in the concurrent setting, instead of using standard zero knowledge, we are going to use a concurrent zero-knowledge protocol. And indeed, there are many concurrent zero-knowledge protocols known in the literature. Since we are in the concurrent setting, we also have to worry about non-malleability, so let's throw in non-malleability as well; in fact, we also have concurrent non-malleable zero-knowledge protocols in the literature. So the question is: why doesn't this already give concurrent secure computation? To understand this, let's go back and figure out how simulators typically work. The main job of the simulator is essentially to extract the adversary's input, and usually it achieves this by rewinding. Of course, there is also non-black-box simulation to consider, but let's ignore it for this talk; we will only consider black-box simulation. Now, since we are in the concurrent setting, the simulator must extract the adversary's input in all of the sessions. That's its main job; the rest is usually easy and can be taken care of by standard techniques. So let's just worry about extracting the adversary's input in all sessions.
Okay, so now let's revisit the core problem of concurrent simulation. This observation comes from Lindell. Consider the scenario where the simulator is trying to extract the adversary's input in a bunch of concurrent sessions. Since the adversary controls the scheduling of messages, let's consider a rather complicated scheduling of messages across different protocols. Here, the blue arrows denote an outer session and the orange boxes denote inner sessions; the inner sessions are interleaved inside the messages of the outer protocol. The simulator's goal is to extract the adversary's inputs in all of these sessions. So let's say it tries to extract the adversary's input by rewinding the outer blue session somehow: it rewinds in the top part of the protocol and sends a new message. Note that when it does so, a new copy of the inner orange session will be executed by the adversary, and in particular, the adversary may in fact change his input in this rewound execution of the inner session. So in the original copy of the orange session, he may use an input y, and in the rewound execution, he may use a different input y prime. Now, in order to complete this rewinding, the simulator must obtain the output for this y prime as well. If you think about standard MPC, there the simulator only has one output query, so it gets stuck here. But since I already mentioned the MIQ definition, clearly MIQ can help us in this setting: in this notion, the simulator is allowed to make multiple output queries, so it can simply reset the ideal functionality and take care of this problem by getting multiple outputs.
And indeed, this observation already shows that the GMW paradigm, using known concurrent zero-knowledge protocols and their simulators, already yields a positive result in this setting, but with the caveat that the query complexity is some large polynomial. And this is quite bad; we want to minimize the query complexity. So how do we improve it? If we look back, the main reason we get a large query complexity is that these concurrent zero-knowledge simulators rewind a lot. They rewind many, many times, some large polynomial number of times, and therefore we get a large query complexity. With this view, the goal is to minimize the number of rewinds: if we minimize the number of rewinds, then we minimize the query complexity. So it sounds simple, right? Let's just rewind fewer times; what's the big deal? Well, it turns out to be a big deal, because these works on concurrent zero knowledge were not rewinding many times just for fun. There was a reason behind it, so let's try to understand that reason: why did they rewind so many times? Again, I'm going to consider a simulator and an adversary, but now the simulator will play the role of the prover and the adversary will play the role of the verifier. In fact, I'm going to use the same nested interleaving of sessions that I discussed earlier. These sessions now correspond to a zero-knowledge protocol, but the details don't matter for us. The main challenge in this setting is for the simulator to simulate the adversary's, or the verifier's, view in polynomial time. And again, in our setting the simulator can only simulate by rewinding.
So let's say it tries to rewind the outer session somehow by sending some new message. Again, when it does so, the inner orange session will be executed again. Here, we don't care about the outputs; in zero knowledge the output is simple, just a single accept bit. The problem is that when this rewinding happens, the simulator is forced to simulate the inner orange session two times. It may have already done a lot of work simulating this orange session on the main execution, but when it rewinds the outer session, it is forced to re-simulate the orange session inside. And if you consider some adversarial nesting of sessions, this can quickly lead to exponential simulation time. So this is really the main challenge in concurrent simulation. Looking at this picture, it looks very similar to the problem of multiple outputs that we discussed earlier, and in fact, what I'm going to show you later is that these are essentially the same problem. The way this problem of exponential time is solved is by using sophisticated rewinding strategies which involve recursion. The basic idea is as follows. Let's say we have a concurrent schedule of many sessions, where the different colors denote different sessions. The simulator takes the entire transcript of all the protocols and first divides it into two parts; when it divides it into two parts, we say that it is using a splitting factor of two, and this will be important for us later on. After dividing the transcript into two parts, it rewinds each of these parts independently. And now we just recurse: we take each of these two parts, divide them into two further parts, and rewind each of them independently again. And we keep going until we reach the point where each part is just of length one, that is, it consists of just one pair of messages. At that point we stop.
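The recursive schedule just described can be sketched as a toy simulation. This is my own illustration of the generic split-and-rewind pattern, not the exact strategy of any specific paper; it counts how many times each message slot gets executed when every part is run twice (the main thread plus one rewound thread).

```python
from collections import Counter

def rewind_simulate(transcript, split=2):
    """Toy model of the recursive rewinding schedule: divide the
    transcript into `split` parts, execute each part twice (main
    thread + one rewound thread), and recurse until the parts have
    length one.  Returns how many times each message slot ends up
    being executed across the whole simulation."""
    counts = Counter()
    if len(transcript) <= 1:
        counts.update(transcript)
        return counts
    size = max(1, len(transcript) // split)
    for i in range(0, len(transcript), size):
        part = transcript[i:i + size]
        for _thread in range(2):  # each part is rewound once, so run twice
            counts.update(rewind_simulate(part, split))
    return counts

# With splitting factor 2 and m message slots, every slot is executed
# 2^(log2 m) = m times: polynomial total work, which is the point of
# the recursion -- but if a slot is an output message, that already
# means m ideal queries, far too many for the MIQ model.
m = 16
counts = rewind_simulate(list(range(m)), split=2)
assert all(v == m for v in counts.values())
```

This is exactly the tension the talk is setting up: the recursion keeps simulation time polynomial, but a naive splitting factor makes every output message reappear many times.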
I'm not going to argue why these rewinding strategies work; that's for another day. But you can trust me, or in fact, you can trust all of these people, that these rewinding strategies really work. So now let's come back to the MIQ model, to the concurrent secure computation setting. Here, really the only difference is that now we will be concerned especially with the output messages. Recall that our goal was to minimize the query complexity, which in turn translates to the number of times the output messages appear in the protocol transcript, or rather in the simulation transcript. Therefore we will be concerned with minimizing these output messages during the simulation; they are denoted by the bold arrows. Now if we apply these rewinding strategies, you can look at this picture: we are rewinding so many times, clearly we are going to get a large query complexity. What we really want is essentially to throw away some of the rewindings and rewind very frugally, just enough to extract all the adversary's inputs; then we will get a small query complexity. This problem can be broadly studied as the problem of precise simulation, which was initiated by Micali and Pass in the standalone setting, and later studied by Pandey et al in the concurrent setting. The problem says the following: we want to minimize the overhead of simulation. What do I mean? Think of the running time of the simulator. Typically, we say that it could be some arbitrary polynomial in the running time of the adversary. But in the case of precise simulation, we say that a simulator is precise if its running time has only a constant overhead over the running time of the adversary.
And now we can map this to our problem of query complexity. We can consider a similar precision problem for the number of outputs, where in the real world the number of outputs seen by the adversary is n, where n is the number of sessions, so one for every session. And now we want that in the ideal world, the number of outputs that the simulator learns is a constant times n. So it's really the same precision problem. In fact, it was shown by Goyal et al in 2010 that you can translate precision in running time to precision in query complexity, so these really are the same problems, great. And we do now know rewinding strategies which achieve this precision. So what's the problem? The problem is that this view of precise simulation only gives us global precision: we are only saying that if the number of outputs in the real world is n, then the total number of outputs in the ideal world is a constant times n. This global precision only gives an average-case guarantee on the query complexity of the simulator, whereas what we want is a worst-case guarantee. So we need to do something new. At a high level, our strategy is going to be this. As we already saw, the query complexity of the simulator in any given session, say i, is essentially the number of times the output message of that session appears in the simulation transcript. It obviously appears more than once, because we are rewinding. And now we want to simply count how many times this output message appears in the simulation transcript. What we do is essentially a new combinatorial analysis of the rewinding strategy of Goyal et al from 2013, and with this new analysis, we are able to show that their simulator essentially already has constant worst-case query complexity. Our strategy will proceed in three steps.
In the first step, we consider a restricted class of adversaries that we call static adversaries. Static adversaries are ones which do not change their behavior upon being rewound: if I rewind the adversary, it is going to send me the same messages that it sent me on the main execution. In this case, we are able to show that if you take existing rewinding strategies like those of Kilian and Petrank or Prabhakaran et al, where the splitting factor for the recursion is n, where n is the security parameter, then you already achieve constant query complexity. In the second step, we consider adaptive adversaries, and these are really the adversaries that we care about: they may change their behavior upon being rewound. Here, we proceed in two steps. First, we again consider the same rewinding strategies of Kilian and Petrank and so on, with splitting factor n, and here we are able to show a slightly worse bound: the query complexity is logarithmic in the security parameter. Then finally, we apply the sparsification step of Goyal et al, which essentially squashes this query complexity from logarithmic down to a constant. So let me show you these steps pictorially. The strategy looks like this: we are going to look at the recursion tree of these rewinding strategies. We take the main transcript, the main execution, and divide it into n parts (in the picture it's simply two, for pictorial reasons). Then we take each of these parts and run it two times; each part is executed two times. And then we just repeat this process, recursing until we reach the final level, where each part has size one.
Now, let's consider some particular session and count how many times the output message of that session appears in the entire simulation transcript. Let's say that for a given session, the output message appears in the second half of the protocol transcript, that is, in the right half, denoted by this red box. Now, when we rewind, we divide this red part into two parts and run each of them independently. Again, let's say the message appears in the right part of this right part. Since the adversary is static, it will always appear in the same halves: the adversary cannot change its behavior upon being rewound, so the message is always going to appear in the same parts, either all the right parts or all the left parts. And now we can just repeat this idea going down the recursion tree. So the observation is as follows: at each level in the recursion tree, this output message can only appear two times. And if we use a splitting factor of n, then the depth of the recursion, that is, the number of levels, is a constant, and therefore the total query complexity will also be a constant. So let me just take two more minutes and consider adaptive adversaries. We are going to do the same thing, but now the main challenge is the following. If we again look at the part where the output message appears, the problem is that it may appear not only in the places where it was appearing previously; since the adversary can change its behavior upon being rewound, it may also appear in the first half of the tree. This is because even if this message did not appear on the main execution, when the adversary is rewound, it may suddenly start sending this output message in the rewound executions. And this is what messes up the calculation: the previous bound of a constant does not work anymore.
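The counting argument for static adversaries above can be sanity-checked with a few lines of arithmetic. This is an illustrative sketch of the bound, with my own function name; the constants are exactly the ones from the talk: at most 2 appearances per level, and constant depth c when the splitting factor is n.

```python
import math

def static_query_bound(num_sessions, n):
    """Sketch of the static-adversary counting argument: with splitting
    factor n and roughly n^c message slots in total, the recursion tree
    has depth about c; on each level the fixed session's output message
    can show up on at most 2 threads, so it appears at most 2^c times
    overall -- an adversary-dependent constant.  Rounding keeps the toy
    calculation exact for integer c."""
    c = round(math.log(num_sessions, n))  # depth of the recursion tree
    return 2 ** c                         # worst-case ideal queries

# With security parameter n = 128 and n^2 sessions: at most 4 queries
# per session; with n^3 sessions: at most 8.
assert static_query_bound(128 ** 2, 128) == 4
assert static_query_bound(128 ** 3, 128) == 8
```

Note how the bound depends on the adversary (through c, its number of sessions) but not on the security parameter alone, which is exactly how the EUROCRYPT 2013 lower bound on universal constants is circumvented.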
So essentially the bad event is that the output message may appear on a rewound execution even before it appears on the main execution. To solve this problem, we use the indistinguishability of execution threads. This is a property common to all known rewinding techniques: the behavior of the simulator appears the same to the adversary on all execution threads. Using this property, we can show that the probability that this bad event happens once is at most a constant. Then we can essentially repeat this idea and argue that the total number of times the bad event can happen is only logarithmic; if it is super-logarithmic, then the probability becomes negligible. So this is how we handle adaptive adversaries with the previous rewinding strategies. The final step is the magic step where we use the rewinding strategy of Goyal et al, which can be seen as follows. This rewinding strategy looks very much like the previous rewinding strategies, except that a lot of the nodes in the recursion tree are simply dropped: essentially, a polylog fraction of the nodes at each level are simply deleted. And because of this step, even though we had logarithmic query complexity with the previous rewinding strategies, here we are able to squash it down to a constant. So essentially, the bad event before was happening logarithmically many times, and now, after sparsification, we can show that it only happens a constant number of times.
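The "super-logarithmic implies negligible" step can be illustrated with a rough calculation. This is only a sketch of the flavor of the argument: I assume, purely for illustration, that each occurrence of the bad event happens with probability at most a constant p, so that t occurrences happen with probability roughly at most p^t (the actual proof has to handle the conditioning carefully).

```python
import math

def bad_event_probability_bound(n, t=None, p=0.5):
    """Illustration of the tail-bound flavor of the argument above: if
    each occurrence of the bad event (the output message showing up on
    a rewound thread before the main thread) happens with probability
    at most a constant p, then t occurrences happen with probability
    roughly at most p^t.  For t super-logarithmic in n this is
    negligible, which is why the bad event can be assumed to happen
    only O(log n) times.  The constant p = 1/2 is purely illustrative."""
    if t is None:
        t = int(math.log2(n) ** 2)  # a super-logarithmic threshold
    return p ** t

# For n = 2^10, the threshold is t = 100 and the bound is 2^-100,
# far below any inverse-polynomial in n.
```

So with overwhelming probability the bad event contributes only a logarithmic number of extra queries, which the sparsification step then squashes to a constant.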