Thank you. Good morning, everyone. Welcome to Crypto. I'm going to talk about a simpler variant of UC security for the case of standard multi-party computation; this is joint work with Ran Canetti and Asaf Cohen. Probably all of you know that in the setting of secure computation, we have a set of parties with private inputs who wish to compute some joint function of their inputs while preserving certain security properties like privacy, correctness, independence of inputs, and more, and these have to be guaranteed even if some of the parties are corrupted and attack the protocol in some way. The reason why this topic has had so much study over the last 25 years or so, and is so central to cryptography, is because it can model any cryptographic task, or, I should say, almost any cryptographic task. So when we prove feasibility or infeasibility, or construct general protocols, they have very broad applicability, and that's really what's made secure computation so central to the theory, and recently the practice, of cryptography. When we want to actually define security, we don't define it by looking at privacy, correctness, and a list of different properties. Instead, we try to think of the best we could actually hope for. The best we could hope for is if there was just one person we could all trust: an incorruptible trusted party, with magic ideal channels between each party and that trusted party. Then everyone could just send their inputs to the trusted party, who could compute the function and return the output. This would give us all of the properties and everything that we wanted. The problem is that this is an ideal world; we don't really have such a trusted party, but this is essentially what we want to emulate. And note that in this ideal world the adversary really can't do anything but choose its input. So there's nothing it can do to attack the protocol, and everything is as we would want.
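As a toy illustration of that ideal world, here is a minimal Python sketch (my own, not from the talk; the function and party names are purely illustrative):

```python
# Hypothetical sketch of the ideal world: an incorruptible trusted party
# receives each party's private input over an ideal channel, computes the
# function once, and returns the output to everyone. Nothing else leaks.

def ideal_world(inputs, f):
    """The trusted party collects all private inputs, computes f,
    and hands every party the same output."""
    output = f(inputs)
    return {party: output for party in inputs}

# Example: three parties jointly compute the sum of their private inputs.
result = ideal_world({"P1": 3, "P2": 5, "P3": 4},
                     lambda ins: sum(ins.values()))
# Each party learns only the sum, not the other parties' inputs.
```

The only "attack" available to an ideal-world adversary is choosing the input it hands to the trusted party, which is exactly the point made above.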
In order to define security, what we do is compare between an execution of a real protocol and this ideal world, and we say, very informally of course, that a real-world protocol is secure if it behaves essentially like this ideal execution. Stated differently, secure computation protocols emulate an incorruptible trusted party in a world without any trust. So we can look at these secure protocols as ideal boxes that are computed by trusted parties, and then we can consider the world in that way, and that makes things very clear and understandable. In particular, we know exactly what security guarantees we're getting, because these ideal functionalities are supposed to be very simple and very easy to understand. We'll see a bit later on that that's not always the case, but at least that's how it's supposed to be, and when we have a very clear and simple ideal functionality, we know exactly what we're gaining and what the security guarantees are. A very important property of security is that of sequential composition, or modular sequential composition, and this goes back to earlier work, in particular from 2000 and even earlier. We consider a hybrid model. In the hybrid model, the parties communicate with each other as usual, sending messages, but they're also allowed to speak to an incorruptible trusted party who will compute some sub-functionalities for them. So I may be constructing some large protocol, and I may need to use commitments, zero knowledge, oblivious transfer, and other primitives; I use a trusted party to help me compute those, and then I build my protocol inside this hybrid model where I have both a trusted party and regular communication. And then I prove security of my larger protocol, with these ideal calls to the trusted party inside.
This actually facilitates a much easier analysis of the security of the protocol, and of course in the end I derive a real protocol by replacing those ideal calls with actual secure protocols that compute those sub-functionalities. So I plug in some secure oblivious transfer, some zero-knowledge protocol, and so on, and the composition guarantee is that the real protocol behaves like the hybrid one. That's something which is given to me automatically: as soon as these sub-protocols are proven secure, they behave like ideal calls, and now we can analyze the protocol in this much simpler way. There are two really big advantages here. One is simplified protocol design and analysis, which is very important: it's very hard to prove security properly, so anything that simplifies it is very important. But it's also a very basic security guarantee. We want to make sure that we can run the protocol many times, because if you can only run the protocol once, and as soon as you run it twice the security may break, it's not going to be very useful. So this guarantees security even when I run the protocol many times. The problem is that it also has a huge limitation: the security is only preserved when the protocols are run sequentially. But in the real world, protocols are run concurrently; many protocols run at the same time, different protocols designed by different people in different places around the world, and the standard definition of security does not cover that case. That's really where universal composability comes in. This definition, not new anymore, but in 2001 it was new, says that if you prove security according to it, then security is guaranteed even when arbitrary protocols are run concurrently with your secure protocol.
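The hybrid-model idea above can be sketched in a few lines. This is a hypothetical illustration of my own, with 1-out-of-2 oblivious transfer as the sub-functionality; all names are made up for the example:

```python
# Hypothetical sketch of the hybrid model: the larger protocol calls an
# ideal sub-functionality (here, oblivious transfer) as a black box. The
# composition theorem says we may later replace the ideal call with any
# secure OT sub-protocol and the real protocol behaves like the hybrid one.

def ideal_ot(sender_msgs, choice_bit):
    # Ideal OT: the receiver learns exactly one of the sender's two
    # messages; the sender learns nothing about the choice bit.
    return sender_msgs[choice_bit]

def larger_protocol(sender_msgs, choice_bit, ot=ideal_ot):
    # Designed and analyzed in the OT-hybrid model; `ot` is an ideal
    # call that can later be instantiated with a real secure protocol.
    return ot(sender_msgs, choice_bit)

chosen = larger_protocol(("m0", "m1"), 1)   # receiver obtains "m1"
```

The point of the design is that the analysis of `larger_protocol` never looks inside `ot`; swapping in a proven-secure sub-protocol is what the composition theorem licenses.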
So this idea that a secure protocol behaves like an ideal box, like a call to a trusted party, holds even in a very adversarial world with arbitrary secure and insecure protocols running together with your secure protocol. The primary definitional difference between universal composability and the standard standalone definitions is the addition of an interactive environment entity. This environment is essentially an interactive distinguisher. It chooses the inputs for the honest parties and writes to their input tapes, it reads their outputs, but it also interacts online with the adversary. The environment essentially captures the rest of what's happening in the world: all of these other possible protocols running alongside your secure protocol are essentially captured by this environment. Of course, that's just the intuition; the formal composition theorem tells you that if you prove security of your protocol with this environment involved, then you're fine to run your protocol in any sort of setting. So that's really the most fundamental contribution of universal composability: the addition of this environment to the world. The problem is that things start to get complicated. One of the reasons they start to get complicated is because over the years we've considered many, many different models for secure computation. We can talk about many different types of adversaries. Are the adversaries semi-honest or malicious? Are the corrupted parties fixed ahead of time, so static corruptions, or are they adaptively chosen throughout the computation? Maybe they're proactive, so parties can be corrupted for part of the execution and afterwards the corruption goes away. Do we want fairness in our definition of security, or guaranteed output delivery? Do we have other things in the world like common reference strings and random oracles?
Is our network synchronous or asynchronous? Do we have authenticated or unauthenticated channels? Do the parties have clocks they can use? Are we modeling only interactive protocols where parties interact with each other, or also local computation like encryption, signatures, and pseudorandom functions? So we start to look at this very complex world with many, many different options. And the big problem is that we have this fantastic composition theorem, which tells us that if you prove your protocol secure, then everything will be fine; but that holds only for whatever model is considered in the definition. So if I have a definition which holds for a specific model, then I'm in trouble when I want to go to a different model: I no longer have a composition theorem there. So one of the things that the universal composability framework does is capture all of these models in one, and much more. It's a very general framework, and you can essentially capture almost any model that you can think of. With this one composition theorem, it's already proven for all the possible models that we could possibly want to study, which is fantastic. We don't need a different proof of composition for every model, because as long as you can fit into the UC framework, you're already covered. Something additional: what happens if I'm running a protocol in a synchronous network together with a protocol in an asynchronous network? Is it okay for them to run together? If I have different composition theorems, I'm not sure; but once it's all together in a single composition theorem, then we can run protocols that are designed, and actually even executed, in different models. We can run them together and security will still be guaranteed, and that's given to us by the generality of the UC framework. We can also use modular analysis even for local computation like encryption, signatures, and pseudorandom functions.
We don't have to do direct reductions to those primitives; we can actually also look at them as ideal boxes, again facilitating a simpler proof of security. So that's the great generality, but with generality comes complexity. It doesn't come for free, and it comes at quite a price. In order to model something like a pseudorandom function or encryption, we get into trouble: the things that we normally take for granted as being simple are actually no longer simple at all. One is just polynomial time. A machine is polynomial time if there is a single polynomial that upper-bounds its running time. But if you look at the definition of a pseudorandom function, for example, the adversary gets to query the oracle for the value of the pseudorandom function on any number of points that it chooses. When you put this inside the UC framework, it's actually the honest parties who are computing the pseudorandom function for the adversary. But the order of quantifiers is that first you fix the honest parties, they're in the protocol, and then you quantify over every adversary. And that means there is no longer any polynomial upper bound on any of the honest parties, because their running time depends on the adversary. So it's not polynomial time anymore, and you have to actually change the notion of polynomial time. Also, if I'm doing a local computation like encryption, it's clear that the adversary cannot schedule the delivery of a message between me and the ideal functionality, because there is no external ideal functionality in reality; in reality, I'm just doing a local computation. In fact, the adversary can't even see that I'm doing a computation. So if we think of communication between me and the ideal functionality as something external, as we would in the classic standalone definition and in the classic way we think of interactive computation, we could no longer capture things like signatures and encryption.
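The quantifier problem above can be made concrete with a hypothetical sketch of my own (the "PRF" here is just a stand-in function, purely for illustration):

```python
# Hypothetical sketch of the order-of-quantifiers problem: the honest
# party is fixed first, but it answers however many PRF queries the
# adversary later decides to make, so no single polynomial in the
# security parameter bounds the honest party's work in advance.

def honest_party(prf, adversary_queries):
    # The honest party's running time grows with the adversary's query
    # count, which is chosen *after* the honest party is fixed.
    return [prf(q) for q in adversary_queries]

# An adversary may ask for arbitrarily many evaluations; 10,000 here,
# but nothing in the honest party's code caps it.
answers = honest_party(lambda x: 2 * x, range(10_000))
```

This is why the full UC framework has to redefine polynomial time, while SUC can keep the standard notion by bounding parties in the security parameter plus the length of their environment-supplied inputs.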
There's also this notion of dynamically generating parties, and then you have other funny things that can happen. Consider a party running in linear time, and all that it does is generate another party that runs in linear time; in the UC framework, parties can be dynamically generated, so this is allowed. Now we can have an infinite execution where every party just generates another party and stops. Each party is linear time, but the overall execution is infinite, and this is obviously not what we want. So just simple things like polynomial time become really, really complicated. You also want to model partial corruptions, and, as I mentioned, sometimes you do communicate with an external ideal functionality and sometimes you don't. And this is actually what you get. I just want a knife and a fork, but this is what I have. Doing secure computation even in a standalone world is not simple; it's much more than this. It has pliers and a screwdriver and a magnifying glass, but it's almost impossible to use. Essentially, this is what's happened with the UC framework: in order to capture that generality and enable you to get all of the benefits that I talked about, you ended up getting this. And I'm not criticizing the UC framework for it; this is the byproduct of getting all that generality, and we saw that the generality has its benefits. But now let's talk about usability. As users of the UC framework, we're sort of in trouble. It's really overwhelming. We're used to it; I'm used to it, I've been working in this for many years, and I find it overwhelming. And someone new to the field who wants to use universal composability looks at it and says, you know what, maybe I'll stay with just the standalone world; it's easier to work with. And that's actually not what we want.
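The dynamic-party problem above can be sketched in a few lines; this is a hypothetical illustration of my own, with a cap standing in for the infinite run:

```python
# Hypothetical sketch of the dynamic-party problem: each party does a
# constant amount of work, spawns one successor, and stops. Every
# individual party is trivially polynomial time, yet the total work of
# the execution is unbounded. A cap replaces the infinite run here.

def run_execution(cap):
    total_steps = 0
    alive = 1
    while alive <= cap:        # without the cap, this never terminates
        total_steps += 1       # each party's own work is O(1) ...
        alive += 1             # ... but it spawns a successor and stops
    return total_steps

# Total work scales with the number of spawned parties, not with any
# single party's running time.
```

So a per-party bound does not bound the execution, which is one reason "polynomial time" needs careful treatment in the full framework.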
We want people to be able to come along and prove security comfortably in this world, with concurrent scheduling and the more adversarial, realistic setting that there is. There's been a lot of work over the last 15 years refining the universal composability framework: different notions of polynomial time, and how to formalize different things in order to improve them. But in this work, we take a completely different approach. We don't want to come up with a different definition of polynomial time. We want to say: you know what, I want to work in the standard interactive protocol setting. I don't want to model pseudorandom functions and encryption; I don't want to model all these different types of strange settings that are not the standard ones. I just want to do, say, a standard secure AES computation or secure online poker; I don't want this very complex setting. Can I get a simpler definition by looking at a more restricted goal? And actually, the vast majority of papers that use universal composability, or secure computation in general, especially papers looking at efficient constructions of protocols, almost all use this specific setting: authenticated asynchronous channels, asynchronous because the adversary is in charge of the scheduling, and a fixed number of participating parties, where the number of parties in the protocol doesn't depend on the security parameter or on the input. We know that we're playing poker with five people. We know we're doing a secure AES computation between two parties, and that's it. And we don't look at fairness. This is the model that we're all used to. We don't need the full generality of UC; we need a very restricted setting of it.
And that's what this simpler variant of UC gives you: it models only a subset of the possible settings. So it's only multi-party computation with a fixed number of parties, with authenticated channels, and there's no fairness. The main design principle behind the definition was to come up with something which would be as close to the standalone definitions as possible. So if you studied the standalone definitions in an advanced crypto course, the extra step to universal composability is small. It's not trivial: there is this additional online adversarial environment, but that's almost all you have to learn in addition. The main properties are these. There's a fixed communication pattern: the way the parties communicate with each other is the same in all of the models, the real, hybrid, and ideal models. Everything goes through a router, and the adversary schedules the delivery of messages from that router, whether you're talking between honest parties or with the functionality; it all goes through the same communication pattern. There's a very simple notion of polynomial time: parties are simply polynomial in the security parameter plus the length of all the inputs they get from the environment. That's it, the standard notion of polynomial time. And the order of activations, of who goes when, is very simple: adversary, party, adversary, party, adversary, party. There are no complicated additional possibilities of parties activating each other, back to the environment, back to the adversary; it's a very fixed set of steps. So it's easy to follow, easy to remember, and easy to analyze. The case analysis that happens in a proof of security is much, much simpler, because everything happens in a very fixed and specific way, much closer to what happens in the standalone setting. So one thing you may be thinking is: okay, nice, you gave us now another definition. We had UC. There's JUC. There's GUC. There's GNUC.
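The router-based communication pattern just described can be sketched as follows; this is a hypothetical illustration of my own, and the class and method names are made up, not SUC's formal notation:

```python
# Hypothetical sketch of the SUC communication pattern: every message,
# whether between honest parties or between a party and an ideal
# functionality, goes through a router, and the adversary alone chooses
# the order in which pending messages are delivered.

class Router:
    def __init__(self):
        self.pending = []                     # messages awaiting delivery

    def send(self, sender, receiver, msg):
        self.pending.append((sender, receiver, msg))

    def adversary_deliver(self, index):
        # Only the adversary invokes this, picking which pending
        # message is delivered next: adversarial scheduling.
        return self.pending.pop(index)

router = Router()
router.send("P1", "P2", "hello")
router.send("P2", "Fcom", "commit")
first = router.adversary_deliver(1)   # adversary delivers out of order
```

Because party-to-party and party-to-functionality messages follow the identical pattern, the case analysis in a proof stays uniform, which is the simplification being claimed.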
And now you're giving us another definition, and there's also a whole lot of other types of definitions out there. That's exactly what's going on here, right? There are 14 definitions; we need a new one now to combine them all; now we have 15 definitions. So no, that's actually exactly what we're not doing. And this is the most important thing I'm going to say in the entire talk: we do not want a new definition. There are actually other simplified versions of UC floating out there; that's not what we're doing. We're not trying to come up with another definition which is different from UC. Rather, what we've proven, and this is the most important part of the paper, is that if you work in this simpler variant of UC and you prove security there, then you get full UC security for free. So we're looking at a subset, at a restricted case, of standard multi-party computation tasks, and if you prove security according to SUC, then your protocol is automatically secure also in the full UC model. This is nice because now you can work in this simpler variant and get full UC interoperability: if you're running together with protocols that are in these other, more complex models that we're not considering, security is still guaranteed, because we have this equivalence for these protocols. I just want to mention that there is some transformation of the functionality that you have to make, because of the generality of UC, but the protocol is still secure in exactly the sense that you would expect. You only work in this much simpler variant, and you automatically get full UC security. The good thing is you can also write a paper and claim UC security; you don't have to say in your title that this is SUC-secure. In fact, the fact that you're using SUC can appear just in the first line of your proof of security.
You write your introduction, you write your protocol, your motivation, you write your theorem; you get to the proof, you say we prove this secure in the SUC framework and derive full UC security via the theorem in that paper, and that's it. So now you can work simply, without all of the hassles, without all the complexities, but you're still getting the full guarantees that you want, and you're not working in some other definition. That's good, because UC has become the gold standard: typically, if we want a proof of security in the concurrent world, we want UC security. So you can use this definition, and you still have UC security. It's not SUC security; it's UC security, and that's really the most important contribution of this paper. Simple, but the same. Now, with the time that I have left, I want to demonstrate why it's simpler to work in the SUC framework. Actually, the most important reason really has to do with polynomial time and the way it's modeled in the UC framework, but in order to explain that I'd have to go into many, many details about how the UC framework is defined, and I'd rather not do that; I assume that many of you would rather I didn't do that either. So, I mentioned earlier that one of the advantages of this ideal/real model paradigm is that ideal functionalities are simple: we can look at them and know exactly what we're getting. This is the way we define a secure commitment in an ideal world. There's an ideal functionality FCOM. The committing party sends a commit message to the ideal functionality. The ideal functionality records that message and sends a receipt to the receiver. The receipt just says: here's an identifier, and you should know that a value has been recorded, but I don't tell you anything about the value. Then, when the committer sends an open message with that identifier, the trusted party retrieves the original value from storage and sends it on.
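The FCOM walkthrough above can be sketched in a few lines. This is a hypothetical illustration of my own; the method and message names are made up and are not the formal UC notation:

```python
# Hypothetical sketch of the ideal commitment functionality FCOM:
# commit records the value and hands the receiver only a receipt with an
# identifier; open releases the stored value.

class Fcom:
    def __init__(self):
        self.stored = {}                        # cid -> committed value

    def commit(self, cid, value):
        self.stored[cid] = value                # record the value
        return ("receipt", cid)                 # receiver learns only cid

    def open(self, cid):
        # Binding: the value comes from the functionality's own storage,
        # so the committer cannot open to a different value.
        return ("open", cid, self.stored[cid])

fcom = Fcom()
receipt = fcom.commit("c1", 42)   # receiver sees ("receipt", "c1"), not 42
opened = fcom.open("c1")          # receiver now learns the value 42
```

Hiding falls out of the fact that `commit` returns only the identifier, and binding from the fact that `open` reads the functionality's own storage.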
So binding is clear, because the value was stored by the ideal functionality, and hiding is information-theoretic in this ideal world, because the receiver learns nothing beyond the receipt until the opening happens. So this is very simple, and it's very easy to analyze the security of protocols given in a hybrid model with this commitment functionality. Everything is very nice and we're all happy. But if you look at full UC commitments, then you suddenly see all these other complexities that are more difficult. Firstly, when you send this commit message, the functionality doesn't process it immediately. Rather, it generates something called a public delayed output. A public delayed output is a way of saying that the functionality gives this message to the adversary to decide when it will be delivered. In the simple UC variant, you're interacting externally: you send the message to a router and the adversary decides when to deliver it, so this is built in. That's actually the way things always happen in a world where the adversary schedules messages. But because the UC framework has to model local computation, it's not automatic that the adversary can schedule, so you have to explicitly tell the commitment functionality to allow scheduling. Likewise with the opening. But when you get to the third line, it becomes even stranger, because now for some reason the functionality is processing corruption messages. What does corruption have to do with the functionality? Well, again, this has to do with the generality of the UC framework, and it's already very far from what we're used to in the standalone model. The previous commitment functionality is really what we'd expect and what we'd like.
And essentially that's what you're getting, because again, it's equivalent when we look at the full UC framework, but you only need to work with this simpler, more concise definition. So, simple UC versus UC: polynomial time is simple in simple UC, but very complex in the full framework. There are funny things happening, like having to pad the input length, and the functionality definition can actually depend on how you implement the protocol, because of the way polynomial time is defined, which makes things very complex. Corruption is simple in SUC: it's exactly like in the standalone model. The order of activation is fixed; it's really the way that people work anyway, and I'll talk about that a little bit in the last slide. It helps a lot, both with writing protocols and with proving security, because things are as we would expect. So, in summary, we introduce a simpler variant of universal composability. We call it SUC, or simpler UC. It adds an environment, as it has to, and formalizes polynomial time, but in a natural way. It has a simple order of activations and communication pattern, and it's less difficult to work with, both for seasoned researchers who have been working in secure computation for 15 years, and for people who want to get into the field and want to prove security under concurrency but are overwhelmed by all the possibilities given to them. Very importantly, we prove equivalence: we prove that for the subset of protocols that can be modeled in the SUC framework, it's fully equivalent to full UC, and this gives the properties that I mentioned beforehand. Now, if you look around at papers with proofs of security in UC, almost no one, actually we didn't succeed in finding any paper which gives a complete, full proof of UC security, because it's just too complicated.
So our work in some sense justifies this; not by saying it's okay to hand-wave, but by saying that, yes, you can work in a simpler model, in the way that you think conceptually, and you can also do it rigorously. Now you don't have to hand-wave: your proof can be a full, rigorous proof of security, and you can get it in a simpler model by just referring to the proof of equivalence in the SUC paper. You can continue working in the way that you want to work, and still end up in the UC world. Thank you very much.