Welcome to the talk about the paper "Environmentally Friendly Composable Multi-Party Computation in the Plain Model from Standard (Timed) Assumptions". This is joint work with Brandon Broadnax and Jörn Müller-Quade. Secure multi-party computation, abbreviated MPC, allows mutually distrusting parties P1 to Pn to jointly perform tasks like secure function evaluation, where the parties evaluate a function on their private inputs. Examples include a first-price sealed-bid auction where only the highest bid and the winner are revealed, private set intersection where nothing but the intersection is learned, or COVID contact tracing. More generally, MPC can be used to perform arbitrary computations in a secure way. For MPC protocols, we usually require correctness and secrecy in the sense of input privacy, which means that taking part in MPC should not allow a corrupted party to learn anything about other parties' inputs that it cannot compute from its own input and its own output. We also want independence of inputs, which means that a corrupted party must not be able to choose its input in a way that is related to the input of an honest party. For example, we do not want a corrupted party taking part in an auction to create a bid that is one euro more than an honest party's bid, even if the corrupted party does not know what the honest party has bid. A very popular security notion for MPC is universally composable (UC) security, introduced by Canetti, which captures the setting where multiple protocols are executed concurrently, just like in real life. What is nice about UC security is that we do not explicitly have to consider all other protocols possibly executed concurrently in the security proof. Instead, it is sufficient to consider an execution with an environment Z that mimics the outside world, including other possibly running protocols.
In order to argue about the security of a protocol Pi, we compare its behavior to an ideal functionality, denoted by F, which is incorruptible and performs the desired task by definition. If the execution of the real protocol Pi and the ideal functionality F are indistinguishable, then nothing bad can happen in the real execution, as nothing bad can happen in the ideal execution by definition. To this end, we have to prove the existence of a simulator. The simulator lives in the ideal world and simulates the execution of the real protocol. In particular, the simulation must succeed without knowledge of the secrets of honest parties. Informally, we say that a protocol Pi UC-emulates or UC-realizes a functionality F if for all adversaries A there exists a simulator S such that for all environments Z, the output of Z in an execution with Pi and A is indistinguishable from its output in an execution with F and S. UC security is closed under composition, meaning that a protocol Pi is as secure as F even if it is executed multiple times or executed concurrently alongside other arbitrary protocols. Unfortunately, UC security comes with a major drawback. In the plain model, a setting where only authenticated communication and arbitrary complexity assumptions are available, it is impossible to construct a UC-secure commitment scheme. This was recognized by Canetti and Fischlin. The intuition behind this impossibility is as follows. If the committer is corrupted, the simulator must be able to extract the value committed to. Informally, this means that it must break the hiding property of the commitment scheme. At the same time, a corrupted receiver must not be able to break the hiding property of an honest commitment. If the receiver is corrupted, the simulator must produce a commitment which it can open to an arbitrary value, contradicting the binding property.
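Written out, the UC emulation condition just described has the following quantifier structure (standard notation, not copied from the paper; EXEC denotes the distribution of the environment's output):

```latex
\pi \text{ UC-emulates } \mathcal{F} \iff
\forall\, \mathrm{PPT}\ \mathcal{A}\ \exists\, \mathrm{PPT}\ \mathcal{S}\ \forall\, \mathrm{PPT}\ \mathcal{Z}:\;
\mathrm{EXEC}_{\pi,\mathcal{A},\mathcal{Z}} \approx \mathrm{EXEC}_{\mathcal{F},\mathcal{S},\mathcal{Z}}
```

Note the quantifier order: the simulator S may depend on the adversary A, but must work for all environments Z.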
Yet, a corrupted committer must not be able to break the binding property. Canetti and Fischlin have shown that achieving this is not possible in the plain model. Instead, it is necessary to give the simulator some advantage over the adversary. With UC security, this can be done by using a setup like a common reference string, a public-key infrastructure, a random oracle or some tamper-proof hardware token. As the simulator controls the setup, it can embed a trapdoor which helps its simulation, but does not help corrupted parties. In reality, trusted setups are often hard to come by. If we had some trusted party to provide such a setup, we could perhaps use it for our MPC in the first place. As the reliance on a setup may be undesired, there is a long list of security notions and protocols featuring composable MPC in the plain model, which means that they only assume authenticated communication and some complexity assumptions. They all have different properties and rely on different assumptions. Common to all approaches listed above is that the simulation is achieved by giving the simulator a complexity advantage over the environment. As a consequence, the simulation is not efficient and cannot be carried out in uniform polynomial time. Due to the impossibility results, we cannot hope that any of these notions is as good as UC security. Indeed, there is a price to pay. In the following, our goal is to enable composable MPC in the plain model from assumptions that have not been used for composable MPC before, which allows us to achieve properties that go beyond the state of the art. First of all, let's have a closer look at the drawbacks of previous solutions. A very important problem is limited environmental friendliness, a notion introduced by Canetti, Lin, and Pass, which I will motivate in the following. As environmental friendliness is a rather complicated notion, I will only be able to give you a rough idea.
The general setting we consider is the execution of some MPC protocol in a UC-like experiment, as can be seen on the right. Alongside this MPC protocol, in the environment, so to say, runs another protocol which has some game-based property, like CCA security of a commitment scheme. You can see the CCA security game on the left. The notion of environmental friendliness deals with the effect of the MPC execution, in particular of the simulation, on security properties of other protocols. In the CCA security game, where CCA is short for chosen commitment attack, an adversary A' interacts with a challenger C. Like in the normal hiding game for commitments, the adversary first chooses two messages, M0 and M1. Here, the adversary also sends some tag. In the second step, the challenger samples a uniformly random bit b and creates a commitment to Mb using the tag provided by the adversary. The adversary must then decide if this is a commitment to M0 or M1, and wins if it guesses correctly. During the whole execution, the adversary may interact with a CCA oracle O. When the adversary sends a commitment to its oracle, the oracle extracts the commitment and sends the value committed to to the adversary. With a CCA-secure commitment scheme, access to this oracle does not help the adversary. Of course, the adversary may not simply send the challenge commitment to the oracle. Now let's consider the following. When receiving the challenge commitment C, the adversary somehow introduces this commitment C into the protocol execution on the right, for example in the name of some party participating in the protocol Pi. We assume that Pi is such that this commitment is never opened. We also assume that some other protocol party of Pi will create a commitment to 0, using a different tag, tag'. The adversary can take this commitment C' and forward it to its CCA oracle. It will then learn that this is a commitment to 0.
Of course, this will not help the adversary to win the CCA game. Let's move to the ideal execution. Again, the adversary forwards the challenge commitment C to the protocol execution. Now, the simulator runs in super-polynomial time and is able to extract the commitment provided by the adversary. In order to set up some simulation trapdoor, it creates a commitment C' to the same value as in the commitment C, while again using a different tag, tag'. As this commitment is also never opened and hiding against polynomial-time adversaries, a polynomial-time environment will not be able to see any difference. However, our adversary A' can take this commitment C' and send it to the CCA oracle. As the tag is different from the challenge tag, this is allowed. The adversary will then learn that the commitment C' is to Mb, allowing it to win the CCA game with probability one. This example illustrates that a super-polynomial simulation can negatively affect the security of other protocols that are executed concurrently. As a super-polynomial simulation is the common trick to achieve composability, most notions for composable MPC in the plain model have limited environmental friendliness. Remember that in UC security, the simulation is efficient. UC security thus is friendly to all protocols executed alongside and fulfills the notion of environmental friendliness. Our goal will be to also achieve environmental friendliness, but without using a setup like in UC security. However, limited environmental friendliness is not the only problem of non-polynomial-time simulation. If the simulator is given super-polynomial non-uniform advice, this advice may contain secrets of protocols that have started previously. This is true even if these protocols are secure against non-uniform adversaries, as in the case of UC security. As such, non-uniform simulation can also negatively affect the security of other protocols, which is undesirable.
Our goal is a notion that can be achieved through uniform simulation. Yet another problem of previous approaches is limited UC reusability. Suppose that we construct a composable commitment scheme that is SPS-secure. Due to the super-polynomial simulation, we cannot plug this commitment scheme into any UC-secure MPC protocol rho in the hybrid model, because the simulation can negatively affect the complexity assumptions made inside the MPC protocol rho. The reuse of UC protocols is thus limited and special care has to be taken. Our goal is a notion with full UC reusability. As UC security is closed under composition, it suffices to prove the security of one protocol instance. It then follows that multiple instances retain their security when executed concurrently. For other security notions, this may not be the case. For example, SPS security only has a single-instance composition theorem. So when we construct, for example, an SPS-secure commitment scheme, we have to manually prove that multiple instances remain secure when executed concurrently. Fortunately, other notions such as Angel-based security are closed under composition, making this unnecessary. However, those still suffer from limited UC reusability and limited environmental friendliness. So what we actually want is a security notion where there is no complexity asymmetry between environment and simulator, in order to have universal composability. At the same time, there has to be some advantage for the simulator; otherwise, we face the same impossibility results as for UC security in the plain model. And for UC reusability and friendliness, we want the simulation to run in uniform polynomial time. All of this is, of course, fulfilled by UC security. But in the plain model, we arrive at a contradiction. Again, looking back at the previously mentioned notions of composable MPC in the plain model, they all had non-polynomial-time simulation strategies.
Also, the asymmetry between simulator and environment held throughout the whole execution and was very large, which led to a number of problems. In this paper, we try something completely different. First of all, we consider a setting where the simulator's advantage is very small and comprises a polynomial number of computation steps only. What is more, the simulator's advantage is only temporary. This means that environment and adversary will be able to catch up. Again, this is totally different from what has been done before, but will lead to a number of advantages, as we will see soon. First of all, the obvious consequence of this change is that the simulator is no longer able to break polynomial-time primitives, not even indirectly. This is, of course, good for environmental friendliness, for friendliness to previously started protocols and for UC reusability. At the same time, we need primitives that can be broken by the simulator. To this end, we will take a look at timed primitives, in particular timed commitment schemes. In our setting, the environment will eventually be able to break timed assumptions too, which thus can be used only as a short-term property for the simulator. However, as we will see, this short-term property is enough to establish a long-term property that does not rely on timed assumptions. This is something new. In previous frameworks, the complexity asymmetry between simulator and environment was permanent, and one was not faced with the challenge that the simulation property will become available to the environment at some point during the execution. Our first result is a composable commitment scheme in the plain model. It is fully environmentally friendly, constant-round, makes black-box use of its building blocks only and can be constructed from standard and standard timed hardness assumptions. Also, the simulation is uniform. Using this commitment scheme, we construct a composable general MPC protocol in the plain model with the same properties.
To the best of our knowledge, we are the first to achieve these properties together. Last but not least, we also propose a variant of UC security to analyze the security of our constructions. In contrast to UC security, it allows the use of timed primitives like timed commitment schemes. The notion is called TLUC, which is short for timed UC. As TLUC is mathematically a special case of UC security, no changes to the framework are required. TLUC security is fully environmentally friendly, fully UC-compatible, features full UC reusability and a single-instance composition theorem. Now, I will present the construction of our composable commitment scheme. First, let us revisit a variant of the commitment scheme due to Broadnax et al., which is based on the commitment scheme of Damgård and Scafuro. The idea behind this commitment scheme is to compile an extractable commitment scheme into a commitment scheme that is both extractable and equivocal. This transformation is black-box and constant-round. In the first round, the committer, which has input b, creates kappa pairs of secret shares of its input and commits to these shares, where kappa is the security parameter. In the second round, the receiver samples a uniformly random index vector i of length kappa and commits to i. In the unveil phase, the committer first sends all of its shares but does not unveil the commitments yet, which is crucial, as we will see later on. In the next step, the receiver unveils the commitment to its index vector i. Only then will the committer open commitments, but only those indicated by i. So if the first bit of i is 0, it will open the commitment to the first share of the first share pair. If the second bit of i is 1, it will open the commitment to the second share of the second share pair, and so on. The receiver accepts if the unveiled values are consistent with the shares sent in the first round of the unveil phase. It is easy to see that this commitment scheme is extractable.
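To make the commit and unveil phases just described concrete, here is a toy Python sketch. The hash-based commitments and the tiny parameter KAPPA are stand-ins of my own; the paper uses an extractable commitment scheme here, which a plain hash commitment is not.

```python
import os, hashlib, secrets

KAPPA = 8  # toy value; in the paper, kappa is the security parameter

def commit(value: bytes):
    """Toy hash-based commitment (stand-in for the extractable scheme)."""
    nonce = os.urandom(16)
    return hashlib.sha256(nonce + value).digest(), nonce

def check(com: bytes, value: bytes, nonce: bytes) -> bool:
    return hashlib.sha256(nonce + value).digest() == com

# --- Commit phase ---
# Round 1: kappa pairs of XOR shares of the input bit b, each committed
b = 1
pairs = [(s0, s0 ^ b) for s0 in (secrets.randbits(1) for _ in range(KAPPA))]
coms = [(commit(bytes([s0])), commit(bytes([s1]))) for s0, s1 in pairs]
# Round 2: the receiver commits to a random index vector i
i_vec = [secrets.randbits(1) for _ in range(KAPPA)]

# --- Unveil phase ---
# 1. The committer sends all shares (the commitments stay closed for now)
shares_sent = pairs
# 2. The receiver opens i; 3. the committer opens only the commitments
#    selected by i, and the receiver checks consistency
ok = True
for j in range(KAPPA):
    s = shares_sent[j][i_vec[j]]
    com_j, nonce_j = coms[j][i_vec[j]]
    ok &= check(com_j, bytes([s]), nonce_j)  # opened value matches sent share
recon = {s0 ^ s1 for s0, s1 in shares_sent}
ok &= len(recon) == 1                        # all pairs encode the same bit
print(ok, recon == {b})                      # True True for an honest run
```

The point of sending all shares before the receiver opens i is exactly what makes equivocation possible later: the unopened half of each pair can still be lied about.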
The idea is to extract the share commitments sent in the first round. As the commitment to the index vector i is hiding, a corrupted committer will not be able to send shares of a different message in the first message of the unveil phase, except with negligible probability. Conversely, if the receiver is corrupted, the simulator can create an equivocal commitment as follows. First, the committer commits to an arbitrary value, say 0. When it learns the value it wants to unveil the commitment to, at the onset of the unveil phase, it sends shares in the first round of the unveil phase as follows. Shares that are in commitments that will eventually be opened are sent unmodified. The other shares, whose commitments will never be opened, can be changed such that each pair reconstructs to the desired value. This is possible if the committer is able to extract the commitment to i, as i determines which commitments will be opened and which will not. In our setting, we will not be able to use this strategy, as it requires commitment schemes that are straight-line extractable by the simulator but remain hiding otherwise. As our simulator is polynomially bounded, such commitment schemes cannot be constructed without setup. Instead, we will try the next best thing, which are timed commitment schemes. Very informally, a timed commitment scheme is like a normal commitment scheme, but the hiding property is only guaranteed against adversaries whose runtime is bounded by some value t. This alone is not useful because t may be very large, resulting, for example, in a commitment scheme that is hiding against all polynomial-time adversaries. We thus require that there is also an extractor algorithm with runtime at most T. If T is polynomial, such commitments can be extracted in polynomial time by our simulator. Fortunately, timed commitments can be constructed from a number of assumptions, such as the generalized Blum-Blum-Shub assumption, as demonstrated by Boneh and Naor.
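Very roughly, the interplay between the hiding bound t and the extractor runtime T can be pictured with a repeated-squaring toy. This is my own illustration in the spirit of time-lock puzzles, not the actual Boneh-Naor construction, and the tiny modulus makes it completely insecure:

```python
import secrets, hashlib

# Toy parameters: a real timed commitment uses a large, properly chosen modulus
P, Q = 1000003, 1000033
N = P * Q

def slow_square(g: int, t: int) -> int:
    """t sequential squarings mod N -- the intended 'time' cost."""
    y = g
    for _ in range(t):
        y = y * y % N
    return y

def timed_commit(bit: int, t: int):
    """Mask the bit with a value that takes ~t sequential steps to recompute.
    (In Boneh-Naor, the committer has a fast shortcut via the factorization.)"""
    g = secrets.randbelow(N - 2) + 2
    y = slow_square(g, t)
    mask = hashlib.sha256(str(y).encode()).digest()[0] & 1
    return (g, t, bit ^ mask)

def extract(com) -> int:
    """Forced opening: recompute y in ~t squarings (the extractor's runtime T)."""
    g, t, masked = com
    y = slow_square(g, t)
    mask = hashlib.sha256(str(y).encode()).digest()[0] & 1
    return masked ^ mask

com = timed_commit(1, t=1000)
print(extract(com))  # recovers the committed bit: 1
```

An adversary performing fewer than t squarings learns nothing about the bit in this picture, while an extractor willing to spend T = t steps always succeeds; that is exactly the gap the simulator will exploit.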
Constructions from weaker assumptions, such as sequential functions, are also possible. However, the resulting protocols do not have an efficient commit phase. When talking about timed commitments in the setting of composable security, a relevant result is the TARDIS framework by Baum et al. TARDIS is an extension of the generalized UC framework with a notion of abstract time. This allows to define ticked functionalities for building blocks such as universally composable time-lock puzzles. These composable time-lock puzzles can then be used, for example, to achieve otherwise impossible standard properties. However, it can be shown that random oracles are required for such composable time-lock puzzles. So the goals of TARDIS and our goals are completely different. We use standard timed commitments to construct a non-timed composable commitment scheme in the plain model. In particular, we do not model composable time-lock puzzles. Getting back to the commitment scheme of Broadnax et al., let's see what happens when we replace the extractable commitment schemes used there with timed commitment schemes. If we use the timed commitment scheme to commit to the shares, we essentially obtain a timed commitment scheme, so we do not need this construction. If we replace both commitment schemes with timed commitment schemes, we get a protocol that is not even a timed commitment scheme. This is because we have lost the binding property and lost the extractability required by the definition of timed commitments. As a last try, we can replace only the commitment scheme for the commitment to the index vector i with a timed commitment scheme. The resulting protocol is a commitment scheme that is hiding, because the commitment scheme for the shares is hiding. If an honest receiver only accepts the first message of the unveil phase while the hiding property of the timed commitment to i still holds, then we can show that this commitment scheme is binding.
Looking ahead, we will allow parties to set up timers and check their expiry. With this capability, the abort condition for the case that the shares are sent too late can be part of the protocol description. A polynomial-time simulator that can break timed commitment schemes instantaneously can then extract the timed commitment to i before sending the shares in the first round of the unveil phase and thus equivocate the commitment. We will make sure that this extraction is only possible for the simulator, but not for the environment or the adversary. We have seen that we cannot just adapt the commitment scheme of Broadnax et al. to our setting by replacing the extractable commitment schemes with timed commitment schemes to obtain a simulatable commitment. However, the variant where we only replace the commitment to the index vector i is useful, as it suffices to construct a coin toss that is semi-simulatable. In our simulation, this index vector i serves as the short-term trapdoor, and we will use the semi-simulatable coin toss to establish a long-term trapdoor. Intuitively, a semi-simulatable coin toss is a coin toss that is not fully simulatable, but only simulatable if one designated party is corrupted. If we use the commitment scheme we have talked about previously, the resulting coin toss is simulatable if the receiver is corrupted and the simulator plays the committer sending the first message. In the other case, namely if the committer is corrupted and the receiver is honest, we can show that the result is not simulatable, but essentially uniformly random. If we had a fully simulatable coin toss in the plain model, we could use it to replace the uniformly random common reference string (CRS) of a UC-secure protocol. The resulting protocol would then also be in the plain model, which is what we ultimately want to achieve. If the coin toss is only semi-simulatable, the CRS needs to have additional properties if we want to replace it with our coin-toss protocol.
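The coin toss built from such a commitment follows the classical commit-then-reveal blueprint. A minimal sketch, again with a hash-based commitment as a placeholder for the actual scheme:

```python
import os, hashlib

def commit(value: bytes):
    """Toy hash-based commitment standing in for the real scheme."""
    nonce = os.urandom(16)
    return hashlib.sha256(nonce + value).digest(), nonce

def verify(com: bytes, value: bytes, nonce: bytes) -> bool:
    return hashlib.sha256(nonce + value).digest() == com

# Round 1: the committer commits to its contribution r1
r1 = os.urandom(4)
com, nonce = commit(r1)
# Round 2: the receiver replies with its contribution r2 in the clear
r2 = os.urandom(4)
# Round 3: the committer opens; the result is r1 XOR r2
assert verify(com, r1, nonce)
coin = bytes(a ^ b for a, b in zip(r1, r2))
# If the simulator can equivocate the commitment (corrupted receiver),
# it can reopen to r1' = target XOR r2 and force any outcome -- this is
# the "simulatable" direction of the semi-simulatable coin toss.
```

In the other direction, with a corrupted committer that cannot break the (timed) commitment in time, the honest receiver's r2 keeps the result essentially uniformly random, matching the semi-simulatability described above.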
In a two-party protocol, we need a trapdoor for each party. As such, we must be able to split the CRS into two parts, one for each trapdoor. We can then ensure that the part for the corrupted party is simulatable. As we can only guarantee that the other part of the CRS is uniformly random, the simulation may not rely on the trapdoor for this party to exist. Now we have to find a UC-secure protocol with these properties. As it turns out, the very first universally composable commitment scheme, called UCCOneTime and due to Canetti and Fischlin, has this property. First of all, the scheme assumes the existence of trapdoor pseudorandom generators that map random bit strings of length kappa to pseudorandom bit strings of length 4 kappa. A trapdoor PRG is like a PRG, but has a trapdoor which allows to efficiently recognize the image of the PRG. The CRS of the commitment scheme consists of two public keys PK0 and PK1 of the PRG, as well as a uniformly random bit string sigma of length 4 kappa. If trapdoor one-way permutations with dense public description exist, then we can construct a trapdoor PRG with uniformly random keys, resulting in a CRS that looks like a uniformly random string. In order to commit to some bit b, the committer evaluates the PRG with the public key associated with b on a uniformly random string r. If b is one, the pseudorandom value is masked with the string sigma; otherwise, it is left unmodified and sent to the receiver. In the unveil phase, the committer sends b and the randomness r used in the commit phase. The receiver accepts if the commitment c, the bit b and the randomness are consistent. Now let's take a look at the simulator. In the simulation, the simulator chooses the public keys PK0 and PK1 such that it knows the corresponding secret keys SK0 and SK1. This will later allow extraction.
The second part sigma of the CRS, which is a uniformly random value, is replaced by the exclusive OR of the PRG evaluated on public key PK0 and a random string r0, and the PRG evaluated on public key PK1 and a random string r1. If the committer is corrupted, the simulator receives the commitment c. Using SK0, it checks if c is in the range of the PRG used with public key PK0. If this is the case, it sends 0 to the commitment functionality. Otherwise, it sends 1 to the commitment functionality. In the unveil phase, the simulator receives b and r from the committer and performs the same consistency check as the receiver would. If this check succeeds and the extraction was successful, it allows the output of the commitment functionality. As we see, the simulator only uses the extraction trapdoor. Now, if the receiver is corrupted, the simulator is notified by the commitment functionality that a commitment has happened, but not of the value. It thus creates a dummy commitment c as the PRG evaluated on PK0 and r0, where r0 is the same r0 used in the CRS. Then, it allows the output of the functionality. When the simulator learns the bit b in the unveil phase, it sends b and r_b and allows the output. It is easy to see that the receiver will accept the commitment sent in the unveil phase, because the CRS has been prepared appropriately. Again, we see that only the equivocation trapdoor was needed. To summarize, if trapdoor one-way permutations with dense public description exist, then we can instantiate the commitment scheme UCCOneTime with a uniformly random CRS. This CRS can then be replaced by two semi-simulatable coin tosses. If the committer is corrupted, the extraction CRS will be simulatable and the equivocation CRS will be uniformly random. If the receiver is corrupted, the equivocation CRS is simulatable and the extraction CRS uniformly random. We have thus used a short-term trapdoor to get an equivocal commitment scheme and used it to establish a long-term trapdoor.
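The equivocation trapdoor can be seen concretely in a small sketch. I use SHAKE-256 as a generic PRG stand-in of my own choosing; a real instantiation needs a trapdoor PRG, and the extraction side (the range check via SK0) is omitted here because it requires that actual trapdoor:

```python
import hashlib, os

KAPPA = 16  # toy parameter; outputs are 4*KAPPA bytes here instead of bits

def prg(pk: bytes, seed: bytes) -> bytes:
    """Stand-in for the trapdoor PRG G_pk: {0,1}^kappa -> {0,1}^(4 kappa)."""
    return hashlib.shake_256(pk + seed).digest(4 * KAPPA)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pk0, pk1 = b"pk0", b"pk1"  # in the simulation: keys with known trapdoors

# Simulated CRS: sigma is NOT uniform but the XOR of two PRG images
r0, r1 = os.urandom(KAPPA), os.urandom(KAPPA)
sigma = xor(prg(pk0, r0), prg(pk1, r1))

# Honest commit/verify: c = G_{pk_b}(r), masked with sigma iff b = 1
def com(b: int, r: bytes) -> bytes:
    y = prg(pk1 if b else pk0, r)
    return xor(y, sigma) if b else y

def verify(c: bytes, b: int, r: bytes) -> bool:
    return c == com(b, r)

# The simulator's dummy commitment opens to BOTH bits:
c = prg(pk0, r0)
print(verify(c, 0, r0), verify(c, 1, r1))  # True True
```

The last line works because c = G_{pk0}(r0) and, by construction of sigma, also c = sigma XOR G_{pk1}(r1); under a uniformly random sigma, no such double opening exists except with negligible probability.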
All in all, we get a commitment scheme that is simulatable, meaning it is extractable and equivocal, and is in the plain model. Semi-simulatable coin tosses have been previously used by Dachman-Soled et al. to obtain adaptively secure composable MPC from UC puzzles. While we focus on static corruptions and polynomial-time simulation from timed assumptions, Dachman-Soled et al. consider adaptive corruptions and general constructions from UC puzzles. Their construction is also non-constant-round and non-black-box. Unfortunately, we are not done yet, because our commitment scheme may not be composable. To this end, consider the following man-in-the-middle adversary that takes part in two instances of our commitment scheme. In the first instance, the committer is honest and the adversary plays the receiver. In the second instance, the adversary plays the committer while the receiver is honest. What the adversary can do now is relay messages between the instances. It can take the first commitment sent by P1 for the first coin toss and use it as its own commitment in the execution with P3, and so on, throughout the whole protocol. Using this strategy, the CRS will be the same in both instances, leading to a commitment that is malleable. In order to prevent this attack, we need a commitment scheme for the coin tosses that is non-malleable, even if the adversary receives equivocated commitments. This property is called simulation soundness. In our setting, we only require the simulation soundness to hold for adversaries that cannot break the timed commitment scheme in time. As such, we consider timed simulation soundness only, which is a similar relaxation as the timed hiding property is to the normal hiding property. Looking back at our commitment scheme, it suffices if the commitment scheme Com is parallel CCA-secure and extractable.
Informally, parallel CCA security is like normal CCA security for commitments, but the adversary has limited access to the CCA oracle. In this case, the adversary may perform a polynomial number of commitments with the CCA oracle in parallel, which are all unveiled at the same time. In contrast to normal CCA security, this weaker notion allows for more efficient constructions. With respect to the timed commitment scheme, we do not need to assume any non-malleability properties. Again, in the commitment scheme constructed here, it is important that the receiver only accepts the first round of the unveil phase if the timed commitment to i is still considered to be hiding. Let me summarize. We have constructed a composable commitment scheme in the plain model from timed commitment schemes, parallel CCA-secure commitment schemes and trapdoor one-way permutations with dense public description. The construction makes black-box use of its building blocks only, is constant-round, exhibits a uniform polynomial-time simulation and is fully environmentally friendly. With respect to possible building blocks, we can take the construction of Boneh and Naor for timed commitment schemes from the generalized Blum-Blum-Shub assumption. Black-box parallel CCA-secure commitment schemes can be constructed from DDH using the scheme of Goyal et al. In UC security, protocol parties are not aware of the computation steps performed by other entities. As a consequence, honest parties cannot use, for example, timed commitments, because they do not know when the security expires. With TLUC security, we thus propose a variant of UC security that is suitable for timed primitives like timed commitment schemes. In TLUC, honest parties can set up timers by sending a message timer together with an ID and a timeout value denoting after how many computation steps the timer expires. To check for the expiry of a timer, protocol parties can send a message notify together with the timer ID.
In the reply, a bit b indicates if the timer has expired or not. Depending on this answer, parties can then, for example, abort a protocol execution because a security guarantee is not met anymore. In more detail, we can amend our composable commitment scheme such that the timed commitment to the index vector i is protected by a timer. When the first message of the unveil phase is received, the commitment receiver can check if the timer has expired. If this is the case, it can abort. In TLUC, we require environment and adversary to adhere to timers. This means that the environment must answer correctly if a timer has expired. Intuitively, this models the ability of protocol parties to use timed primitives with appropriate parameters that are secure in real life. For example, one can set up a timed commitment that is estimated to take a week to break, but expect another protocol message depending on the security of this timed commitment within 10 seconds. At the same time, the simulator does not need to adhere to timers. This allows it to break timed primitives like timed commitments. Indeed, this temporary advantage is sufficient for simulation in the plain model. More formally, instead of considering all environments and adversaries like in UC security, we only consider a meaningful subset, namely those environments and adversaries that handle timers and expiry checks correctly. For a legal adversary, we require that it immediately forwards a timer message received from a party to the environment. Conversely, it also immediately forwards notify messages from the environment to the intended party. For a legal environment, we require that it honestly answers queries whether a timer has expired. The environment needs to do this relative to all steps performed in the presumed real execution of the protocol Pi and adversary A.
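In pseudocode terms, the timer interface behaves roughly as follows. This is my own toy model: in TLUC the step accounting is done by the environment for the presumed real execution, not by a local object, and the message names mirror the timer/notify messages described above.

```python
class Timers:
    """Toy model of TLUC-style timers: timeouts are measured in abstract
    computation steps, not wall-clock time."""

    def __init__(self):
        self.steps = 0
        self.expiry = {}  # timer ID -> step count at which it expires

    def charge(self, n: int):
        """Account n computation steps of the presumed real execution
        (including the steps of honest parties)."""
        self.steps += n

    def set_timer(self, tid: str, timeout: int):
        """The 'timer' message: register a timer with the given timeout."""
        self.expiry[tid] = self.steps + timeout

    def notify(self, tid: str) -> bool:
        """The 'notify' message: the reply bit says whether tid expired."""
        return self.steps >= self.expiry[tid]

T = Timers()
T.set_timer("tc-hiding", timeout=100)  # protects the timed commitment to i
T.charge(50)
print(T.notify("tc-hiding"))   # False: still hiding, accept the message
T.charge(60)
print(T.notify("tc-hiding"))   # True: security expired, the receiver aborts
```

A legal environment must answer these notify queries honestly; the simulator, by contrast, may ignore the timers entirely, which is exactly its temporary advantage.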
It is important that the environment also counts the steps of honest parties, as they might perform computations that help breaking timed assumptions. Actually, the rules are more complicated in order to allow the proofs of the properties of the framework to go through. If you are interested in the details, please read the full version. Of course, we cannot expect adversaries in real life to behave honestly or to adhere to timers. However, this is not necessary as long as real protocol parties are able to establish an upper bound on the adversary's capabilities. As many hardness problems underlying timed commitments are assumed to resist parallelization, we believe that this is a realistic assumption. For example, an honest party could set up a timed commitment that is estimated to take a week to break, but expect another protocol message depending on the security of this timed commitment within 10 seconds. Before we state the definition of TLUC protocol emulation, let's have a look again at the definition of UC emulation. Informally, we say that a protocol Pi UC-emulates a protocol Phi if for all adversaries A there exists a simulator S such that for all environments Z, the output of Z in the real execution with Pi and A is indistinguishable from its output in the ideal execution with Phi and S. TLUC emulation is defined in total analogy to UC emulation. The only differences are the following. First, instead of quantifying over all polynomial-time adversaries and environments, we quantify only over legal ones that handle timers correctly. Second, we explicitly parameterize the environment with the protocol and adversary relative to which it counts the computation steps that lead to timer expiry. In the ideal protocol, the simulator may have to send timed commitments like in the real protocol and rely on their timed hiding property. It is thus important that the environment is not able to break the timed hiding property prematurely.
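In the same notation as for UC emulation, the TLUC condition reads roughly as follows (my paraphrase of the two differences just listed; the paper's formal definition involves additional bookkeeping):

```latex
\pi \text{ TLUC-emulates } \phi \iff
\forall\, \text{legal } \mathcal{A}\ \exists\, \mathrm{PPT}\ \mathcal{S}\ \forall\, \mathcal{Z} \text{ legal w.r.t. } (\pi,\mathcal{A}):\;
\mathrm{EXEC}_{\pi,\mathcal{A},\mathcal{Z}} \approx \mathrm{EXEC}_{\phi,\mathcal{S},\mathcal{Z}}
```

The simulator S is an ordinary PPT machine here; its only edge over A and Z is that it need not adhere to the timers.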
Another consequence of counting steps like in the real execution is that ideal functionalities that rely on timed assumptions are not meaningful in our notion. However, such functionalities are also not meaningful in UC security. In contrast to previous notions, this limitation does not extend to ideal functionalities relying on polynomial-time hardness assumptions such as signature or encryption schemes. If we omit the non-uniform input of the environment, this holds even if these assumptions are only uniform. As we have restricted the class of allowed environments and adversaries, we cannot hope for all properties of UC security to carry over to our new notion. However, due to the fact that TLUC simulators run in polynomial time, we are able to achieve the important property of environmental friendliness. In addition, we can show a number of other properties, like friendliness to previously started protocols due to the uniform simulation. Moreover, TLUC security is compatible with UC security in the sense that UC-secure protocols are also TLUC-secure. Unfortunately, TLUC security is not closed under general composition. Similar to SPS security, we can only show a single-instance composition theorem, which allows the replacement of one subroutine. In practice, this means that we manually have to show the concurrent self-composability of TLUC protocols, as we did with our composable commitment scheme. Combining the above properties, we can show the following modular composition property for UC protocols: If a protocol Pi TLUC-emulates a protocol Phi, and a protocol Rho in the Phi-hybrid model UC-emulates a protocol Sigma, then Rho with the subroutine call to Phi replaced by a subroutine call to Pi also TLUC-emulates Sigma. This leads to the following general strategy. We can take a UC-secure protocol in the F-hybrid model, where F is some ideal functionality that cannot be realized in the plain model.
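The single-instance modular composition property can be summarized schematically as follows. The notation is informal (Rho^Phi denotes Rho calling Phi as a subroutine); see the full version for the precise statement.

```latex
% Informal schematic of the single-instance modular composition property.
% \rho^{\phi} denotes \rho calling \phi as a subroutine; \rho^{\pi} is the
% same protocol with that one subroutine call replaced by \pi.
\[
  \pi \ \text{TLUC-emulates}\ \phi
  \quad\text{and}\quad
  \rho^{\phi}\ \text{UC-emulates}\ \sigma
  \;\Longrightarrow\;
  \rho^{\pi}\ \text{TLUC-emulates}\ \sigma
\]
```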
We then TLUC-realize F, which can be done in the plain model. As a last step, we plug our realization of F into the UC protocol. It is then guaranteed that the composed protocol retains its security. In contrast to previous notions, this is possible for all UC protocols. For example, we can take the most efficient UC protocol for some task and realize its setup in an input-independent pre-processing phase with a TLUC protocol in the plain model. While the definition of TLUC security is very similar to the definition of UC security, the above properties are non-trivial to prove. A common pattern in the proofs of UC properties is to create an environment Z' that internally emulates an environment Z as well as other Turing machines. Z' acts merely as a wrapper that only forwards messages between the emulated entities and outputs whatever the internally emulated environment outputs. Of course, executing Z' takes more steps than executing the individual Turing machines separately. This is due to the emulation overhead. In TLUC, this leads to a problem: we want Z' to handle timers like Z does and still be considered a legal environment. Due to the emulation overhead, this is not possible, because Z' would have to signal the expiry of timers earlier than the emulated Z, which is unavoidable when Z is emulated. To account for this discrepancy, the notion of legal environments has to be carefully adjusted. Let me summarize our contribution. With TLUC, we propose a variant of UC security that is suitable for timed primitives such as timed commitments. In contrast to previous notions for composable MPC in the plain model, it features full environmental friendliness, even to previously started protocols. By plugging our composable commitment scheme into an appropriate UC-secure general MPC protocol in the FCOM-hybrid model, we are the first to achieve composable MPC in the plain model with the following properties.
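The step-counting discrepancy caused by the emulation overhead can be illustrated with a minimal numeric sketch. The constant overhead factor and all names are hypothetical; the actual framework handles this asymptotically, not with a fixed constant.

```python
# Sketch of the emulation-overhead problem: a wrapper environment Z' that
# internally emulates Z (and other machines) pays extra steps per emulated
# step, so its own step count crosses a timer budget earlier than Z's would.

OVERHEAD = 3  # hypothetical constant per-step cost of the emulation


def steps_of_Z(real_steps):
    # Z counts exactly the steps of the presumed real execution.
    return real_steps


def steps_of_Z_prime(real_steps):
    # A naive Z' additionally pays for emulating Z and the other machines.
    return OVERHEAD * real_steps


budget = 100  # timer budget in steps
# For the same point of the execution, Z considers the timer unexpired,
# while a naive Z' would already have to signal expiry:
print(steps_of_Z(40) >= budget)        # prints False
print(steps_of_Z_prime(40) >= budget)  # prints True
```

This mismatch is why the definition of legal environments must be adjusted so that wrapper environments like Z' remain legal despite their overhead.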
The resulting MPC protocol has full environmental friendliness, is constant-round, makes black-box use of its building blocks, and requires only standard and standard timed assumptions. This concludes my talk, and I would like to thank you for your attention.