So, now we have the next talk by Daniel Rausch, in collaboration with Ralf Küsters and Céline Chevalier, on embedding the UC model in the IITM model.

Thank you very much for the introduction. As probably all of you know, universal composability is a widely used concept for defining and analyzing protocol security. It not only offers very strong security guarantees that hold true in an arbitrary polynomial-time context, but also offers composability via so-called composition theorems, which enable a modular security analysis and also allow for reusing security results later on in a different context. Just to make sure everyone here is on the same page, let me briefly recap how UC security is defined. You start by modeling the protocol that you want to analyze, the so-called real protocol, say some key exchange, and then also define an ideal protocol or ideal functionality F that specifies the task at hand in a perfectly secure way. So F is secure by definition. The real protocol P is then said to realize the ideal protocol F, or to be as secure as the ideal protocol, if for all possible attackers attacking the network of P we can construct an ideal attacker, or simulator, attacking the network of F such that no environment can distinguish both cases, where the environment essentially subsumes arbitrary concurrent protocols.

Over the past two decades, since its initial inception, a large number of different models for universal composability have been proposed. All of them use the same basic idea, also in terms of the security definition, but they differ, sometimes very drastically, in how they implement this idea on a technical level, and also in terms of the features that they offer, say, supported protocols and composition types. The relationship between these models is so far mostly unexplored. So how do they relate in terms of expressiveness? If we can analyze a protocol in one model,
can we also analyze that protocol in every other universal composability model? If one model supports a specific feature, can we implement the same feature also in a different model? What about the strength of security results? Do we perhaps accidentally miss some attacks because, due to a lack of better knowledge, we chose some model that only provides strictly weaker security guarantees than a different one? And given the fractured state of the literature, what about the original goal of reusability? If we have several protocols, each of them having been analyzed in a different model, can we still compose those results, say by first mapping a protocol, including all security results, to a different model?

So our goal is to initiate a line of research that fills this gap in the literature by formally relating models for universal composability. This not only allows for an educated choice of a model, say in terms of the strength of the security results that one can obtain, but should also allow us, as far as possible, to map and then reuse protocols, security results, and also features from one model to other ones. In our work, we start this line of research by relating the UC and IITM models. The UC model we chose because it is the predominant model in the literature: there is an extensive array of existing protocols and security results that have been analyzed in a wide variety of settings. The IITM model, on the other hand, offers many interesting features, such as seamless support for protocols and composition with joint, global, and arbitrarily shared state. It can express the globally shared session IDs that are commonly used in the UC model, but can also express locally managed session IDs, and it can combine all of the above.
Furthermore, there are protocols that have already been analyzed in the IITM model but have not yet been analyzed or captured in the UC model, thereby making a comparison, and ideally also a mapping from one model to the other, particularly interesting. So as the title of our paper already reveals, our main result is that the UC model can be fully embedded into the IITM model. A bit more specifically, we first relate both models in terms of the concepts that they use, which often try to achieve a similar overarching goal but drastically differ in their technical details. We then propose a mapping taking arbitrary UC protocols to corresponding IITM ones and show that our mapping preserves security results and composability results. As an immediate practical benefit, this means that by using this mapping we can combine existing UC results with the aforementioned IITM features. We also took a look at the other direction and identified that a full embedding of the entire IITM model into the UC model is impossible in general. Along the way, we found and fixed several issues in the UC model that formally invalidate the UC composition theorem, and we also present a modeling technique that enables a new type of composition based on existing composition theorems.

In the rest of my talk, I will give you an overview of our main results, and I want to start by giving you some examples of the concepts used in the UC and the IITM model, to illustrate why a comparison of the two is non-trivial to begin with. Let me start by briefly recapping the computational frameworks. The UC model considers an environment and an adversary running with some protocol π. The protocol π consists of several instances, where only highest-level instances can actually interact with the environment, and each of these instances is identified uniquely via a global extended ID, which consists of the code of the instance, a party or process identifier, and a session identifier.
Instances can then send messages to each other by issuing an external-write command of the following form, which consists of a forced-write flag that distinguishes different types of external writes, a reveal-sender flag, a target receiver tape, which can be the input, output, or backdoor tape, the extended identities of both the sender and the receiver, including their codes, a number of import or runtime tokens that the sender forwards to the receiver, and of course the message body itself. Given such an external-write command, the UC model then interprets it and delivers the message, which might, for example, lead to the creation of a new instance with the intended receiver ID. There is one special case, though: when a highest-level instance provides some output, then this might get rewritten and redirected to the environment instead, allowing other, essentially higher-level, protocols to connect to this protocol.

Let's compare this to the IITM setting, which also considers an environment and an adversary running with a protocol. However, there we already have one difference: a protocol in the IITM setting consists of a statically fixed number of machines, each of them specifying some machine code that is used in the protocol. These machines can then be connected with each other, the environment, and the adversary using tapes, which allows for sending messages. One detail here is that the IITM model also allows for connecting subroutines to the environment, which is used, for example, to capture global state in the IITM model. During a run of the protocol, each of these machines can spawn an unbounded number of instances. So basically, machines are like classes and can be used to derive several objects, or instances, during a run.
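To make the UC side of this concrete, here is a small sketch of what an extended identity and an external-write command could look like as plain records. All field and type names (`ExtendedID`, `ExternalWrite`, `deliver`) are our own illustrative choices, not the UC model's formal notation, and the delivery logic is heavily simplified.

```python
from dataclasses import dataclass

# Illustrative sketch only; names and types are ours, not the UC model's.
@dataclass(frozen=True)
class ExtendedID:
    code: str  # program (code) of the instance
    pid: str   # party/process identifier
    sid: str   # session identifier

@dataclass(frozen=True)
class ExternalWrite:
    forced: bool          # forced-write flag: distinguishes write types
    reveal_sender: bool   # whether the sender identity is revealed
    target_tape: str      # "input", "output", or "backdoor"
    sender: ExtendedID
    receiver: ExtendedID
    import_tokens: int    # runtime tokens forwarded to the receiver
    body: bytes           # the message itself

def deliver(instances: dict, cmd: ExternalWrite) -> list:
    """Deliver an external write; spawn the receiver's inbox if absent,
    mirroring how a new instance is created for an unknown receiver ID."""
    inbox = instances.setdefault(cmd.receiver, [])
    inbox.append(cmd)
    return inbox
```

The point of the sketch is just the shape of the command: addressing is by globally unique extended identity, and the runtime tokens travel inside the message.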
A machine instance can then also send messages to other machine instances by writing them directly on a connected tape; the message then gets delivered to the receiving machine. Specifically, the existing instances of this machine run a user-specified CheckAddress algorithm that is part of the machine code to determine whether they are the intended receiver of this message. The first instance that accepts gets to process the message, and if none of them accepts, then a new instance is created that processes the message. This flexible addressing mechanism is used, for example, to model different types of joint and shared state.

So as we can see already, the computational models are very different. Let me give you another example, namely runtime notions. Both models want to achieve that the overall system is essentially simulatable by a polynomial-time machine, which is necessary for composition. In the UC model, this works by letting the environment start with a polynomial number of runtime tokens, which the environment is then allowed to distribute to the attacker and the protocol throughout the protocol run. So one possible distribution at some point in a run might look as follows. All instances, including the adversary and environment, are then required to run in polynomial time in the number of their currently held runtime tokens. This means that, since the environment can determine how many tokens the adversary and the protocol receive, the environment can also determine how much runtime the protocol and the simulator have, possibly forcing them to stop at some point, which then also has to be taken into account in a security proof. To somewhat alleviate this issue, the UC model also requires environments to be balanced: the adversary, and in particular the simulator, must receive at least as many runtime tokens as the protocol. The IITM model takes a more abstract approach that is based on results by Hofheinz et al.
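As a small illustration of the CheckAddress-based addressing described above, here is a toy sketch. `Machine`, `dispatch`, and the dictionary-based instances are our own simplifications, not the IITM model's formal machinery.

```python
# Toy sketch of IITM-style addressing (names are ours): each machine's
# code includes a CheckAddress predicate; the first existing instance
# that accepts an incoming message processes it, otherwise a fresh
# instance is spawned for it.
class Machine:
    def __init__(self, check_address):
        self.check_address = check_address  # part of the machine code
        self.instances = []                 # spawned during the run

    def dispatch(self, message):
        for inst in self.instances:
            if self.check_address(inst, message):
                inst["inbox"].append(message)
                return inst
        inst = {"inbox": [message]}  # no instance accepted: spawn one
        self.instances.append(inst)
        return inst

# Example predicate: one instance per session, where messages are
# (sid, payload) pairs and an instance accepts "its" sid.
per_session = Machine(lambda inst, msg: inst["inbox"][0][0] == msg[0])
```

Different predicates yield different instance structures, which is how the same mechanism can capture, for example, various forms of joint or shared state.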
Basically, it says environments are required to be PPT, and protocols must be such that, if they are combined with some arbitrary environment, the combined system runs in overall polynomial time. So this doesn't fix a specific mechanism, but this abstract notion is rather naturally met by protocols from the literature. For example, if you can show your protocol to run in polynomial time in the length of all of its inputs, then it meets this notion. This is then extended in essentially the same way to adversaries: if we add an adversary to the system, then this must still, for arbitrary environments, run in overall polynomial time.

Let me give one final example, namely composition. The UC theorem considers the setting that we have already shown: some protocol π realizes some protocol φ. The analysis is for a single protocol session and a restricted class of environments that adhere to some identity bound ξ. We also consider a protocol ρ that uses potentially several subroutine sessions of φ, and the theorem then implies that if we consider the composed protocol that uses π as a subroutine instead, then this composed protocol realizes the original protocol ρ. To express this composed protocol, the UC model introduces some shell code that internally redirects messages to a different subroutine. There are also a few conditions that need to be met: in particular, the subroutines have to be shown to be subroutine respecting and subroutine exposing, and the combined protocol ρ must be shown to be compliant.

In comparison, the IITM model considers two different types of composition. Firstly, the main theorem essentially states that if we have some protocol π realizing some protocol φ, and we build a higher-level protocol on top of φ that connects to some, but not necessarily all, of its external tapes, then the composed protocol, where we replace the subroutine, realizes the original protocol.
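Schematically, this kind of composition is subroutine replacement. The following toy sketch shows only the plumbing, not the security statement; all names (`make_rho`, `ideal_phi`, `real_pi`) are ours.

```python
# Toy sketch: a higher-level protocol rho parameterized by its
# subroutine. The composition theorems say, roughly: if pi realizes
# phi, then rho-with-pi realizes rho-with-phi. Only the act of
# "replacing the subroutine" is modeled here.
def make_rho(subroutine):
    def rho(request):
        # rho does some local processing, then calls its subroutine
        return ("rho", subroutine(request))
    return rho

def ideal_phi(request):   # stand-in for the ideal protocol phi
    return ("phi", request)

def real_pi(request):     # stand-in for the real protocol pi
    return ("pi", request)

rho_with_phi = make_rho(ideal_phi)  # the hybrid protocol using phi
rho_with_pi = make_rho(real_pi)     # the composed protocol using pi
```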
In the IITM setting, the composed protocol is expressed by reconnecting tapes, and this statement holds true as long as the higher-level machines connect only to external tapes of the subroutines. Observe that this directly considers a multi-session setting, and that subroutines can still share some tapes with the environment, which is why this theorem captures, as special cases, also protocols with joint state, shared state, and global state, and also supports a single-session security analysis. The IITM model provides a second composition theorem, which states that if a single session of π realizes a single session of φ, then an unbounded number of sessions of π realize an unbounded number of sessions of φ. This statement holds true as long as π and φ are so-called σ-session versions, so essentially protocols whose sessions are determined by a session identifier function σ, and it can then also be combined with the previous composition theorem to yield more complex compositional statements.

So let me briefly summarize. As we have seen, the UC and IITM models are quite different, not only in their computational frameworks but also in their compositional statements, and they even use different classes of environments, adversaries, simulators, and protocols. So it's not at all easy to see how they relate, and, given that they use different classes of environments and simulators, whether there's a general relationship in the first place. Our work answers these questions. More specifically, we first propose a generic mapping taking an arbitrary UC protocol and constructing a corresponding IITM protocol. I won't go into the details of this mapping here, but let me give you some key insights. First of all, all aspects of the UC protocol can be translated naturally into the IITM setting.
This even includes subroutines that might have dynamically generated machine code: in the IITM setting, we can capture this by including a specific universal Turing machine, where one instance of this universal Turing machine directly corresponds to a UC instance whose code was dynamically generated and behaves in exactly the same way. Furthermore, the IITM protocol has to reveal an upper bound on all runtime tokens received so far to the adversary. This is necessary because, in the UC setting, the same information is guaranteed to be provided by UC environments as side-channel information to the adversary and also the simulator, whereas IITM environments need not provide this information to the simulator. So we instead implement the same side channel on the level of the protocol. We finally propose a variation of this protocol which enforces the identity bound ξ on arbitrary environments on the protocol level, so we can then use the existing IITM theorems, which don't reason about restricted classes of environments.

Our main result then states that if we have some protocol π that realizes a protocol φ in the UC setting, then our mapped protocol π realizes the mapped protocol φ in the IITM setting. We show this result via several intermediate hybrid steps. Among others, we show that UC security implies single-session IITM security for the class of adversaries that adhere to the UC runtime notion, and then show, again via several steps, that this implies general IITM security, also for adversaries that might not meet the UC runtime notion.
We are further able to show that the intermediate step also implies UC security, thereby showing that our mapping is non-trivial: it doesn't just preserve security results but also distinguishing attacks. However, more generally, full IITM security in an arbitrary setting does not actually imply UC security. Namely, we are also able to show that if time-lock puzzles exist, then there are UC protocols such that no simulator exists, but for the mapped protocols we can construct an IITM simulator and prove security. That's not specific to our mapping, but rather applies to pretty much any mapping that preserves the behavior of the UC protocols. The underlying reason is, intuitively, that the runtime of the IITM simulator is allowed to depend on the runtime of the environment, so the simulation can accommodate particularly powerful environments that try to overwhelm the simulator. The same is not possible for the UC simulator, which must essentially work independently of how much runtime the environment uses.

Our theorem then also implies, as a direct corollary, that composition results carry over: if we start in the UC setting with UC protocols and apply the UC theorem to obtain composed UC protocols, then we can use our mapping to obtain composed IITM protocols. Of course, that doesn't tell us much about how the composition theorems of the UC and IITM models actually relate to each other, which is why we additionally show that the same result can also be obtained directly in the IITM model. Namely, we can first map the original UC protocols into the IITM model, and then the IITM theorem also implies security for the composed protocols. In other words, this also shows that the IITM theorem captures the UC theorem as a special case, by which I mean that it also applies if some of the input protocols aren't actually mapped from the UC model but are custom IITM ones.
So for example, instead of considering the mapped protocol ρ, we can build or use an existing IITM protocol Q that uses the mapped subroutine φ, and then the IITM theorem also implies security for this case. As for the other direction, let me briefly give a summary. As I mentioned, a full embedding of the IITM model into the UC model is impossible in general, since there are several gaps. One of them has already been shown in prior work: namely, it has been shown that there are natural protocols that meet the IITM runtime notion but not the UC runtime notion. Hence, they cannot be expressed in the UC model. To this we add the aforementioned impossibility result due to different simulator classes, so security might not necessarily carry over. And we also identify another gap: unlike IITM protocols, a UC protocol is required to provide an additional oracle to the adversary that reveals whether certain instances already exist. This is not just a cosmetic or technical difference, but actually changes security properties whenever the existence of an instance depends on some secret information. So there exist protocols that can be shown secure, but once we add such an oracle, there is no simulator anymore.

So let me summarize and conclude my talk. Our work is the first that clarifies the relationship of the UC and the IITM models. On the one hand, we show that all UC protocols and security results carry over to the IITM model, meaning, as an immediate benefit, that existing UC results can now also be combined with all of the aforementioned IITM features, such as joint, global, and arbitrarily shared state, locally managed session IDs, larger classes of protocols and simulators, and combinations of the above.
On the other hand, we also established that there are several gaps that make a full embedding of the IITM model into the UC model impossible in general, and we leave it as interesting future work to identify, and then also map, a subset of the IITM model and a subset of security results that still carry over to the UC setting. With that, I conclude my talk and want to thank you very much for your attention.

Do we have questions from the audience? Yes, please come to the microphone so the people online can hear you.

Hi, thanks for the talk. How easy do you think it is to continue this line of work and maybe use some of the same techniques to continue to map out this universe you had in the beginning?

So I think there are a few models that intuitively already share a few similarities with the IITM model. For example, the static structure of protocols that is used in the IITM model doesn't seem so far away from the constructive cryptography model, which, on the other hand, for example, doesn't fix a specific runtime notion but considers an arbitrary class of environments that the protocol designer can then customize. So I wouldn't be surprised if there were some interesting and not-too-hard-to-obtain results there, in particular also in terms of identifying how exactly these classes of protocols and environments then fit into each other.
Another interesting candidate might be the GNUC model, which structurally, on the one hand, tries to keep the spirit of the UC model, but then also has a few components, such as a code library, if I remember correctly, which fixes the codes that the protocol uses, and which again seems more similar to what the IITM model starts with. So I can also imagine that there are some interesting results that one can obtain over there. But of course, the literature is very wide and varied; there are probably also a few models that are, at least at first glance, so far from the models I've personally worked with that it might be quite difficult to relate them, or maybe they are even incomparable. As I mentioned for the UC and the IITM case, since we had different classes of environments and simulators, it wasn't very clear whether there was a general relationship in the first place. Maybe one can do something for subsets in such a case, and at least identify some sufficient conditions for a mapping.

Thank you very much. So, do we have another question? Yeah, one quick one maybe.

Thank you very much for your nice presentation. By now we have usually seen that, for example, more efficient protocols, such as SNARKs, are not UC-secure, because due to succinctness you don't have black-box extraction, so once we move them to the UC setting we usually lose efficiency. So my question is about this mapping from UC to the IITM setting: do you again have overhead for the protocol, so that we again lose efficiency to achieve these properties?

Excuse me, what exactly was the final question?

Yeah, I mean, about the mapping: you said, for example, that UC protocols can achieve the same security, or that we have a map to do this. So my question is about this map: how efficient is this mapping actually? For example, is there any overhead, and how much is it?
Well, basically, what it boils down to is that we implement on the IITM level those features and operations that are essentially provided directly by the computational model in the UC world. For example, we add some code that simulates the external-write command, since this overhead in the UC case isn't directly counted as runtime of the protocol but rather essentially happens externally. So there is a bit of technical overhead, but as we also show in our work, the overhead that is added still remains polynomial in what the original UC protocol does otherwise. In particular, the mapping wouldn't work if we had some exponential blowup somewhere, since then it wouldn't meet the IITM notion anyway.

Okay, so I'm going to use this one. Okay, thank you. Do we have more questions from the audience? Yes, please.

Just a general question: if it were the case that you were trying to connect some of the other frameworks that you described at the beginning together, and you found that two of them are not compatible, would that imply that there's something wrong in one of these frameworks, hypothetically?
I think that strongly depends on the gap that you identify. In any case, you would have to look very closely at what goes wrong. For example, if the gap is due to different simulator classes, we would have to identify whether the simulator class of one of these models might be unreasonably large, so permits simulation in cases where no simulation should be possible, and maybe we miss attacks because of this. Alternatively, it might turn out that in one case the simulator class is just needlessly small. That's actually something we also did as part of our work, since we had such a gap with the simulator classes, and in our case we argue that the UC class is just more restricted than it needs to be, while the IITM class provides reasonable security guarantees. Of course, the gap can also exist for entirely different reasons. For example, you also have models that specifically include quantum operations in the Turing machines, or don't consider Turing machines in the first place but rather some abstract systems; then you already consider an entirely different setting, so the incompatibility might simply be because you consider more powerful adversaries and protocols, with operations that cannot be expressed in the original setting anymore. In that case, the statement would just be: well, okay, they model different purposes and different contexts, but both of them might still be reasonable.

Okay, thank you. And we have the last question, after which we're going to have to take the rest of the questions offline due to the approaching IACR membership meeting.

Yeah, thanks for your presentation. I was just wondering: how come you don't consider GUC and JUC to be part of UC?

As for GUC, there was a recent paper that, on the one hand, argued that the GUC proof itself isn't really complete, but showed that there are modeling techniques that one can use to express global state in the basic UC model. These would of course then also be captured by our mapping, since it works for arbitrary UC protocols,
including those that use this modeling technique to capture global state. But in this case, I guess it's probably easier, if one wants to model global state, to express this directly in the IITM model, since our mapping then introduces some UC structures of this global-state modeling that aren't necessarily needed if one works directly in the IITM model. As for JUC, well, this extension was for the original UC model, which has already been overhauled; depending on how you count, there are at least three major updates that change a lot of things on a technical level. So it's not clear whether the original JUC result even applies to UC anymore, and there has also been work arguing that the JUC model itself is flawed in terms of the proof of its composition theorem. So, at least as far as I'm aware, there currently is no work that supports joint state, either in the UC model directly or by extending the current version of the UC model in some way.

Thank you. Let's thank the speaker again, and that's the end of our session. See you at the end of the session.