Hi, I am Guilherme, and welcome to my presentation on giving an adversary guarantees, or: how to model designated verifier signatures in a composable framework. This is joint work with Ueli Maurer and Christopher Portmann. Security notions normally capture what dishonest parties cannot achieve. For example, let's consider an ideal resource capturing authenticity. We consider a setting with five parties: Alice, Bob1, Bob2, Bob3, and Eve. Authenticity is captured by an authenticated channel to which Alice can write and from which each of the Bobs can read. Eve, who is a dishonest party, interacts with a simulator. Since we want to capture authenticity, this simulator can only read from the authenticated channel. The authenticated channel essentially captures the guarantee that if one of the Bobs reads a message, then Alice wrote it.

However, some security notions capture the exact opposite, namely that dishonest parties must have some capability. For example, let's consider multi-designated verifier signatures. With these signatures, Alice can designate the receivers of her messages; let's say it's the three Bobs from before. Authenticity is then given exclusively to these Bobs, so only the Bobs can learn whether Alice is the one sending a message. In particular, Eve cannot tell whether Alice sent a message, even if any subset of the Bobs is dishonest. This property is called off-the-record. Let's see how this is possible. If all Bobs are honest, then any dishonest party can pretend to be Alice sending a message, and Eve simply cannot tell whether Alice sent a message or some dishonest party is pretending that Alice is sending one. If some or even all of the Bobs are dishonest, then the dishonest Bobs can cooperatively simulate Alice sending a message, and again Eve cannot tell whether Alice is sending a message or the dishonest Bobs are simulating it. This must hold even if Eve knows all of the dishonest Bobs' secrets.
So in any case, Eve cannot tell whether Alice is sending a message or someone is simulating Alice sending a message. I want to note here that off-the-record does not violate authenticity: off-the-record guarantees that Eve, who is not a designated receiver, cannot tell whether the sender sent a message, while authenticity guarantees that the honest Bobs can.

Let's see how one can model authenticity and the off-the-record guarantee together. Consider again the same setting as before, and let's have a look at the real-world system. In the real world there are two assumed resources: a key generation authority, KGA, which is responsible for generating the key pairs of each party, and an insecure channel to which everyone can write and from which everyone can read. Alice, being honest, runs a converter, send. This converter gets the public keys of all parties, so of Alice, Bob1, Bob2, and Bob3, and also gets Alice's secret key for sending messages. Whenever Alice inputs some message m into the send converter, the converter uses the public keys and Alice's secret key to generate a signature on the message, which it then writes together with the message into the insecure channel. Each of the Bobs runs a receive converter. These receive converters all get the public keys of every party; in addition, each receive converter gets as input the secret key of the corresponding Bob. Finally, Eve, who is a dishonest party, can interact with the insecure channel and only has access to the public keys of all parties.

What if Bob3 is dishonest? In this case, Bob3 no longer runs a converter. Furthermore, Bob3's secret key now also leaks to Eve. Now, let's have a look at the ideal world.
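Before turning to the ideal world, the send and receive converters just described can be sketched in code. The following is my own minimal illustration in Python, using a toy Lamport one-time signature as a stand-in for the actual scheme; it ignores the designated-verifier aspect (verification here uses only the signer's public key) and only shows the converter-around-a-channel structure. All names are illustrative.

```python
import hashlib, os

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# Toy Lamport one-time signature (stand-in for the real scheme).
def keygen():
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(msg: bytes):
    d = H(msg)  # sign the 256-bit hash of the message
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one preimage per message-hash bit.
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(H(s) == pk[i][b]
               for (i, b), s in zip(enumerate(bits(msg)), sig))

# Converters around the insecure channel (modelled as a shared list).
channel = []

def send(sk, msg):                # Alice's send converter
    channel.append((msg, sign(sk, msg)))

def receive(pk):                  # a Bob's receive converter
    msg, sig = channel[-1]
    return msg if verify(pk, msg, sig) else None

sk_a, pk_a = keygen()
send(sk_a, b"hello")
assert receive(pk_a) == b"hello"  # an honest Bob accepts Alice's message
```

A dishonest Eve can of course also append to `channel`, but anything she writes without `sk_a` will fail verification, which is the authenticity guarantee in miniature.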
As before, we have an authenticated channel to which Alice can write and from which the honest Bobs can read. Eve and Bob3, who are dishonest, interact with a simulator; the simulator can read from the authenticated channel at Eve's interface and can both read and write at Bob3's interface.

However, notice that this ideal world does not actually capture the off-the-record guarantee. Why is that? Well, first notice that actually any unforgeable signature scheme satisfies this composable notion. Why? Because a simulator could simply never write into the authenticated channel, and in that case Eve would of course know that if something shows up in the channel, then it must have been Alice writing it. So how can we guarantee that the dishonest Bob3 could have written? Or, more generally, how can we model that dishonest parties have some capability?

We model this using the notion of specifications introduced by Maurer and Renner. Recall the authenticated channel from before: this resource consists of an authenticated channel and a fixed simulator attached to Bob3's and Eve's interfaces of the channel. Usually, one is actually not interested in a single system, but in any resource that gives some guarantee. So we can instead consider the set of systems that give the guarantee that if Bob1 or Bob2, so if any honest Bob, reads a message, then Alice wrote it. A specification then allows us to abstract away from what Eve and Bob3 might be doing at their interfaces, and simply captures any resource giving this guarantee.

Using this notion of specifications, we can then capture the ideal world we wanted before. Essentially, we define one specification capturing authenticity, which as usual restricts the capabilities of dishonest parties, and another specification that captures the guarantees given to dishonest parties.
In this case, the other specification will capture the off-the-record guarantee, namely that dishonest parties can write too. The ideal specification then corresponds to an intersection of specifications: it is a specification giving both guarantees, authenticity and off-the-record.

So, in this paper we show how to model guarantees given to dishonest parties in constructive cryptography, and in particular we give the first composable notions capturing the security of multi-designated verifier signature schemes, or MDVS for short. Finally, we compare our composable notions against existing game-based security notions for MDVS, and we find that only the notions recently introduced by Damgård et al. actually capture the security of MDVS. Furthermore, those notions are still strictly stronger than our composable notions.

Constructive cryptography is a resource theory: from assumed resources, you construct new ones. Recall the notion of a specification, which is just a set of resources satisfying some guarantee. The notion of construction then says that from an assumed specification R, using a protocol pi, you can construct a new specification S, which gives you some desirable guarantee. Essentially, if you take any resource in the specification R and attach the protocol pi to it, you end up with a resource which is in the new specification S, giving the ideal guarantee. So a construction statement holds if this subset relation, pi R being a subset of S, holds.

Let's now look at an example of a construction statement. Consider two assumed specifications, a key generation authority and an insecure channel. The key generation authority generates Alice's public/secret key pair and sends the key pair to Alice. It also sends Alice's public key to Bob's verification converter. Finally, it leaks the public key to Eve.
Whenever Alice writes some message into the sign converter, the converter signs the message using Alice's secret key and sends the signature and the message to the insecure channel. When Bob's converter reads from the insecure channel, it verifies whether the message and the signature are valid, that is, whether the signature is valid for the corresponding message with respect to Alice's public key. If it is, it outputs the message to Bob. Regarding the ideal specification, we have an authenticated channel from Alice to Bob: Alice can write into this authenticated channel and Bob can read from it. Eve, being dishonest, interacts with the simulator and can only read from the channel. The construction statement holds if the real-world specification is a subset of this ideal-world specification.

Now, let's see what composition means. In particular, assume we have two construction statements. The first states that if you apply a protocol pi to a specification R, you construct a specification S; essentially, pi applied to R is a subset of S. Furthermore, assume that for some protocol pi prime, if you attach pi prime to the specification S, you construct a specification T. What composition says is that if you apply pi prime and pi to the specification R, then you end up constructing the specification T.

So, essentially, the authenticated channel specification we constructed before (the simulator is not shown here, for simplicity) can now be used as an assumed specification for a further construction, namely the construction of a confidential channel. In this construction, Bob runs an encryption converter and Alice runs a decryption converter. Alice's decryption converter first generates a key pair and sends the public key through the authenticated channel to Bob.
Bob's encryption converter then stores this public key, and whenever Bob wants to send a message, his converter uses the public key to encrypt the message and writes the ciphertext into the insecure channel. Alice's decryption converter then reads from the insecure channel, decrypts the message, and, if it is valid, simply outputs the message to Alice. Of course, Eve still gets access to Alice's public key and can interact with the insecure channel. Regarding the ideal specification, Bob can write into the confidential channel and Alice can read from it. Note that the only thing leaked to the simulator is the length of the messages Bob inputs. The simulator can, for instance, inject new messages into the confidential channel; however, it cannot tamper with the ones already written there. The construction statement holds if, again, the real-world specification is proven to be a subset of this ideal-world specification capturing confidentiality.

Finally, we introduce specification relaxations. Often, constructions do not hold unconditionally. When this happens, one usually resorts to making assumptions. For example, one can make the discrete logarithm assumption, namely that computing discrete logarithms in certain groups is hard. Construction statements then become bound to assumptions. For example, one would give a reduction C_DL and show that for any distinguisher D and for any real-world resource R in the real-world specification, there is some ideal resource S in the ideal specification such that a distinguisher that could distinguish the protocol pi attached to R from the ideal resource S would also be able to break the discrete logarithm assumption.
More formally, the advantage a distinguisher has in distinguishing pi attached to R from the system S is bounded by the advantage this distinguisher, together with the reduction C_DL, has in breaking the discrete logarithm assumption. The relaxation according to this function is then defined as follows: it is the union of systems R such that there is some ideal system S from which no distinguisher can distinguish R unless it is able to break the believed-to-be-hard assumption. The type of statement we then make is that pi attached to the real-world specification R is a subset of the relaxation of the ideal-world specification S. More generally, one can define an epsilon-relaxation as the union, over each system S in the ideal specification, of the systems that are epsilon-close to S. Essentially, no distinguisher can distinguish such a resource R from the ideal resource S with advantage better than some function epsilon of the distinguisher. This function could, for example, be the advantage a distinguisher together with a reduction has in efficiently solving some believed-to-be-hard problem.

Now, we introduce repositories. Repositories are an abstract model for communication that we use in our paper. Essentially, a repository has a set of writers, a set of readers, and a set of copiers. Each writer can write into the repository, each reader can read from it, and each copier can issue copy operations. If we want to make these sets explicit in the notation for repositories, we simply add superscripts and subscripts.

Finally, we start going into our composable notions. First, let's introduce some notation. We denote the set of parties by P. We partition these parties into the honest parties and the dishonest parties. We also consider two subsets of the parties P: the set of senders, S, and the set of receivers, R.
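The repository model just described can be made concrete with a small sketch. The following Python class is my own illustrative rendering of a repository with explicit writer, reader, and copier sets; the paper's actual formalization has more structure, and details such as how copied values are addressed are assumptions made here for illustration.

```python
class Repository:
    """Toy repository: parties interact only via write/read/copy,
    and each operation is gated by the corresponding party set."""

    def __init__(self, writers, readers, copiers):
        self.writers = set(writers)
        self.readers = set(readers)
        self.copiers = set(copiers)
        self.cells = []               # written values, in order

    def write(self, party, value):
        assert party in self.writers, f"{party} may not write"
        self.cells.append(value)
        return len(self.cells) - 1    # handle to the new cell

    def read(self, party, handle):
        assert party in self.readers, f"{party} may not read"
        return self.cells[handle]

    def copy(self, party, handle):
        # A copier duplicates an existing value into a fresh cell
        # (without needing write rights for arbitrary values).
        assert party in self.copiers, f"{party} may not copy"
        self.cells.append(self.cells[handle])
        return len(self.cells) - 1

# Example instantiation: one writer, several readers, one copier.
repo = Repository(writers={"alice"},
                  readers={"bob1", "bob2", "eve"},
                  copiers={"eve"})
h = repo.write("alice", "m")
assert repo.read("bob1", h) == "m"
```

Making the sets explicit like this mirrors the superscript/subscript notation mentioned above: the same class models an authenticated repository, an off-the-record repository, and so on, just by varying the three sets.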
As for the set of parties, we also consider the honest and dishonest partitions of the set of senders and of the set of receivers. For the following notions, we consider a setting with a single, fixed sender, Alice. Furthermore, there is a fixed set of designated receivers, R, and there is also a non-designated receiver and dishonest party, Eve. The set of parties then corresponds to Alice, Eve, and the fixed set of designated receivers; Alice always sends messages to the same set R. Finally, we consider static corruptions, so the set of honest parties is fixed from the beginning. The statements we make are of the following form: for each set of honest parties P_H, we define a real-world specification and an ideal-world specification for P_H, and we then state that the real-world specification for P_H is a subset of the ideal-world specification for the same P_H.

Now let's have a look at the real-world specification for multi-designated verifier signature schemes. We assume again that we have three receivers, Bob1, Bob2, and Bob3, all of them honest. There are two assumed specifications, namely a key generation authority and an insecure channel. Alice, who is honest, gets her secret key from the key generation authority, and whenever she inputs a message into her send converter, the converter uses Alice's secret key together with the public keys of each party, so the public keys of Alice, Bob1, Bob2, and Bob3 (not shown in this picture, for simplicity), to generate a signature, which it then writes to the insecure channel along with the message. As I said, each of the Bobs is honest, so each of them runs their own converter.
The converter of each Bob, in addition to the public keys of every party, also gets the secret key of the corresponding Bob; for example, Bob1's converter gets Bob1's secret key. Whenever one of the Bobs reads from the insecure channel, his receive converter checks, using his own secret key, whether the signature that came together with the message written into the insecure channel is valid. If it is, the message is simply output to that Bob. Eve, being a dishonest party, only has access to the public keys; furthermore, she can also interact with the insecure channel. For simplicity, we will drop the braces from the notation, and from now on we will simply write specifications without them.

From now on we also consider Bob3 to be a dishonest party. This means in particular that Bob3 no longer runs his receive converter; furthermore, it also means that Eve now has access to Bob3's secret key, which is leaked.

Now let's have a look at the specification capturing correctness and authenticity. The specification consists of a repository with a simulator. Alice can write into this repository; Bob1 and Bob2, the honest receivers, can read; and Eve and Bob3, who are dishonest parties, interact with the simulator. Of course, the only writer of this repository is Alice, the set of readers is everyone but Alice, and the set of parties who can copy values in this repository corresponds to the set of dishonest parties, so the dishonest receivers and Eve.

Now, let's see how to capture the off-the-record guarantee. What we want is an ideal repository from which Eve can read (Eve is dishonest, so she interacts with her simulator) and to which both Alice and Bob3 can write. Eve's simulator does not learn who the writer is, so this actually captures the off-the-record guarantee, because Bob3 can very well simulate, or write, just like Alice would.
Here, we are not making a statement about Bob1 and Bob2, so these parties simply do not do anything. Of course, the set of writers to this repository is Alice and Bob3, and the set of readers is only Eve.

Now let's consider the real world capturing off-the-record. There is an issue here: Bob3 is dishonest, so no converter is attached to him. But in this case we can still make a statement about what Bob3 can do. For instance, Bob3 can run a protocol pi that simulates Alice writing. Let's see how this looks. Essentially, we again have the assumed specifications, the KGA and the insecure channel, and Alice, being honest, runs her send converter. Bob3 can now run this protocol pi, which he can use to write into the insecure channel. Eve, being dishonest, can only interact with the insecure channel and also gets Bob3's secret key; note that Bob3 is dishonest, so his secret key can actually leak to Eve. Finally, and again, we do not make a statement about the honest receivers, Bob1 and Bob2, so we simply attach a converter bottom to these parties.

The specification capturing that the dishonest receivers can write is this X hat, as we saw before. What we want now is to consider only real-world systems that give the guarantee that the dishonest receivers can write. This is defined by a specification X pi, defined as the set of systems R such that, if you attach the protocol pi at the interfaces of the dishonest receivers, cover up the interfaces of the honest receivers (we are not making a statement about them), and attach the resulting protocol to the system R, you end up with a system in the specification X hat above, where, by definition, Eve does not learn who among Alice and the dishonest receivers the sender is. So, we have defined the specification A capturing authenticity and correctness.
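To build intuition for the protocol pi by which a dishonest Bob3 can write just like Alice would, here is a classic toy illustration of my own, not an actual MDVS scheme: it assumes a symmetric key pre-shared between Alice and each designated Bob, and it omits consistency and the other MDVS guarantees. Alice's "signature" is one MAC per designated receiver; each Bob checks only his own component, and a coalition of dishonest Bobs can simulate by computing their own components and filling the rest with random bytes, which Eve cannot check either.

```python
import hmac, hashlib, os

def mac(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

# KGA stand-in: a key shared between Alice and each designated Bob.
keys = {bob: os.urandom(32) for bob in ("bob1", "bob2", "bob3")}

def alice_sign(msg: bytes):
    """Alice's send converter: one MAC per designated receiver."""
    return {bob: mac(k, msg) for bob, k in keys.items()}

def bob_verify(bob: str, msg: bytes, sig) -> bool:
    """Bob's receive converter: check only his own component."""
    return hmac.compare_digest(sig[bob], mac(keys[bob], msg))

def simulate(dishonest, msg: bytes):
    """Protocol pi: the dishonest Bobs fake Alice's signature.
    Components they cannot compute become random bytes, which Eve
    cannot check either, since she lacks the honest Bobs' keys."""
    return {bob: mac(keys[bob], msg) if bob in dishonest else os.urandom(32)
            for bob in keys}

sig = alice_sign(b"hi")
assert bob_verify("bob1", b"hi", sig)        # honest Bob accepts
fake = simulate({"bob3"}, b"hi")
assert bob_verify("bob3", b"hi", fake)       # valid on Bob3's component
assert not bob_verify("bob1", b"hi", fake)   # but not on Bob1's
```

From Eve's point of view, knowing only Bob3's key, `sig` and `fake` look identical on the only component she can check, which is exactly the off-the-record intuition from the talk.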
We have also defined the specification X capturing off-the-record. The specification capturing both authenticity and off-the-record is then the intersection of these two specifications. The construction statement we make is that the real-world specification is a subset of the intersection of the specification A and the specification X capturing off-the-record.

Finally, let's consider the case where Alice is dishonest. In the real world, if Alice is dishonest, she no longer runs a send converter. Furthermore, every dishonest party now also gets to learn Alice's secret key, sk_A. This means in particular that Bob3 now receives both his own secret key and Alice's secret key; Eve also receives her own secret key and Alice's secret key; and Alice, in addition to her own secret key, also receives Bob3's secret key.

Why do we want to capture consistency? Consider the following: if we did not guarantee consistency, then Alice could simply use a designated verifier signature to each of the receivers to which she wants to send a message. So having multi-designated verifier signatures only makes sense if the honest receivers are given the guarantee of consistency. This guarantee just says that if an honest receiver gets a message m, then all honest receivers also get m.

Let's see how this guarantee is captured. Again we have a repository, now with a simulator attached to Alice's, Eve's, and Bob3's interfaces. Bob1 and Bob2, the honest receivers, can read from the repository, while Alice, Eve, and Bob3 all interact with the simulator. Notice that the simulator can both read and write to the repository. So, the set of parties who can write into this repository is now all dishonest parties: Alice, Eve, and the set of dishonest receivers.
On the other hand, the set of parties who read from this repository is everyone, so it is the set of receivers, Alice, and Eve.

This concludes the capture of the basic guarantees one would expect from multi-designated verifier signatures. Other contributions we make in this paper include considering further settings that yield stronger composable notions than the ones just presented. For instance, we consider a setting with multiple senders and multiple receivers, where each sender can choose, each time, which subset of receivers it wants to send a message to. So, whenever a sender wants to send a message, it picks some subset of receivers and sends the message to that subset. Still, there is the non-designated, dishonest party Eve, and so the set of parties now consists of the set of senders, the set of receivers, and Eve. As before, we consider static corruptions, so the set of honest parties is fixed. For this setting, we give notions which are stronger than the ones just presented for a fixed sender and a fixed set of receivers, and we find that only the notions recently introduced by Damgård et al. actually capture the security of MDVS in this arbitrary-party setting. Still, our notions are strictly weaker than Damgård et al.'s.

This is all for this presentation. Thank you for listening.