So the first point is that, by definition, when we define security, we define it as security in the presence of an arbitrary environment: basically, any adversary that one can write in such a pi calculus. Because if there were another protocol that would induce an attack, the idea is that the adversary could simulate this protocol. And there is one important hypothesis when I write this kind of thing: P1 and P2, as written there, do not share any secret value. This is important, and it is generally due to the fact that secrets are modeled using a scope operator. To be more precise: if I have new s.P1, meaning that s is a secret used in P1 and under the given scope, and new s.P1 is secure, and new s.P2 is secure, then I would like to conclude that new s.(P1 | P2), where s is now shared between the two protocols P1 and P2, is secure. And this is not true in general. Because once I go under the scope operator, the other process is no longer really executed in parallel as a context; it is not a context anymore. So writing new s.(P1 | P2) is different from new s.P1 in parallel with new s.P2, where I am referring to two different values. This is an important difference. So there is really one hidden hypothesis in this compositionality result: that secrets are not shared. There is an easy solution, of course: one could say, when designing protocols, I make sure that two protocols never share secrets. On the other hand, this is not always realistic. One particular case where it is not realistic is password-based protocols, because passwords are entered by the user.
There is this comic strip which comes to the conclusion that over the last 20 years, we have successfully trained everyone to use passwords that are difficult for humans to remember and easy for computers to break. The important point is that our systems are indeed forcing us to use more and more complex passwords, with special symbols and minimum lengths, and what it comes down to is that users now reuse the same password for several applications. You generally have two or three passwords that you recycle, and you reuse them across different protocols. So the question we investigate here is: if I have two protocols using the same user password w, and each protocol is resistant against what are called guessing attacks on this password w — I will explain in a moment what this means — then, when I reuse the same password for both protocols, are they still secure? Are they still resistant against guessing attacks? I will show you that in general this is false. So what is an offline guessing, or dictionary, attack? Basically, you allow the attacker to interact with one or more sessions of the protocol; this is the first phase, where the attacker interacts. Then, in the second phase, the attacker uses all the messages he has recorded, performs an exhaustive search over the password, and checks whether he can find out what the password is. This is slightly different from what are called online guessing attacks, where an attacker would execute one instance of the protocol for each candidate password. We do not consider those here: online guessing attacks can be avoided by other means, for instance a timeout after each wrong guess, making the attack infeasible, or blocking the account after five wrong attempts to enter the password. OK, so how do we model the messages in these protocols?
The idea is that messages are basically abstract terms, as in first-order logic. You have a signature, which is basically a set of function symbols, and you equip these function symbols with an equational theory, which just gives you some equalities. For instance, here is an example of a signature. We have two symbols for symmetric encryption and symmetric decryption. You also have asymmetric encryption and asymmetric decryption for public-key cryptography, with pk giving you a public key, and you have pairing with projections on the first and second components. Then you model things like: if I encrypt a value x with a key y and decrypt again with the same y, I get x back. And if this encryption scheme is actually a sort of random permutation, you would also have that if you decrypt a value x with the key y and encrypt the result again, you get x back; these operations cancel out. You can model projection for pairs, and similarly decryption for public-key encryption, like this. And so you get an equality relation on terms. Now, what we will do is look at sequences of terms, which basically correspond to what an attacker can observe, the knowledge she gains while interacting with the process. We arrange the sequence of terms m1 to mn into what is called a frame. This is basically a substitution, together with a sequence of secret names n that are under a scope. We arrange the messages into a substitution so that the attacker can access them through the variables x1 to xn. Now, what we are interested in modeling is when two such sequences, two frames, are indistinguishable. This is modeled through what is called static equivalence.
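The equational theory just described can be sketched in code. This is a minimal illustrative sketch, not the formal calculus: terms are nested tuples like `("senc", m, k)`, and `normalize` applies the cancellation equations bottom-up (one pass suffices here because each rule's result is already normal).

```python
def normalize(t):
    """Normalize a term by applying the cancellation equations bottom-up."""
    if not isinstance(t, tuple):
        return t  # atomic name or constant
    t = (t[0],) + tuple(normalize(a) for a in t[1:])
    op = t[0]
    if op == "sdec" and isinstance(t[1], tuple) and t[1][0] == "senc" and t[1][2] == t[2]:
        return t[1][1]   # sdec(senc(x, y), y) = x
    if op == "senc" and isinstance(t[1], tuple) and t[1][0] == "sdec" and t[1][2] == t[2]:
        return t[1][1]   # senc(sdec(x, y), y) = x  (random-permutation flavour)
    if op == "adec" and isinstance(t[1], tuple) and t[1][0] == "aenc" and t[1][2] == ("pk", t[2]):
        return t[1][1]   # adec(aenc(x, pk(y)), y) = x
    if op == "proj1" and isinstance(t[1], tuple) and t[1][0] == "pair":
        return t[1][1]   # proj1(pair(x, y)) = x
    if op == "proj2" and isinstance(t[1], tuple) and t[1][0] == "pair":
        return t[1][2]   # proj2(pair(x, y)) = y
    return t

def eq(m, n):
    """Equality of terms modulo the equational theory."""
    return normalize(m) == normalize(n)
```

For example, `eq(("sdec", ("senc", "s0", "k"), "k"), "s0")` holds, because decryption with the same key cancels the encryption.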
Basically, we say that two such sequences are statically equivalent, or indistinguishable to an adversary, if, first, they have the same domain, the variables x1 to xn, so that the adversary has the same interface when trying to distinguish the sequences; and second, for any terms m and n, which may contain the variables x1 to xn, if they are equal when I apply phi1, then they should also be equal when I apply phi2, and vice versa. There are side conditions stating that m and n must not directly contain the secret names: the attacker is not allowed to use the secrets directly, unless he can obtain them by manipulating the variables. Let's look at this little example. Suppose I take the sequence where x1 is the encryption of s0 with a secret key k — so k here is secret — and I also explicitly divulge k through x2. On the other side, a different value s1 is encrypted. Then these frames can be distinguished by an easy test: I just compare the decryption of x1 with x2 against the constant s0. This test holds on the left-hand side but not on the right-hand side. If I instead take frames where I do not divulge the key k, then they are actually statically equivalent: there is no way of distinguishing them under the equational theory I presented on the previous slide, no equation that holds on the left-hand side but not on the right-hand side, or vice versa. Let me now give you a short example of a password protocol, to show you what these protocols look like. This is the EKE protocol, which basically wants to establish a secret key r. What happens is that a first generates a new key k, a secret key for asymmetric encryption; such a key is generated for each session.
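The distinguishing test from the example can be sketched concretely. This is an illustrative sketch, with terms as nested tuples as before: frame phi1 maps x1 to senc(s0, k) and x2 to k, while phi2 maps x1 to senc(s1, k); the adversary's test "sdec(x1, x2) = s0" separates them.

```python
def sdec(c, k):
    """Symbolic decryption: sdec(senc(x, y), y) = x, otherwise the term is stuck."""
    if isinstance(c, tuple) and c[0] == "senc" and c[2] == k:
        return c[1]
    return ("sdec", c, k)

# The two frames from the example: the key k is divulged through x2.
phi1 = {"x1": ("senc", "s0", "k"), "x2": "k"}
phi2 = {"x1": ("senc", "s1", "k"), "x2": "k"}

def attacker_test(phi):
    # the adversary's test: does sdec(x1, x2) equal the public constant s0?
    return sdec(phi["x1"], phi["x2"]) == "s0"
```

`attacker_test` holds on phi1 but not on phi2, so the frames are not statically equivalent; if x2 (the key) is removed from both frames, no such test exists and they become statically equivalent.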
Then a encrypts, using symmetric encryption now, the public key corresponding to k under the password w, and sends this to b, with whom it shares w and nothing else. b generates a fresh symmetric key r, basically some random number, and uses asymmetric encryption with the key it received to encrypt r, encrypts the result again with the password, and sends it back. Then a and b perform a three-way handshake: a generates a fresh nonce, encrypts it with r, and sends it to b; b encrypts the pair of two nonces, a's nonce and a fresh nonce of its own, with this fresh key r; and a sends b's nonce back again. In this way they make sure that they both share the same key r. This is actually a protocol that is resistant against guessing attacks, as we will see a bit later: even if I guess the password w, there is no way to be sure that I guessed the right password. I will formalize these kinds of protocols in such a variant of the applied pi calculus. I will not go through every detail, but they look like short processes like this, and I will just give you the ideas of the semantics informally. At the beginning we generate two new secrets, k and na; this is written new k, na. It just says these are fresh values with a scope, so they do not exist outside this process. When I write out of a term, it means I output the term, and as a side effect the term is added to the frame, which records what the adversary learns while interacting with the protocol. When I do an input, x1 is bound to some message that the attacker can construct from the frame, basically by applying function symbols to its variables. So the attacker can construct new terms, and these can be input.
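The EKE message flow just described can be written down in the symbolic tuple notation. This is only an illustrative transcription of the exchange as stated (names k, r, na, nb are the fresh values from the talk), not a formal model.

```python
def eke_trace():
    """The five EKE messages, as symbolic terms. a and b share only w."""
    k = "k"              # a's fresh asymmetric key
    r = "r"              # b's fresh symmetric key
    na, nb = "na", "nb"  # handshake nonces
    return [
        ("senc", ("pk", k), "w"),               # a -> b : {pk(k)}_w
        ("senc", ("aenc", r, ("pk", k)), "w"),  # b -> a : {{r}_pk(k)}_w
        ("senc", na, r),                        # a -> b : {na}_r   (handshake)
        ("senc", ("pair", na, nb), r),          # b -> a : {na, nb}_r
        ("senc", nb, r),                        # a -> b : {nb}_r
    ]
```

Note that the password w is only ever used to encrypt values that look random to the attacker (a fresh public key, an asymmetric ciphertext), which is intuitively why trial decryption with a guessed password yields no verifiable check.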
I also have conditionals, so I can test whether some equality holds, that is, whether I get back the expected values, for instance a nonce I previously created, like na in this protocol. These tests are, of course, made using equality modulo the equational theory. This is just to give you an idea of the semantics of such processes, and it generates an operational semantics. So how can we now, in this framework, model what it means to be resistant against guessing attacks? This is basically a definition that goes back to Baudet in 2005, itself inspired by some earlier definitions. We say that such a sequence of messages, such a frame phi, resists guessing attacks if two situations are indistinguishable: on one side I extend the frame phi with the password w, and on the other side I extend it with some fresh random value w'. These should be indistinguishable. Intuitively, on the left-hand side I go through my dictionary and hit the right password; on the right-hand side I make a wrong guess, w'. What the definition says is that the adversary cannot tell whether it made the right guess or the wrong guess: there is no test that lets you verify whether you guessed the right password. This is defined on frames, and now we can lift it to processes, to protocols, by saying that for any execution using the operational semantics I just illustrated informally on the previous slide, if I can execute the process A to some B, then the frame corresponding to B, phi(B), should resist guessing attacks according to this definition. That is the basic idea.
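The Baudet-style definition can be illustrated with a frame that fails it. This is a sketch under the same tuple conventions as before: the frame leaks a verifier (x1 = senc(s, w) together with s itself in x2), so extending it with the right password is distinguishable from extending it with a wrong guess.

```python
def sdec(c, k):
    """Symbolic decryption: sdec(senc(x, y), y) = x, otherwise stuck."""
    if isinstance(c, tuple) and c[0] == "senc" and c[2] == k:
        return c[1]
    return ("sdec", c, k)

# A frame that does NOT resist guessing attacks:
# the attacker holds both {s}_w and s itself.
phi = {"x1": ("senc", "s", "w"), "x2": "s"}

def confirms(guess):
    # the off-line check run against every dictionary entry:
    # decrypt x1 under the guess and compare with x2
    return sdec(phi["x1"], guess) == phi["x2"]
```

`confirms` succeeds exactly for the real password w and fails for a fresh w', so the two extended frames are distinguishable; for a frame like EKE's, no such test exists under the equational theory.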
Now, the first result is that if I use processes that do not share anything, then I can safely put them in parallel: if they use different passwords wi, one for each Ai, then the parallel composition resists guessing attacks. This is actually quite easy to show; it is not a difficult result. However, if I use the same password w for each of the Ai, this is no longer the case. Even if each one separately resists guessing attacks on w, their composition with the same password w does not resist guessing attacks. There is a quite easy construction showing that this does not hold in general. Going back to the EKE protocol I showed you previously: I keep the first steps as before — this is one of the protocols — but instead of making the three-way handshake, at the end I add one additional step where I send out the password w encrypted under the secret r. Individually, this is actually still secure. Then I have a second protocol, which at the end waits for some nonce encrypted, for instance, with r, and sends back the decryption of what it received with r. Again, individually, this protocol is fine. But when I put them together, the attacker can actually use the last message of the first protocol as the input to the second protocol, and gets back the password in plaintext. So you have a trivial guessing attack, because w will be in your frame: when you extend the frame with the right password, you can just check that this variable is equal to the one that has been added. OK. So how can we avoid this? There is one solution, which consists in tagging protocols to avoid this kind of attack.
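The composition attack reduces to a one-liner once both protocols share r. This is a sketch of just the final exchange: the first variant outputs {w}_r, and the second acts as a decryption oracle for r.

```python
def sdec(c, k):
    """Symbolic decryption: sdec(senc(x, y), y) = x, otherwise stuck."""
    if isinstance(c, tuple) and c[0] == "senc" and c[2] == k:
        return c[1]
    return ("sdec", c, k)

w, r = "w", "r"                  # shared password, shared session secret
last_msg_P1 = ("senc", w, r)     # final output of the first (individually secure) protocol
reply_P2 = sdec(last_msg_P1, r)  # second protocol decrypts whatever it receives with r
```

`reply_P2` is the plaintext password, so w ends up in the attacker's frame and the guessing attack is trivial, even though neither protocol leaks w on its own.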
The idea is to suppose that each protocol is named: it has a protocol identifier. We also have a free function symbol h, for which we have no equations; you can think of it as a hash function, an idealized hash function where you do not find collisions and cannot compute pre-images — a good cryptographic hash function. Then what I say is: if different protocols use different protocol identifiers, that is, I design each with a unique name, and if each protocol Ai resists guessing attacks on w, then I can compose them, provided that instead of w each one uses the hash of w together with its protocol identifier. In some sense, intuitively, these protocols now use different passwords, but all computed from the same given password; the different protocol identifiers add some diversity. And this actually composes. OK. But having such protocol identifiers, PIDs, is not sufficient if I want to compose different sessions of the same protocol, because all sessions of the same protocol will of course share the same protocol identifier. So the idea here is to compute a session identifier dynamically: we add a preliminary phase in which a session identifier is computed. Let me now be a little more precise about what a protocol is, so that we can talk about sessions. If I have a protocol P here, I suppose it actually consists of l different processes run in parallel, which are the different participants of the protocol; we refer to them as the roles of the protocol. And I will transform this protocol P into what I call here P-bar.
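The effect of PID tagging on the earlier attack can be sketched as follows. This is a simplified illustration (applying the tag directly where the password is used as a key): with h a free symbol, h(w, pid1) and h(w, pid2) are distinct keys, so one protocol's decryption oracle no longer helps against the other's messages.

```python
def h(w, pid):
    """Idealized hash: a free symbol with no equations."""
    return ("h", w, pid)

def sdec(c, k):
    """Symbolic decryption: sdec(senc(x, y), y) = x, otherwise stuck."""
    if isinstance(c, tuple) and c[0] == "senc" and c[2] == k:
        return c[1]
    return ("sdec", c, k)

msg_P1 = ("senc", "secret", h("w", "pid1"))  # message of tagged protocol 1
oracle_P2 = sdec(msg_P1, h("w", "pid2"))     # protocol 2 only decrypts under its own tag
```

`oracle_P2` stays a stuck term rather than the plaintext: the keys differ, so cross-protocol replay yields nothing.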
Basically, what every role does at the beginning of the transformed protocol is generate a fresh nonce, N1 to Nl, one per role. They then make a round of nonce exchanges: each Pi expects a nonce, or actually any term, as input from each of the other participants, and sends out its own nonce Ni. We then use a tag constructed from these nonces, basically pairing up all the nonces, and again replace w with the hash of this tag together with w. So now, if I take such a protocol P that resists guessing attacks on w, I can take the transformed protocol P-bar and any number p of instances of it — p times the same protocol, up to renaming of fresh values — and when I put them in parallel with the same w, they again resist guessing attacks. So I have an effective design methodology: verify just one session of the protocol, apply this transformation, and you know that the resulting protocol resists guessing attacks for an unbounded number of sessions, because the result holds for any number p of instances. You directly get this security for free. OK. Now, putting everything together: on the previous slides I showed you how to compose different protocols, so I have some inter-protocol composition, and here I have shown you how to compose different sessions of the same protocol. You can put these two methodologies together. If you just apply the theorem twice, you get a nested tag with two hash applications; but you can also slightly adapt the proofs and show that the more natural tag, just taking the hash of the protocol identifier together with the dynamically computed session tag, works as well. Let me now give you a very rough sketch of how one proves this kind of theorem.
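The session-tag transformation can be sketched as follows. This is an illustrative sketch (the deterministic `fresh_nonce` counter stands in for the roles' random nonce generation): each session pairs up the exchanged nonces into a tag and derives its key material as h(tag, w).

```python
import itertools

_counter = itertools.count()

def fresh_nonce():
    """Stand-in for a role generating a fresh nonce (deterministic for the sketch)."""
    return f"n{next(_counter)}"

def h(tag, w):
    """Idealized hash: a free symbol with no equations."""
    return ("h", tag, w)

def session_password(num_roles, w):
    """Preliminary phase: each role contributes a nonce; the paired-up
    nonces form the session tag, and w is replaced by h(tag, w)."""
    tag = tuple(fresh_nonce() for _ in range(num_roles))
    return h(tag, w)

s1 = session_password(2, "w")  # first session of a two-role protocol
s2 = session_password(2, "w")  # second session, same password w
```

Because every role injects a fresh nonce, no two sessions can compute the same tag, so s1 and s2 are effectively independent derived passwords, which is what makes the unbounded-sessions composition go through.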
First, the idea is to proceed by contradiction: you assume that there exists some attack trace on the tagged protocol, so I have a guessing attack on this trace. Then I look at this attack trace and ask: what are the different tags t1 to tk that have been computed? Some roles will have computed the same tag, some roles different tags, but I look at all of them and group them together according to whether their tags coincide or differ. So I can group them into buckets of roles that have computed the same tag. One key point is that, as each role contributes a fresh nonce, each of these buckets contains at most one instance of each role. The instances might each sit in a separate bucket, but within one bucket I can never have two instances of the same role. Next, I show that when I have a tag based on ti, that is, a key h(ti, w), I can replace it by a simpler tag, substituting for ti some fresh constant sidi. This gives me a similar, nearly identical trace, which is still executable and still exhibits a guessing attack. But this sidi now comes, somehow, magically shared: it is completely independent of the previous nonce exchange. With a replacement like this, I can completely chop off the preamble that computed the nonces, as long as the tags are different. And then I can use my disjoint composition result: since each bucket now uses, in place of w, effectively a fresh password derived from some constant, I can apply the disjoint composition result for protocols and conclude that there must be a guessing attack on one individual protocol.
And lastly, I have shown that adding tags does not introduce any new guessing attacks, so there must have been a guessing attack on the initial protocol, giving me the contradiction I wanted. As a conclusion, we have shown how to transform these protocols using tags so that they compose nicely: if different protocols each add the name of the protocol to the tag, they compose, and in the same way for sessions, I can dynamically compute a session tag and obtain compositional guarantees like this. So basically, we can now take our favorite tool, check one session of a protocol, transform the protocol, and conclude that it will resist guessing attacks even when executed in an environment with other protocols. One thing I should say: I have shown this composition for resistance against guessing attacks, but this is generally not the aim of a protocol. If you run a protocol, you want authentication or secrecy or things like that; resistance against guessing attacks is just a means to ensure them. So what we actually need is to show this kind of composition for other properties. For trace properties such as authentication, the transformations should maintain them and the composition should carry over. This is still ongoing work; the most tedious part is to specify exactly what you mean by authentication, but it should be quite straightforward. A much, much harder thing we would like to do is the composition of more general equivalence properties, like on the very first slide, where I said you can express security as a process being indistinguishable from some specification, for instance. Composing such general equivalence properties is much, much harder. OK, thank you.