Hi, I'm Sara Wrótniak and I'll be presenting MoSS, the Modular Security Specifications framework. This is joint work with Professor Amir Herzberg from the University of Connecticut, Hemi Leibowitz from Bar-Ilan University, and Professor Ewa Syta from Trinity College. We know that proofs are critical to ensure security, and we've been proving the security of well-defined cryptographic primitives like pseudorandom functions and encryption schemes for a long time. For example, probabilistic encryption by Goldwasser and Micali was published already in 1984. So, provable security is well established, and we also know that proving the security of a primitive is not enough to guarantee the security of a whole system or protocol using the primitive. And there have been many failures of systems which were not proven secure. So, ideally, we would like provably secure protocols as well. But unfortunately, many applied protocols are not proven secure. And admittedly, proving the security of applied protocols is not easy. We can make subtle mistakes, and the protocols themselves and their specifications, which include assumptions and goals, can be quite complex. In addition, defining specifications which accurately reflect real-world environment assumptions, for example about communication and synchronization, is not trivial. Altogether, this can discourage people from trying to prove the security of protocols. In order to prove the security of protocols, though, we need to define what we're proving. So, we need to define the protocol specifications. There are a few existing approaches to protocol specifications. The first one is the informal approach, in which we list the specifications as descriptions in words. The specifications include the assumptions and goals, and we use models to refer to the assumptions. 
So, we have adversary models like man-in-the-middle, communication models like reliable communication, synchronization models like bounded-drift clocks, initialization models like shared keys, and so on. And we use requirements to refer to the goals. So, we can have generic requirements, which can apply to multiple protocols, like indistinguishability, and we can have more specific requirements, like authenticated broadcast for a broadcasting protocol. So, going back to how we can define specifications: the first way is to write a list of descriptions of the assumptions in words, and a list of the goals. This separates the assumptions from the goals, but it doesn't facilitate provable security. Still, this is the approach that we're used to and which is used in systems papers, where the models and requirements are informally defined properties in English, which is natural and easy for people to understand. But informal arguments can be misleading and overlook things, and it can be difficult to tell if they're correct. And if each paper defines its own specifications, it can be hard to tell if the results are comparable. Second is the game-based approach, in which we define a game for each goal or for multiple goals, and the game includes the assumptions as well. So, the assumptions and goals are combined, and the game is monolithic in the sense that all the assumptions and all the goals are defined together in one game. But it does allow provable security, although we don't have general composition. The third approach is the simulation-based approach, where we show that the protocol is indistinguishable from an ideal system, or an ideal functionality in UC. An ideal functionality is a description of what the protocol should do as an ideal machine, so it specifies what outputs it should ideally have. 
And this is also monolithic, because all the assumptions, and even all the goals too, are defined together in one ideal functionality. But it does allow provable security, and we can also have secure composition, which is an important feature of UC, both for applied and theoretical protocols. Because we often build protocols out of smaller pieces, or design a cryptographic primitive using another cryptographic primitive, we want to be able to prove, based on the security of the smaller pieces, that the bigger system or protocol is secure. And UC has secure composition. And finally, in this work we present MoSS, which is game-based and uses predicates: we have model predicates and requirement predicates. In contrast to the typical game-based approach, MoSS separates the models and requirements. Notice that this is similar to the informal approach, where we can list the models and requirements. But now these are formal models and requirements, which are well-defined using predicates, not descriptions in English. And we can prove security. Currently, we do not have composability, but we have some intuition and are optimistic that it can be done. So what is MoSS? MoSS stands for Modular Security Specifications framework, and it's a game-based framework with well-defined, formal, modular specifications. It's game-based because we found it more convenient to use game-based definitions, but there is an interesting question for the future of whether we can have modular simulation-based specifications as well. So MoSS is game-based and modular, and it separates the execution process, models, and requirements. These three components are independent. And models and requirements are themselves modular, so we can, for example, combine models together easily. It'll be clearer what I mean by combine after I define models and requirements in MoSS. 
Notice that this modularity has several benefits. We can have gradual development of specifications, which can be useful because oftentimes we first simplify a complex problem and then gradually make it more realistic. So we can, for example, start with a stronger model and prove a weaker requirement, and then work towards a weaker model and prove a stronger requirement. Maybe the problem was intentionally simplified, or the assumptions and goals changed over time; it can be convenient to reuse previous definitions and only change what's needed. And we can reuse models and requirements across works, to save time and space and to make results easier to compare, which may not be so easy with monolithic games or ideal functionalities in UC, and even more so with the informal approach. So now you know what MoSS is. Let me tell you a bit more about how MoSS works, starting with the execution process and then models and requirements. This is a little diagram of the MoSS execution process. The execution process, in the gray box, interacts with the adversary at the bottom and the protocol at the top, which are both represented as stateless functions. The execution process maintains their state and gives them their state when they're invoked. We give the parameters, params, on the left to the execution process, which can include the security parameter, the key length, and other parameters. Then the execution is run, and it's controlled almost entirely by the adversary. The adversary can choose the number of entities, and the inputs, operations, and clock values in each round, and can see the outputs. The execution is basically a sequence of events. In each event, the adversary interacts with the protocol and chooses the input, operation, and clock values. The protocol is run with these inputs and the entity's state, and returns an output and a new state. The output is given to the adversary. And this can repeat for as many events as the adversary wants. 
Then at the end, the execution process outputs a transcript T, which contains values from the execution. So the adversary can do a lot, and this is intentional, because we don't enforce models as part of the execution process. Instead, we define models using predicates over the transcript, so we can limit what is allowed in the transcript. This way, we can reuse the same execution process in different works, which makes writing and understanding works easier. This is in contrast to, for example, the typical game-based approach, where the execution process is also defined as part of the game. Also, the execution process is simpler, which is one of our goals: we want it to be understandable. We also have a formal description of the execution process in pseudocode. It's not that complex, and you can see it in the paper. I just want to draw your attention to lines 12 and 13, where you can see that values are saved to the transcript T and then the transcript is returned. It includes the entities, operations, inputs, clock values, entity states, and other values. Then you can look at these and place well-defined restrictions on them in models and requirements. So, we've talked a lot about models, but they may seem abstract. Let's look more closely at what we mean by models in MoSS. First, on the left, we have examples of models we can define. It's kind of like a menu, where you can pick and choose the ones that you need. Each model is a clearly and independently defined piece, or item on the menu, and they're defined using predicates. So you can have adversary models, like man-in-the-middle, Byzantine, or threshold; communication models, like authenticated communication or reliable communication; clock models, like bounded drift or synchronized clocks; and secure key initialization models. 
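The adversary-driven event loop just described can be sketched in a few lines of Python. This is only a toy illustration: the function names, the adversary's "phases", and the event tuple layout are all my own assumptions, not the paper's actual pseudocode.

```python
# Toy sketch of a MoSS-style execution process. The adversary drives the
# execution: in each event it picks an entity, an operation, an input, and
# a clock value; the protocol runs, and its output goes back to the
# adversary. Everything is logged to a transcript, returned at the end.

def execute(protocol, adversary, params):
    state = {}        # per-entity protocol state, kept by the execution process
    transcript = []   # the transcript T returned at the end
    adv_state = adversary("init", params, None)
    while True:
        # the adversary chooses the next event, or None to stop
        adv_state, event = adversary("next", params, adv_state)
        if event is None:
            break
        entity, operation, inp, clock = event
        # run the stateless protocol function on this entity's saved state
        out, state[entity] = protocol(state.get(entity), entity,
                                      operation, inp, clock)
        transcript.append((entity, operation, inp, clock, out))
        adv_state = adversary("observe", out, adv_state)  # adversary sees output
    return transcript
```

Note how models are nowhere in this loop: the execution process just records what happened, and the restrictions come later as predicates over the returned transcript.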
We enforce a model by looking at the transcript of the execution and checking whether the adversary satisfies the model based on the transcript. I'll go over the definitions more in a moment, but basically, when we assume a model, we consider only adversaries that satisfy the model, so only adversaries for which the transcript satisfies the model with sufficient probability. Models are independent of the execution process and requirements, and they're also independent of other models. So a fault model and a synchronization model can be completely separate. We can also include multiple submodels in a model, so models are easy to combine, and we can reuse the definitions of models across systems and works. More specifically, a model is a set of pairs (π, β), which we call specifications, where π is a predicate and β is a base function, which specifies the probability with which the adversary is allowed to win against the predicate. And since models are sets, we can combine them by taking their union. The reason for the base function is that we might want different predicates to hold with different probabilities. For example, for confidentiality we might want to allow the adversary to win with probability one half, and for authenticity we might want to allow probability 2^-L if we're using L-bit tags in the protocol. For others, we might just want negligible or zero probability of winning, so we can use the constant zero. We can define these probabilities easily in the base functions. So how do we define an adversary satisfying a model? For the case where the model includes just one specification, we say the adversary A satisfies model M, which includes this one specification (π, β), if for every protocol, the probability that the predicate over the transcript is false is at most negligibly greater than the base function. 
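The "model as a set of specifications" idea can be made concrete in a small sketch. All names and the representation here are hypothetical; in particular, the single-run `holds` check below ignores the probabilistic comparison against β, which the real definition requires.

```python
# Toy sketch: a model as a set of specifications (pi, beta), where pi is a
# predicate over the transcript and beta bounds the probability with which
# the adversary may violate pi.

def beta_zero(params):
    # base function allowing only zero (up to negligible) violation probability
    return 0.0

def pi_nonempty(transcript, params):
    # toy predicate: the transcript contains at least one event
    return len(transcript) > 0

MODEL_NONEMPTY = frozenset({("nonempty", pi_nonempty, beta_zero)})

def combine(model_a, model_b):
    # models are sets of specifications, so combining them is just set union
    return model_a | model_b

def holds(model, transcript, params):
    # single-run check: do all predicates in the model hold on this transcript?
    # (the probabilistic part, comparing against beta, is omitted in this toy)
    return all(pi(transcript, params) for (_name, pi, _beta) in model)
```

The set representation is what makes combining models trivial: assuming two models together is literally their union, with no need to rewrite either one.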
T here is the transcript output by running the execution process with the adversary A and protocol P. And by negligibly, I mean negligibly in the length of the parameters, params. Then, if there are multiple specifications, we just check this for each one of them. As an example, here's a model which assumes bounded clock drift. The base function is the constant zero here, so the adversary should never break this, or only with negligible probability. The predicate, in Algorithm 1, checks that the local clock values, which are set by the adversary during the execution, are always within Δclk of the real time, which is also set by the adversary. It also checks that the real time is monotonically increasing over the execution. So it is simple, small, and reusable, and there are many situations where you might want to use this model to assume bounded-drift clocks. Okay, so let's look more closely at requirements. On the left, we have examples of requirements we can define. Again, it's a nice menu. We can have generic requirements that can be used for different protocols, like indistinguishability, no false positives, and verifiable attribution. You can have PKI requirements; this work actually began after trying to use the game-based and simulation-based approaches for PKI specifications, which turned out to be impractical, so we designed MoSS. So you can have accountability, transparency, and privacy requirements. And we can also have specific requirements for specific protocols, like, for example, authenticated broadcast for a broadcasting protocol. Requirements are defined just like models, and we check that a protocol satisfies a requirement under some specific model. We basically consider adversaries which satisfy the model, and again, we look at the transcript to see if the requirement is satisfied with sufficient probability. 
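The bounded-clock-drift predicate mentioned above might look roughly like this sketch. The event layout (`"real"` and `"clk"` fields) is an assumption made for this toy; the paper's Algorithm 1 works over the actual transcript fields and differs in details.

```python
# Hedged sketch of a bounded-clock-drift predicate over a transcript.
def drift_predicate(transcript, delta_clk):
    prev_real = None
    for event in transcript:
        real_time = event["real"]    # real time, chosen by the adversary
        local_clock = event["clk"]   # entity's local clock, also adversary-chosen
        # local clocks must stay within delta_clk of the real time
        if abs(local_clock - real_time) > delta_clk:
            return False
        # real time must not decrease over the execution (a simplification
        # of the "monotonically increasing" condition in the talk)
        if prev_real is not None and real_time < prev_real:
            return False
        prev_real = real_time
    return True
```

A predicate this small is easy to reuse: any protocol analysis that assumes bounded-drift clocks can include the same specification in its model instead of redefining the assumption.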
Since the adversary controls the execution, by checking that the adversary satisfies a model we are effectively checking that certain assumptions hold during the execution. So this is how we specify requirements for a protocol under a model. Requirements are well-defined using predicates, like models. They can be compared, and they can include sub-requirements, too. And the generic requirements can be reused. Just like for models, a requirement is a set of pairs (π, β), which we call specifications. So these are pairs of predicates and base functions. For the case where the requirement includes one specification, we say that a protocol P satisfies requirement R, which includes this one specification, under model M, if for every adversary A which satisfies M, the probability that the predicate over the transcript is false is at most negligibly greater than the base function. And T is, again, the transcript output by the execution with A and P. As an example, here is a delta-transparency requirement, which is a PKI requirement. Intuitively, we say that a certificate attested as delta-transparent must be available to all interested parties within delta time of its transparency attestation being issued by a proper authority. Below is the delta-transparency predicate. Notice that this predicate is more complicated than the clock drift one, but we can use sub-predicates in this predicate. So we have one to check that an entity is honest, one to confirm a public key, and one to verify a certificate attestation. So you can also have modularity in predicates, and you can reuse the smaller predicates. For example, you can reuse the honest-entity predicate in other PKI requirement predicates. So, as I've been saying, MoSS supports modularity of models and requirements, and we have some lemmas which formalize the intuitive modularity properties you might expect. 
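The way a larger requirement predicate reuses sub-predicates can be illustrated with a toy. Everything here, the names, the transcript layout, and the simplified logic with the delta timing omitted, is hypothetical; the paper's delta-transparency predicate is considerably more involved.

```python
# Toy illustration of predicate modularity: a larger PKI-style predicate
# composed from small, reusable sub-predicates.

def honest(transcript, entity):
    # sub-predicate: entity was never marked corrupt in the transcript
    return not any(ev.get("corrupt") == entity for ev in transcript)

def attested(transcript, cert):
    # sub-predicate: some event records a transparency attestation of cert
    return any(ev.get("attest") == cert for ev in transcript)

def available_to_all(transcript, cert):
    # sub-predicate: every monitor in the transcript has seen cert
    monitors = [ev for ev in transcript if ev.get("role") == "monitor"]
    return all(cert in ev.get("seen", ()) for ev in monitors)

def transparency_predicate(transcript, cert, authority):
    # composite predicate: if an honest authority attested cert as
    # transparent, then cert must be available to all monitors
    if honest(transcript, authority) and attested(transcript, cert):
        return available_to_all(transcript, cert)
    return True  # vacuously satisfied otherwise
```

The point is that `honest`, `attested`, and `available_to_all` can each be reused in other PKI requirement predicates, just as the talk describes for the honest-entity predicate.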
So we can know some properties about combined, stronger, or weaker models and requirements. I'll just talk about two of them here. The first one is the requirement model monotonicity lemma, which says that for any two models where M is a subset of M̂, so M̂ is the stronger model and M is the weaker model, and any requirement R: if the protocol satisfies the requirement under the weaker model, then it must satisfy it under the stronger model. The second lemma is the requirement union lemma, which says that for any two models M and M′ and any requirements R and R′: if we take the union of the two models and the union of the two requirements, and if the protocol satisfies the first requirement under the first model and also satisfies the second requirement under the second model, then it satisfies the combined requirement under the combined model. You can see the paper for more lemmas and proofs. So let me describe an analysis example using MoSS. We have these separate, focused models and requirements, and this facilitates gradual protocol development and analysis. As an example, we analyzed a simple authenticated broadcast protocol, which I call P here. We did it in three steps, under three models. In the first step, we used a model that assumes secure key-sharing and initialization assumptions, and we showed that the protocol ensures authenticity under this model. In the second step, we also assumed bounded clock drift. We didn't define a whole new model; we used the union of the model from the first step and the bounded clock drift model. Then we showed that the protocol also achieves freshness, so authenticity and freshness under this stronger model. And in the third step, we also added assumptions about bounded-delay communication and showed that the protocol ensures correct bounded-delay delivery of the broadcast messages under this stronger model. 
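The two lemmas can be written compactly. The notation P ⊨_M R, shorthand for "protocol P satisfies requirement R under model M", is my own for this summary, not necessarily the paper's.

```latex
% Requirement model monotonicity (M is the weaker model, \hat{M} the stronger):
M \subseteq \hat{M} \;\wedge\; P \models_{M} R
    \;\Longrightarrow\; P \models_{\hat{M}} R

% Requirement union lemma:
P \models_{M} R \;\wedge\; P \models_{M'} R'
    \;\Longrightarrow\; P \models_{M \cup M'} R \cup R'
```

The second lemma is exactly what justifies the three-step broadcast analysis: each step proves one requirement under one model, and the union lemma combines the steps into a single guarantee under the combined model.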
So let me mention three additional features of MoSS which I haven't discussed so far. The first one is that the execution process can be extended. We define a set of execution process operations, which we call X, and we can add operations which the adversary can use to this set. This adds functionality which the basic execution process doesn't provide or support, for example entity corruptions, confidentiality, and shared keys. One reason for this design is that it keeps the basic execution process simple, so that it can be understood by people not used to formal models and definitions. And sometimes we don't need the extra functionality. For example, we could have added confidentiality to the basic execution process, but it's not always needed, say for protocols which only care about authentication and not confidentiality. So we left this to the full execution process, using these X operations. And the full execution process includes everything that the basic execution process has. Second, MoSS also supports concrete security, using a small extension to the execution. The concrete security definitions for model satisfaction and requirement satisfaction are quite similar to the asymptotic ones, but they include concrete bounds. This is important to provide support for applied protocols, so that we can have concrete bounds and concrete values of parameters. And third, we can also ensure polynomial-time execution. We have this problem that even if the adversary and protocol are polynomial-time, the total runtime might not be polynomial, because over the execution there's this interaction where outputs become inputs. So the inputs can keep growing, and the adversary or protocol can eventually take exponential time in the size of the original parameters. This should be prevented for asymptotic analysis, and we have addressed it in MoSS. MoSS does have some limitations, and we have some areas for future work. 
The first one is that currently we don't have composability like UC does, and this is a useful feature for protocol design and analysis. Actually, composability also provides modularity, but it's modularity for protocol design rather than for the specifications. So it would be great to have this in MoSS too, and as I said, we have some intuition and are optimistic that it can be done. The second area is that MoSS is game-based, but some definitions seem to require simulation, like zero knowledge. As I said, there is the open question of whether we can have modular simulation-based specifications as well. These could be modular specifications in simulation-based frameworks, and/or an extension of MoSS to support simulation-based specifications. And third, we don't have automated tools for computer-aided analysis. These could be developed either for translating MoSS specifications to other forms supported by such tools, and/or as tools for MoSS directly. In conclusion, MoSS facilitates modular security specifications for protocols. It allows provable security for practical protocols, which may assume complex models, including communication, synchronization, fault, and other assumptions. And it allows the reuse of model and requirement definitions across protocols and across works, which can also make it easier to compare works. Of course, please see the paper for details and proofs. And thank you.