Thanks for the introduction. Where's my slide? Okay, so for this talk I'm going to present a new definitional framework called indistinguishability up to correctness, INDC for short. So just in one sentence, it is a new technique for writing security definitions, or designing security games, in such a way that you don't need to attend to trivial wins, or more specifically to excluding trivial wins by the adversary. Hence the title, Simplifying Game-Based Definitions. So let's first start with a review of indistinguishability, IND for short. As we are all familiar with, indistinguishability is an adversarial advantage notion that measures how good an adversary is at distinguishing a real game G from an ideal game H. In this setting, an adversary A asks queries to one of the two games and receives responses from it. His job is to tell which of the two games he is interacting with. So the indistinguishability advantage of an adversary A against games G and H is defined to be the difference between the probabilities that he outputs one in the two worlds. Based on such an adversarial advantage, a game G is called indistinguishable from an ideal game H if every adversary with a reasonable amount of resources has only a small distinguishing advantage. This is all very familiar. But what I want to emphasize is that indistinguishability wouldn't be so useful and so important in our field if this slide were all it is about. If you go through the literature, most definitional work uses indistinguishability in the following way. The games G and H are constructed in such a way that they depend on some cryptographic scheme π. People then define all kinds of security advantages for such a scheme π as the IND advantage of the games G and H instantiated with this π. Throughout this talk I'm going to call this way of defining security the conventional way.
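To make the advantage notion concrete, here is a toy sketch. It is my own illustration, not from the talk: the games, the distinguisher, and all names are invented. It estimates the IND advantage Pr[A^G = 1] - Pr[A^H = 1] by sampling, for a repeat-query distinguisher facing a deterministic toy oracle (the "real" game) versus a random oracle (the "ideal" game).

```python
import random

# Real game G: the oracle returns m XOR k for a fixed secret byte k
# (a toy stand-in for a deterministic cipher).
def make_G():
    k = random.randrange(256)
    return lambda m: m ^ k

# Ideal game H: the oracle returns an independent random byte each time.
def make_H():
    return lambda m: random.randrange(256)

def adversary(oracle):
    # Ask the same query twice: under G the answers always agree,
    # under H they agree only with probability 1/256.
    # Output 1 means "I think this is the real game G".
    return 1 if oracle(7) == oracle(7) else 0

def estimate_advantage(trials=20000):
    wins_G = sum(adversary(make_G()) for _ in range(trials))
    wins_H = sum(adversary(make_H()) for _ in range(trials))
    return wins_G / trials - wins_H / trials

print(estimate_advantage())  # typically a value near 1 - 1/256
```

Against G the adversary outputs 1 with certainty, against H almost never, so the estimated advantage comes out close to 1: this pair of games is easily distinguishable.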
So let's first go through some examples of what conventional games look like in the literature. Here is a game that defines key indistinguishability for bidirectional ratcheted key exchange; the relevant talk was given right here by Poettering two days ago. Here is another one, which models the integrity property of a primitive called stream-based channels; it is joint work by Fischlin et al. at Crypto 2015. Yet another one models the circuit-hiding property of onion encryption. By the way, onion encryption is the kind of encryption algorithm used in anonymous routing networks like Tor and mix-nets. This is a recent work by Degabriele and Stam, I believe at Eurocrypt 2018. All these games are really complex. So maybe let's switch our mindset and start with a game that looks not that complex. Here is a game called Auth_i. It models the authenticity property of stateful authenticated encryption, the same topic as mine. It is joint work by Boyd, Hale, Mjølsnes, and Stebila at CT-RSA 2016. If you look at their code, it doesn't look that bad; at least it's comprehensible. However, just within one year of the first publication at CT-RSA, the same authors published a revision of the pseudocode of their security games on ePrint, in which they added special processing to one of the subcases. And even this revision is still not correct, because there really should be a return R statement between line five and line six; I believe it's just a typo. All these stories and complex games motivated us to think about what the essential problems with the indistinguishability paradigm are. As you can see from the previous slides, these definitions, these security games, are really complex and subtle, to a degree that they are just hard to debug and to believe.
Worse, prior work of Bellare, Hofheinz, and Kiltz showed that even with the most basic security definitions, like IND-CCA security for public-key encryption schemes, people still mess up, in the sense that they are vague about how trivial queries from the adversary are disallowed. The authors showed that just by tweaking the way trivial queries are disallowed, you can define multiple inequivalent security notions for IND-CCA security of public-key encryption schemes, and for key-wrapping algorithms as well. In summary, once a cryptographer writes down some pseudocode of security games for some security property, it is just hard for him to justify: does this game really capture what I want? Part of the reason for that is that there is no good theory for cryptographers to use for creating security definitions. In the literature we have many good tools and theories to upper-bound adversarial advantage given existing security notions, like the coefficient-H technique and expectation arguments, for example. But as for how to create security definitions, typically we are on our own. Our INDC framework is supposed to fill this gap. At a high level, it works like a definition compiler. You feed into this compiler two security games that capture what you want but really do not work, because there would be trivial winning strategies for the adversary. So the definition on the left-hand side is bogus. You live with that: you pass these security games to the compiler, and it will automatically generate two edited games for you, whose IND advantage yields the relevant, reasonable security notion, in which there is no trivial win. So what is this definition compiler? It's a process we call oracle editing. It works like this. We start with two utopian games, G and H. The games are called utopian because there are trivial winning strategies for the adversary to win them.
In addition, we provide a correctness class C, which mathematically is just a set of crypto schemes, those satisfying a certain correctness condition. The correctness condition is just the familiar functional property that a class of schemes needs to satisfy in order to work. For example, for public-key encryption, this would just be that decryption needs to be the reverse process of encryption. You pass these three building blocks to the process of oracle editing, and it outputs the edited games G-tilde and H-tilde. We then define a new adversarial advantage, called INDC advantage, shown in the middle at the bottom, against G, H, and C, simply to be the plain indistinguishability advantage against G-tilde and H-tilde. This advantage is then used as the security advantage against the underlying scheme π. Okay, so what is this oracle-editing process in particular? To illustrate it, I first need to go deeper into our game-playing model, as shown here. Security games in our model consist of an initialization procedure, oracle procedures, and a finalization procedure. These are written in pseudocode and they are stateful: they have state, so that the game state is maintained across invocations. An adversary A simply asks queries to the oracle procedures, receives responses, and then at his will outputs his own output Z, which usually equals the game outcome ω. This is how the utopian games behave. Like I said, editing needs to be done. So what is this editing? Well, we edit the utopian games by adding this yellow demultiplexer. Each time, before the response is given to the adversary, the demultiplexer computes a silencing function ψ on the current game transcript. The game transcript includes all previous adversary queries and responses. If the silencing function returns true, the oracle's response y_i is replaced by a special diamond symbol. We call this oracle silencing.
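To make the demultiplexer mechanics concrete, here is a minimal sketch, again my own illustration with invented names: a wrapper that records the game transcript and replaces a response by a diamond placeholder whenever a supplied silencing predicate fires on the transcript so far.

```python
DIAMOND = "◇"  # the special symbol that replaces silenced responses

class EditedGame:
    """Wrap a game oracle; silence responses per a predicate on the transcript."""
    def __init__(self, oracle, silencing_fn):
        self.oracle = oracle          # the utopian game's oracle procedure
        self.silence = silencing_fn   # psi: transcript -> bool
        self.transcript = []          # alternating queries and responses

    def query(self, x):
        y = self.oracle(x)
        self.transcript.append(x)
        # Evaluate psi on the transcript including the current query.
        if self.silence(self.transcript):
            y = DIAMOND
        self.transcript.append(y)     # the adversary's view is recorded
        return y

def repeat_silencer(transcript):
    # Toy predicate: silence any repeated query, since the adversary
    # already saw its answer.  Even positions of the transcript are queries.
    queries = transcript[0::2]
    return queries.count(queries[-1]) > 1

g = EditedGame(lambda x: x * 2, repeat_silencer)
print(g.query(3), g.query(5), g.query(3))  # 6 10 ◇
```

The real silencing function of the framework is more subtle than this repeat check, as the next part of the talk explains; the wrapper only shows where in the query flow the editing sits.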
Okay, so now comes the real crux of our whole framework: how should we define this silencing function? Because this really captures what we mean by trivial queries. We think a query is trivial if the adversary, based solely on the current game transcript and based solely on the fact that the underlying scheme is in the correctness class C, knows the answer beforehand. So let me repeat this, because it is very important. A query is trivial, so it should be silenced, when an adversary, based solely on the current transcript, which includes all his queries and responses up to this point, and based solely on the fact that the scheme is in the correctness class, knows the answer beforehand. Formalizing this idea, we define the silencing function so that, given the transcript t, the query is silenced if its answer is fixed across all schemes in the class C, assuming the adversary is interacting with the real game. We give defining formulas for the silencing function. I don't want to go deep into them, but I do want to point out that the silencing function here is a logical OR over a fixedness predicate. That means whenever a prior query needed to be silenced, everything afterwards remains silenced; it is a silence-and-shutdown approach. This concludes the description of INDC, but before I leave this topic, I need to mention one important caveat: the silencing function needs to be efficiently computable, or at least on the domain that matters, the transcripts that can arise in G_π or H_π. If this does not hold, the intuition that the adversary should not ask a query because he already knows the response simply would not hold. Let's summarize how we use INDC to create definitions. We first formalize the syntax of schemes π, then the correctness condition. This gives us a correctness class, the same step as in the conventional way. We next design utopian games G and H, and in doing so we don't need to attend to logic for excluding trivial winning queries, for example.
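The fixedness idea can be shown by brute force on a tiny toy class. The following sketch is my own illustration under invented assumptions: a "scheme" is any permutation of a three-element domain (its Enc map), correctness forces Dec to be the inverse, and a query is deemed trivial when every scheme consistent with the transcript gives it the same answer.

```python
from itertools import permutations

DOMAIN = range(3)

# Toy correctness class C: each scheme is a permutation of DOMAIN (its Enc map);
# correctness forces Dec to be the inverse permutation.
SCHEMES = [dict(zip(DOMAIN, p)) for p in permutations(DOMAIN)]

def answer(scheme, query):
    op, v = query
    if op == "Enc":
        return scheme[v]
    inv = {c: m for m, c in scheme.items()}
    return inv[v]  # "Dec"

def consistent(scheme, transcript):
    # Could this scheme have produced every (query, response) pair seen so far?
    return all(answer(scheme, q) == r for q, r in transcript)

def fixed(transcript, query):
    """Silence iff all schemes in C consistent with the transcript
    give the same answer to this query."""
    answers = {answer(s, query) for s in SCHEMES if consistent(s, transcript)}
    return len(answers) == 1

# After seeing Enc(0) = 2, the query Dec(2) is fixed: correctness alone
# determines its answer, so it would be silenced.  Dec(0) is not fixed.
t = [(("Enc", 0), 2)]
print(fixed(t, ("Dec", 2)))  # True:  trivial query
print(fixed(t, ("Dec", 0)))  # False: answer still varies over C
```

Enumerating the whole class is of course only feasible for a toy; the efficient-computability caveat above is exactly about replacing this brute force with something computable on the transcripts that actually arise.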
Along with C, this determines the INDC security notion we want. Finally, we need to verify that the silencing function silence_{C,G} is efficiently computable on the relevant sets of transcripts. Okay, so that concludes the INDC framework. Let me go through two examples. The first example: let's use INDC to define IND-CCA security for public-key encryption schemes, a very familiar notion. I'm going to perform this definitional process first in the conventional way, and then in our INDC way. As we are all familiar with, a public-key encryption scheme consists of two probabilistic algorithms, key generation and encryption, and a deterministic decryption algorithm. The correctness property is just that, for any message, if you encrypt it under a public key produced by the key-generation algorithm and then decrypt the result, you get back the original message. All of this is very simple. So let's try to design security games for IND-CCA security. This is the first attempt. Both the real side G1 and the ideal side H1 share initialization, key, decryption, and finalization procedures with the natural semantics, except for the encryption oracle, where on the real side you encrypt the real message, but on the ideal side you encrypt an all-zero-bit message. If we are going to give a conventional indistinguishability-based definition, we know that this is not enough: there are ways to trivially win this game. The adversary simply asks for an encryption of M, gets back C, and then decrypts C, so that by correctness, in the real world he sees the original message M, while on the ideal side he sees an all-zero-bit message. Traditionally there are multiple ways to exclude such trivial wins. You can either exclude from consideration all adversaries that make such trivial queries, or you can allow such trivial queries but, in the finalization procedure, penalize that behavior by returning zero.
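The trivial win just described can be acted out on a toy pair of utopian games. This sketch is my own illustration: it uses a one-time-pad-style symmetric toy as a stand-in for real public-key encryption, and all names are invented.

```python
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class Game:
    """Toy utopian game with Enc/Dec oracles.
    real=True is G1 (encrypt the real message); real=False is H1 (zeros)."""
    def __init__(self, real):
        self.k = secrets.token_bytes(4)  # toy stand-in for a key pair
        self.real = real

    def enc(self, m):
        pt = m if self.real else bytes(len(m))  # ideal side encrypts zeros
        return xor(pt, self.k)

    def dec(self, c):
        return xor(c, self.k)

def trivial_adversary(game):
    # Encrypt M, decrypt the result, and check whether M comes back:
    # correctness guarantees it does in the real world only.
    m = b"hi!!"
    c = game.enc(m)
    return 1 if game.dec(c) == m else 0

print(trivial_adversary(Game(real=True)))   # 1: real side returns M
print(trivial_adversary(Game(real=False)))  # 0: ideal side returns zero bytes
```

This adversary has advantage 1 against the utopian games, which is exactly why the conventional route needs exclusion or penalty logic, and why INDC instead silences the Dec(C) response: its answer is already fixed by correctness.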
Bellare, Hofheinz, and Kiltz called the first one exclusion style and the second one penalty style. Okay, this is the conventional way. What about our INDC way? We first need syntax and a correctness condition, which we already have. We need utopian games G1 and H1. These look the same as the first attempt, and the difference from the previous slide is that there is no logic for exclusion: there is no code that attends to excluding the trivial queries. All the security code here looks very natural. And that's it. C1, G1, H1 define an INDC security notion, and we have shown that the resulting INDC security definition is equivalent to the conventional one. Okay, you might say this doesn't look very promising: we only managed to remove two lines of code. So let's go to our main example, stateful authenticated encryption. There has been a lot of prior work on this; I've listed three of them. As you might imagine, they tend to have different syntax and different security goals, and, most importantly, they have relatively complex security games. So let's try to use INDC on stateful authenticated encryption. First, the syntax. We simply augment the traditional authenticated-encryption syntax with a state, both in the input and in the output. Next, we need to define correctness. Somewhat surprisingly, this question, to the best of our knowledge, was not answered explicitly in the literature, so we had to choose. If you think about what a correct stateful authenticated-encryption scheme is, the channel you apply it over really matters. If you apply it over a reliable channel, then you might only require the receiver to decrypt correctly the ciphertexts it receives in order. But if you are using stateful authenticated encryption over an unreliable channel, then you might need a stronger correctness requirement.
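The augmented syntax, state riding along both input and output, can be sketched with a minimal counter-based instantiation. This is my own illustration under invented assumptions (HMAC-based pad and tag, short messages, in-order-only correctness, the reliable-channel flavor), not the construction from the talk.

```python
import hmac
import hashlib
from typing import Optional, Tuple

def _pad(key: bytes, st: int, n: int) -> bytes:
    # Keystream for state st (messages up to 32 bytes in this sketch).
    return hmac.new(key, b"pad" + st.to_bytes(8, "big"),
                    hashlib.sha256).digest()[:n]

def _tag(key: bytes, st: int, c: bytes) -> bytes:
    return hmac.new(key, b"tag" + st.to_bytes(8, "big") + c,
                    hashlib.sha256).digest()[:16]

def enc(key: bytes, st: int, m: bytes) -> Tuple[int, bytes]:
    """Enc(K, state, M) -> (state', C): the state appears in input and output."""
    body = bytes(x ^ y for x, y in zip(m, _pad(key, st, len(m))))
    return st + 1, body + _tag(key, st, body)

def dec(key: bytes, st: int, c: bytes) -> Tuple[int, Optional[bytes]]:
    """Dec(K, state, C) -> (state', M or None): None signals rejection."""
    body, tag = c[:-16], c[-16:]
    if not hmac.compare_digest(tag, _tag(key, st, body)):
        return st, None  # reject and leave the state unchanged
    m = bytes(x ^ y for x, y in zip(body, _pad(key, st, len(body))))
    return st + 1, m

k = b"demo key"
st, c = enc(k, 0, b"hello")
print(dec(k, 0, c)[1])  # b'hello': in-order delivery decrypts correctly
print(dec(k, 1, c)[1])  # None: out-of-order delivery is rejected
```

Because the tag binds the sender's state, this toy only decrypts in order, which is precisely the reliable-channel end of the correctness spectrum discussed next.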
You might require the receiver to decrypt correctly any out-of-order ciphertext, except for replays, for example. Between these two, there can be many variants in the level of fidelity required of the receiver. We choose to model that by a level set L. This is just a set of sequences of natural numbers, where a sequence is in the level set if and only if it is considered a permissible ordering of the ciphertexts generated by the sender. At the bottom is our formalization of the correctness class, parameterized by this level set L. The next step is to give the real and ideal utopian games. As you can see, compared to prior work, our security games are much simpler, thanks to INDC, because we no longer need to write out explicit logic to exclude the trivially winning queries of the adversary. We also have a stateful AE construction that satisfies the resulting INDC security notion. So finally, let me conclude with some possible variants of the INDC framework. Everything I just talked about is really about one central question: how do we formalize the silencing function to reflect the idea of excluding trivial wins? There are many definitional choices apart from our silencing function, which is silence-and-shutdown style. We can instead silence once, but allow additional queries. There can be ideal-side editing, where we don't silence the real world, but instead edit the ideal-world responses with the real-world ones. There can be penalty-style editing, where you simply modify the finalization procedure to penalize whenever there is a fixed query in the transcript. There can even be symmetric silencing. Let me skip the details, but I want to mention that for three of these four, we show that they are equivalent in expressive power to our initial version. However, convenience does come at some price. First, definitions coming out of INDC are abstract.
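Two endpoints of the level-set spectrum can be written down directly. This sketch is my own illustration with invented names: each level set is modeled as a membership predicate on sequences of indices, saying which sender ciphertext arrived in each delivery slot.

```python
def in_order(seq):
    """Reliable-channel level set: ciphertexts arrive exactly in sending
    order, with no drops, reorderings, or replays."""
    return list(seq) == list(range(1, len(seq) + 1))

def no_replay(seq):
    """Unreliable-channel level set: any order and any drops are
    permissible, but no index may be delivered twice."""
    return len(set(seq)) == len(seq)

print(in_order([1, 2, 3]))   # True
print(in_order([1, 3]))      # False: a drop is not permissible here
print(no_replay([3, 1, 4]))  # True: reordering and drops are allowed
print(no_replay([2, 2]))     # False: a replay is never allowed
```

Any predicate between these two extremes (a bounded reordering window, drops but no reordering, and so on) yields another level set, and with it another correctness class and another INDC security notion.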
What I mean is that the edited games do not have concrete security code. However, most of the time the games can be made concrete, so that you can do conventional security analysis against them. Second, it is still a speculative proposal. We have only used it on authenticated encryption, public-key encryption, and stateful authenticated encryption. But we expect the idea to be broadly applicable, and there definitely needs to be more work on this topic to understand how generally the idea works. And that's all for the talk. Any questions?