Hi, my name is Julia Hesse. I work at IBM Research in Zurich, and I will be talking about the (ir)replaceability of global setups, or how (not) to use a global ledger. This is joint work with Christian Badertscher of IOHK and Vassilis Zikas of Purdue University. Our work is about pitfalls in analyzing the security of protocols that run in concurrent systems. So let me first give you some context. When we define security of a protocol between Alice and Carol, we ideally do not look at the protocol in isolation, but we also want to capture the context of the protocol execution. For example, we want to specify how the adversary can influence the run of the protocol. For even stronger security, we also want to consider the protocol's environment: the behavior of other protocols that are accessed by ours, for example a public-key infrastructure, and even completely unrelated protocols that might impact our protocol run. For a protocol run over the internet, the more context we capture, the more relevant our analysis becomes. Now, essentially, we have two options for how to actually define security of a protocol. The first is so-called game-based security notions, which are good for checking specific properties of a protocol, for example the inability of the attacker to produce a signature forgery. The second is simulation-based security notions, which are good for specifying the expected behavior of a protocol in arbitrary contexts. In this talk, we focus on simulation-based security. Let's first do a quick recap. Simulation-based security is a real/ideal notion in which a real run of a protocol rho, here on the left-hand side, has to look like an idealized version of it. You see the real execution on the left and the ideal one on the right.
The idealization is captured by a so-called ideal functionality F, which you can simply imagine as code run by a trusted party: it takes secret inputs of parties, computes protocol outputs from the inputs, and then securely hands parties back their outputs. The protocol rho is said to securely realize such a functionality F if all attacks that can be conducted against an execution of rho in the real world can be simulated in the world where the ideal functionality F is executed. You can see here that the simulator takes the place of the adversary in the ideal world, which is why the simulator is often also called the ideal-world adversary. The simulator essentially needs to simulate the effect of an execution of rho also on other protocols, for example the PKI, to ensure that the state of the PKI is indistinguishable from its counterpart in the real world over here. The distinguisher who tests the simulation is allowed to determine the inputs of the parties. It is also allowed to see the outputs and, most importantly, to interact with the adversary, which is A in the real world and the simulator in the ideal world. For example, the distinguisher might ask the adversary about the state of the PKI. The protocol is said to be secure, or to securely realize the functionality F, if the distinguisher cannot tell which world it runs in: either with the real protocol rho and the real-world adversary, or with the ideal functionality and the simulator. To define security of our protocol in a simulation-based framework, we need to specify an ideal functionality that captures the security and functionality of the primitive in an abstract way. I did this here, as an example, for the case of commitments. This commitment functionality F_COM essentially carries out an ideal commitment scheme as it would be performed by a trusted party.
Essentially, it records a message from Alice, and whenever Alice sends an opening command, it sends the message to Carol. The functionality also specifies which attacks on the commitment scheme are unavoidable even in an ideal world by providing this adversarial interface here to the adversary. As you can see, we define a secure commitment by only notifying the adversary that the protocol steps took place. Most importantly, we ensure that the commitment is hiding by not leaking the message to the adversary, and by not letting the adversary influence the message that is output to Carol. It should be fairly obvious that any extension to this adversarial interface makes the simulation, and hence the proof, easier, but at the same time it of course weakens the security statement. This relates to the following artifact of simulation-based security. A protocol rho that securely realizes the commitment functionality from the last slide also securely realizes an insecure, for example non-hiding, commitment functionality where the message is leaked to the adversary. This is essentially because the simulator from the former statement also works for the latter: the simulator will simply ignore knowledge of the message that it obtains from the functionality. This means that if a protocol rho securely realizes the commitment functionality F_COM from the previous slide, then it also securely realizes any functionality that is strictly weaker than F_COM. Of course, the latter statement is not very interesting because it is quite weak, but it demonstrates that simulation-based security is what we will, in this talk, call an "at least as secure" notion: we demonstrate that the protocol is at least as secure as the ideal functionality. I also want you to note that the gap between both sides might be arbitrarily large. Simulation-based notions have a great benefit over game-based notions: they come with composition theorems.
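The trusted-party behavior just described can be made concrete in a short sketch. This is a minimal, illustrative Python rendering of such an ideal commitment functionality; the class and method names (`FCom`, `commit`, `open`, `adversary_log`) are my own and do not come from any concrete UC framework.

```python
# Minimal sketch of an ideal commitment functionality as described above.
# All identifiers here are illustrative, not from any formal framework.

class FCom:
    """Trusted-party code: hiding and binding hold by construction."""

    def __init__(self):
        self.message = None        # recorded internally, never leaked
        self.committed = False
        self.adversary_log = []    # the (deliberately weak) adversarial interface

    def commit(self, sender, message):
        if self.committed:
            return
        self.message = message     # stored only inside the functionality
        self.committed = True
        # The adversary only learns *that* a commitment happened,
        # not the message itself -- this models the hiding property:
        self.adversary_log.append(("committed", sender))

    def open(self, receiver):
        if not self.committed:
            return None
        self.adversary_log.append(("opened", receiver))
        # The adversary cannot alter what Carol receives:
        return self.message


f = FCom()
f.commit("Alice", "my secret bid")
# The adversary's view so far is only ("committed", "Alice"):
leak = f.adversary_log[0]
out = f.open("Carol")              # Carol obtains the original message
```

Any extension of `adversary_log` entries with more information (say, the message itself) would model a weaker, e.g. non-hiding, functionality in the sense discussed above.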
So let me show how such a theorem can be applied. Assume you designed a voting protocol rho which uses a commitment scheme as a building block. You conducted a security analysis which demonstrates that your protocol securely realizes the ideal voting functionality F_VOTING, and in this proof you abstracted away the commitment scheme that is used by the protocol by simply using the ideal commitment functionality from the slide before. What you also do is search the literature and find, for example, that Pedersen commitments securely realize the ideal commitment functionality. Now, the composition theorem tells you that you can essentially combine both proofs while preserving your original statement, namely that rho is a secure voting protocol. This is what we would call a modular security analysis, and it essentially saves you from implicitly proving security of Pedersen commitments while you prove security of your voting protocol. Now, remember what I said before about secure realization translating to "at least as secure". This is actually a key component in the proof of a composition theorem. On this slide, we have exactly the same setting as before, with a voting protocol rho which uses a commitment protocol as a building block. The commitment protocol pi securely realizes F_COM, which translates to: pi is at least as secure as F_COM, or pi admits fewer attacks than the ideal commitment functionality. That means that if we replace the commitment functionality in our security statement by the protocol pi, the adversary A' here becomes restricted, because it can essentially mount fewer attacks against pi than it could against F_COM. And for this restricted adversary, it seems quite intuitive that we can find a simulator, which we can now essentially combine from this simulator here and the simulator for the commitment here.
So it is really the restricted number of attacks, compared to the original security proof, which lets us in general replace protocol building blocks by more secure realizations. That essentially concludes our recap of simulation-based security and composition theorems. Our work is about the replacement of specific types of protocol building blocks that are commonly called setups. You see a setup here. They are usually abstracted as ideal functionalities and can be called by the parties running the protocol, Alice and Carol here. You might be familiar with some of them, for example a public-key infrastructure, a bulletin board, a common reference string or a random oracle. There are essentially two ways of modeling protocol setups. First, there are normal or local setups that are exclusively available to the protocol that we consider: only the protocol rho, of which we want to prove security, can access this ideal common reference string (CRS) functionality. The state of the CRS, for example at which point it is set or how often it was retrieved by the parties, is then only manipulated by the protocol rho in the real world. In the ideal world, the simulator consequently has to simulate the impact of only rho on the CRS functionality. This is actually a great relief for the simulator, because it can now completely control the CRS: it can essentially choose it freely, it can even plant a trapdoor. Such local setups really never make the life of the simulator harder, and often they make it much easier to find working simulators. In many concrete simulation-based frameworks, there exist various impossibility results about functionalities that do not even have secure realizations in the absence of any local setup. Commitments are one example: they usually require at least a CRS to achieve simulation-based security.
So as I said, this paper is about setups, but it is really about replacing setups by another protocol. On this slide, you could for example ask whether it is possible to replace the ideal CRS functionality F_CRS, which essentially chooses a string at random and gives it to everybody who asks for it. You could ask whether you could replace this trusted building block with an interactive protocol that lets Alice and Carol somehow agree on a truly random string. The answer is actually already given by the standard composition theorem that I showed you before: we can safely replace the local F_CRS by any protocol that securely realizes it, just as we did for an ideal commitment building block. So a local setup is really nothing else than a normal abstract protocol building block, and its replacement is possible using standard composition. There is another type of setup, which is the global setup. I will denote global setups with a curly letter G on the slides. The difference to a local setup is that global setups can essentially be accessed by other protocols as well. Consider, for example, the public-key infrastructure that is used for both digital signing and for secure communication, or, a bit more subtly, a CRS that is used by many concurrent executions of the very same protocol rho. So the definition of a global setup is really that it is not local to one protocol. Considering our list of example setups from before, we can see that in reality most of them are likely to be reused by many different applications; it is essentially too expensive to create an individual copy for each protocol run. Even more, once we deploy these setups in practice, they do not have any control over how users use the information that they provide. For example, consider the PKI: a user registers her key pair, but how the user afterwards uses the secret key is something the PKI has essentially no influence on.
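The CRS functionality just mentioned is simple enough to sketch directly. This is a hypothetical Python rendering; note that whether the setup is local or global is not visible in the code itself, it is a property of who is permitted to call `retrieve`: only the protocol rho, or arbitrary other protocols too.

```python
import secrets

# Illustrative sketch of an ideal CRS functionality: sample a string
# uniformly at random once, then hand the same string to every caller.
# Class and method names are my own, not from any concrete framework.

class FCrs:
    def __init__(self, length=32):
        self._crs = None
        self._length = length

    def retrieve(self, caller):
        if self._crs is None:
            # Sampled lazily on first retrieval, uniformly at random:
            self._crs = secrets.token_bytes(self._length)
        return self._crs           # identical value for all callers


crs = FCrs()
assert crs.retrieve("Alice") == crs.retrieve("Carol")
```

For a local setup, a simulator may simply replace this sampling with a string of its own choice (e.g., one with an embedded trapdoor); once other protocols may also call `retrieve`, that freedom is gone, which is exactly the local/global distinction discussed below.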
And I probably don't even have to mention the number of applications that have been proposed to run with, well, if you ignore forking, the single Bitcoin blockchain. In this talk, we are going to focus on the following question: is there any way to replace a global setup in a security statement by an interactive protocol? Let's first see why that might be more difficult than with local setups, which, as we have seen, can be replaced using the standard composition theorem. Here is the topology of a simulation-based security statement where the CRS is modeled as a global setup. First, we see that now we have arbitrary other protocols accessing the CRS here. And in the simulation, we do not get to completely make up the CRS, as we could before when the CRS was local to the protocol. The fact that it might already have been retrieved by other protocols means that we cannot plant a trapdoor anymore. So in some sense, the CRS might pre-exist, and it might be in an arbitrary state. When going from local to global, we clearly make the life of the simulator harder, and consequently the security statements proven with respect to global variants of the setup are usually stronger than statements proven with respect to local variants. Okay, so now let's look at what happens if we attempt to replace a global setup. Let's say we designed a protocol for online poker and have proven that it securely realizes some ideal poker functionality, assuming the parties have access to a global ideal ledger functionality. We also know from the literature that, for example, the Bitcoin blockchain securely realizes our ideal ledger. So now we could attempt to replace this global ideal ledger in our security statement. And please note that here the replacement does not only happen in the real world, but also in the ideal world.
So in particular, in the ideal world, the interface of the simulator at the ideal setup might have changed. Before, the simulator was interfacing with the G_LEDGER functionality; now this connection is essentially gone and replaced by a new adversarial interface, namely that of the Bitcoin protocol. I told you before that the realization of G_LEDGER, the Bitcoin blockchain in this example, is at least as secure as the ideal ledger. This means that there are fewer attacks on Bitcoin than on the ledger: the adversarial interface might simply shrink, and our simulator might run into failure. So this actually is a problem: depending on how extensively our simulator has used the adversarial interface of G_LEDGER, and also depending on how large the security-wise difference is between Bitcoin and G_LEDGER, we cannot do the replacement. And normally, ideal setups are really abstractions of the primitive and differ quite a lot from the protocols realizing them. Take the ledger, for example: an ideal ledger lets the adversary fully determine the order of transactions in each block, but nobody so far has found an attack on Bitcoin that achieves arbitrary reordering when the majority of mining resources is honest. If you watched carefully two slides ago, you might have spotted something that I ignored so far: when we replace the global setup, the adversary in the real world gets restricted as well. Unfortunately, this does not really help us, because when we replace a setup, we want to preserve the original statement, meaning that we want our old simulator to work for the replaced setting as well. In particular, we do not want to verify whether the restricted simulator works for the restricted adversary; otherwise we might as well do the proof from scratch. Okay, that was quite bad news regarding the replacement of global setups. Does it mean that we cannot replace them at all? It turns out we can.
In 2014, Canetti et al. showed that a global setup can be replaced by a protocol that is equivalent to it. Equivalent here means that it is equally secure, instead of, as before, at least as secure. The intuition is kind of obvious: equivalence implies that the adversarial interface does not change, and hence the simulator cannot run into failure and still works for the replaced setting. Okay, this result might seem a bit useless at first. Why would we want to replace a setup by an equivalent setup? But it actually has its merits. If you have proven your protocol with respect to a global setup, you can use this composition theorem of Canetti et al. to securely deploy your protocol on the internet and simply let a trusted authority run code that is functionally and security-wise equivalent to the global setup. In practice, this could be, for example, a centralized PKI service offered by a trusted cloud provider, a state lottery that is live-streamed for people to watch, or a government running a bulletin board. Okay, so far so good. But there is one notable example where such equivalence replacement has essentially no merit, and this is blockchains. As you might know, we by now have countless blockchain protocols providing us with publicly or privately accessible, immutable transaction ledgers. And even beyond my not-so-serious example of online poker before, there are many serious applications for this new type of data structure. If you check the literature, all of these have a security analysis in a simulation-based framework carried out with respect to a global ideal ledger functionality. Now, the whole point of the blockchain era is to not implement the ideal ledger by a trusted authority, but essentially by a decentralized interactive protocol emulating such central trust.
If we want any proven security guarantee of these applications to be meaningful in practice, we need to be able to replace the global ideal ledger by one of these blockchains, which are all not equivalent to the ideal ledger, so we cannot use the equivalent replacement from the previous slide. So in our paper, we investigate when global replacement by non-equivalent protocols is possible. I hope that from what I showed you before, the intuition is now quite clear: the replacement works essentially when the simulator is not disturbed by it. We give two new composition theorems that allow replacing global setups by stronger protocols. The first one works for what we call agnostic simulations. This is essentially a simulator who only accesses the ledger as an honest party would: the simulator of the security proof for the poker protocol, for example, does not make use of the adversarial interface of the ledger; it essentially just submits transactions on behalf of Alice and Carol. For global setups which do not provide any extra adversarial interface, this theorem yields black-box replacement, meaning that we do not even have to look into the simulator of the underlying statement. Otherwise, the theorem is non-black-box, and it really depends on the simulator of our poker security proof whether we can replace the global ledger or not. Our second composition theorem can capture an even broader setting, but its preconditions are more tedious to check. We define a set of so-called admissible attacks, which are attacks that are allowed on both the global ledger and Bitcoin. For example, it might be possible for a network adversary to postpone the inclusion of an honest transaction in Bitcoin by one block, and hence we can allow such postponing also to our simulator. When we replace the global ledger functionality by Bitcoin, the simulator is still able to conduct this attack, and the simulation does not fail at this point.
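To make the one-block-postponement example a bit more tangible, here is a toy sketch of a ledger whose adversarial interface offers exactly that attack and nothing more. This is purely my own illustration of the idea, not a formalization from the paper; all names (`ToyLedger`, `adv_postpone`, `next_block`) are hypothetical.

```python
# Toy illustration of an "admissible attack": the adversarial interface
# of this ledger only allows postponing a pending transaction by one
# block. If (by assumption) the same attack is possible against the real
# blockchain, a simulator that uses only this interface is not disturbed
# when the ideal ledger is replaced by the real protocol.

class ToyLedger:
    def __init__(self):
        self.pending = []        # list of [tx, remaining_delay]
        self.blocks = []

    def submit(self, tx):
        """Honest interface: queue a transaction for the next block."""
        self.pending.append([tx, 0])

    def adv_postpone(self, tx):
        """Adversarial interface: delay tx by exactly one block, once."""
        for entry in self.pending:
            if entry[0] == tx and entry[1] == 0:
                entry[1] = 1

    def next_block(self):
        """Produce the next block from all non-delayed pending txs."""
        block = [e[0] for e in self.pending if e[1] == 0]
        for e in self.pending:
            if e[1] > 0:
                e[1] -= 1
        self.pending = [e for e in self.pending if e[0] not in block]
        self.blocks.append(block)
        return block


led = ToyLedger()
led.submit("tx1")
led.submit("tx2")
led.adv_postpone("tx2")          # within the admissible attack set
led.next_block()                 # tx2 is pushed to the following block
led.next_block()
```

An ideal ledger allowing arbitrary reordering would offer a much richer adversarial interface than this, which is exactly the gap that makes naive replacement fail.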
So in some sense, we need to identify attacks where the global ledger functionality and Bitcoin do not differ significantly, and then we can essentially unlock all those attacks and allow our poker simulator to use them in the simulation as well. This concludes the presentation of our results. I do want to point out that while conducting this analysis, we had to spend a considerable amount of time looking into global setups, and our paper provides lots of insights into these types of setups, even beyond replacement. So I want to give you one example of this. Assume that you invented a blockchain-based timestamping protocol and you want to demonstrate its security. You would of course model security with respect to a global ledger, because you want to be more realistic in your analysis, and then you attempt to realize, say, an ideal timestamping functionality F. On the slide, I only show the ideal setting here that your real timestamping protocol needs to be indistinguishable from. Obviously, secure timestamping needs to preserve the order of things, and you ensure this in the functionality F_TIMESTAMP that you write by essentially not letting the adversary reorder any entries. However, there are many ideal ledger functionalities in the literature which let the adversary reorder transactions in a limited fashion, say within the current block. So the simulation in your security proof of your timestamping protocol could essentially exploit this weakness of the global ledger. This means that such a security notion, F_TIMESTAMP combined with the global G_LEDGER, can potentially be realized by a very insecure timestamping protocol, because it is not really timestamping if you can shift the order of things.
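The pitfall just described can be demonstrated in a few lines. This is a toy illustration of my own making, not code from the paper: a ledger that lets the adversary pick the order of transactions within the current block gives no order guarantee to a timestamping scheme built naively on top of it.

```python
# Toy illustration: if the underlying ledger allows intra-block
# reordering by the adversary, then "timestamp = position in the
# ledger" does not preserve submission order. Names are hypothetical.

def stamp_via_ledger(events, adversarial_order):
    """Events are submitted in list order; the adversary chooses
    their order inside the block via `adversarial_order`."""
    block = [events[i] for i in adversarial_order]   # intra-block reordering
    return {event: position for position, event in enumerate(block)}


stamps = stamp_via_ledger(["doc_A", "doc_B"], adversarial_order=[1, 0])
# doc_A was submitted first, yet ends up with the later timestamp:
assert stamps["doc_A"] > stamps["doc_B"]
```

So a timestamping notion that forbids reordering in F can still be "realized" on top of such a ledger only because the simulator may exploit the ledger's reordering interface, which is exactly why both F and G must be read together.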
So we learn from this that with global setups, we always need to look at both F and G to read off the security guarantees. This simple example shows that looking only at F might completely blur the security guarantees of the protocol under consideration. Let me also stress that while I presented the results mostly in terms of examples such as the global ledger, the paper of course provides general theorems that apply to any kind of global setup. For writing down and proving the theorems, we also had to fix a simulation-based model, and we chose the UC model of Canetti, which, at the point where we wrote the paper, seemed to be the best-developed model regarding the formalization of global setups. Our paper also implicitly answers a question that is often asked by researchers: what is a good global setup to use? Essentially, we think that a good global setup is one that has a quite realistic adversarial interface, such that our composition theorems can be applied to replace it. Other types of global setups can still be used, but the proven statements then have more of a theoretical value. Okay, so I hope you gained some insights into how to use, and also how not to use, global setups in your security proofs. Please have a look at the full paper, which is equipped with lots of examples and is actually not as hard to read as you probably imagine it to be. If you have any questions, please write us an email, and thank you for watching.