So, I'm Peter Gaži, for those who don't know me, and I'll be talking today about Ouroboros and Ouroboros Praos, the two proof-of-stake blockchain protocols that you have probably already heard about. Ouroboros is the protocol currently underlying the Cardano blockchain, and Ouroboros Praos is an improved version of it. Both of them are separate papers that have already been accepted to conferences. First I should mention that the Ouroboros blockchains are joint work with Aggelos Kiayias, Alex Russell from Connecticut, Bernardo David, and Roman Oliynykov; these guys are the authors of the first Ouroboros paper, and I joined the team when we started working on Praos. So, a quick sketch of where we are going today. First, just to make sure we are on the same page, I will talk about Bitcoin and proof of work, because the deficiencies of the proof-of-work approach are what motivates proof of stake; I will sketch why proof of work is probably not the best way to go, why we should turn to proof of stake, and what the general pattern behind proof of stake is. Then I will spend most of the time discussing Ouroboros: I will try to give you a high-level overview of how the protocol works and of the model in which we analyze it, and also sketch some parts of how we argue about its security. Of course not everything, but just enough to give you a gist of how this is done, because as Phil said, in every talk there should be a part that is understood by only three people in the room, so probably this will be it. And then I will describe what changes when we move from Ouroboros to Praos: not only what is different in the protocol, but also what is different in the goals that we achieve, so why Praos is better. Okay, so that's the plan. So let's start with Bitcoin. 
I will go through this quickly, because I believe most of you know how Bitcoin works, but just so we are on the same page: we all know that this is an electronic currency that was rolled out in 2009, though proposed earlier, and what makes it novel is that it provides a decentralized mechanism to maintain a ledger of transactions. It does so by introducing a novel security assumption: that the adversarial computing power is dominated by the computing power of the honest participants. This is a new assumption that had not been considered before, and it allows Bitcoin to enter the scene and realize something that we previously did not know how to do. Of course, this breakthrough induced a lot of follow-up work. There is a myriad of new cryptocurrencies coming into existence, with richer transactions, with better privacy, with different consensus mechanisms, and this last part is what we will be looking at. So there are many ways in which you can improve Bitcoin, but we will be focusing on how to improve the consensus mechanism. But the first question that a theory person should ask about Bitcoin is: what is the problem that Bitcoin actually solves? And one can see that the problem is the following: if you have a distributed collection of parties, how can they agree on a dynamically updated common sequence of transactions, this ledger? This was captured formally in the work of Garay, Kiayias, and Leonardos (GKL15). The two properties that we expect from a ledger are persistence, which intuitively means that every transaction that has been included in the ledger will stay there, and will stay in the same position where it was included, so the ledger is immutable in some sense; and liveness, which means that if I, as a participant in the protocol, want to include a transaction in the ledger, it will eventually get there, which is of course suitably parameterized. We want to build such a ledger in an environment where parties may come and go. 
This is also often called the permissionless setting, where anyone can join, run the protocol, and then leave; and of course we want this in the face of a potential adversary that might actively try to disrupt the system. The way Bitcoin does this, as we all know, is by keeping this ledger in a structure called a blockchain, which consists of a genesis block, an agreed-upon starting point of the blockchain that all parties are aware of, and then a sequence of blocks, which contain the contents of this ledger, so the transactions. On top of that, each block also contains a hash of the previous block and therefore commits to the entire history of the ledger. And if you look at it like this, the main challenge that you have to figure out when you are trying to design such a protocol is how the parties can agree on the addition of new blocks, so how to achieve consensus on changes of the state of this ledger. The reason this is an intriguing problem is that it lies outside of the classical distributed computing models, because talking about a majority of players is not useful in a setting where anyone can join and anyone can leave. This is the well-known Sybil attack problem: the attacker could just spawn a lot of identities, overwhelm the set of parties, and formally achieve a majority in any counting that depends only on the identities of the players. The way Bitcoin solves this problem, again just to have common ground for comparison, is by combining two things. The first one is proof of work: every block contains a nonce with the property that when you hash the whole block, you get a value starting with a particular number of zeros. This certifies that you have invested a significant amount of work to create this block. 
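To make the proof-of-work condition concrete, here is a minimal sketch in Python. This is not Bitcoin's actual serialization or its double-SHA256 scheme; the function names and the 8-byte nonce layout are my own illustrative choices. A block is valid if its hash, read as an integer, falls below a target, which is the same as requiring a certain number of leading zero bits.

```python
import hashlib

def meets_difficulty(block_bytes: bytes, difficulty_bits: int) -> bool:
    """The proof-of-work condition: the block's hash, read as an integer,
    must lie below a target, i.e. start with `difficulty_bits` zero bits."""
    digest = hashlib.sha256(block_bytes).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

def mine(prev_hash: bytes, payload: bytes, difficulty_bits: int) -> int:
    """Grind through nonces until the assembled block meets the condition;
    the expected number of attempts is 2**difficulty_bits."""
    nonce = 0
    while True:
        block = prev_hash + payload + nonce.to_bytes(8, "big")
        if meets_difficulty(block, difficulty_bits):
            return nonce
        nonce += 1
```

With `difficulty_bits = 8` this succeeds after a few hundred attempts on average; Bitcoin's real difficulty is vastly higher and self-adjusting.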
And then the second part of the solution is the longest chain rule, which tells you that if you are supposed to choose between many chains, you simply take the one in which the highest amount of work was invested. These two things together allow the parties to arrive at a consensus about the actual state of the ledger. So the Bitcoin protocol, from a high level, looks like this: if you are a party, you collect all the transactions that you see on the network, you collect all the chains that you see being gossiped around on the network, you adopt the longest one of them that is valid according to the concrete set of rules that you have for the transactions in the chain, and then you try to extend this chain by trying to mine a new block, by trying to solve this proof of work. If you succeed, you just broadcast it, everyone else, according to the rules I just stated, adopts your new block, and the state has been updated. When we try to characterize what this mechanism actually achieves, it's often called eventual consensus. Why eventual? Consider an adversary that wants to attack, for example, the persistence of a transaction, so a particular transaction that has already been included in the ledger. If the attacker wants to revert this transaction, he would have to create an alternative block that doesn't contain it, of course. But then, to satisfy the longest chain rule, to convince everyone else that this yellow block on the slide is the right new state of the ledger, he would have to create a longer chain than the existing one. And he is unlikely to manage that if he commands a minority of the computational power, because all the other parties are at the same time trying to extend the blue chain. This is what makes the transactions increasingly immutable. 
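The chain-selection step described above can be sketched in a few lines. This is an illustrative simplification (chains as plain lists, chain length standing in for accumulated work, validity supplied as a callback), not Bitcoin's actual implementation:

```python
def adopt_chain(local_chain, candidate_chains, is_valid):
    """Longest-chain rule: among all valid chains heard from the network,
    adopt the longest one; on a tie, keep the chain we already hold."""
    best = local_chain
    for chain in candidate_chains:
        if is_valid(chain) and len(chain) > len(best):
            best = chain
    return best
```

Note that an invalid chain is never adopted, however long it is, which is why the adversary cannot simply fabricate length without also doing the work.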
So there is not a strict 0-1 switch from not being included in the ledger to being included there; it's more like an eventual process, and that's why we call it eventual consensus. Another view that is useful when you are looking at Bitcoin, and that will be useful when we look at Ouroboros, is that you can see it as a lottery, or an election process, where all these parties are competing for the right to create the next block, and the probability that a party succeeds in this election, or lottery, is proportional to its computing power. So as I said, Bitcoin was an ingenious design. To say some good things about it: it's a surprisingly simple way to solve the complicated problem of achieving consensus with a fluid population of participants, and it sidestepped previous impossibility results suggesting that something like this would be hard to achieve; this is sidestepped thanks to the assumption of an honest majority of computational power, which is something that Bitcoin relies on. And it can be formally analyzed: we now have a reasonably good understanding of what the guarantees are, at least in the cryptographic model of Bitcoin. There is a sequence of papers where the first one analyzed Bitcoin in a synchronous setting, then in a semi-synchronous one, and there is also a UC characterization of the functionality that Bitcoin achieves. But of course, Bitcoin is not the final answer, otherwise we wouldn't be here. The main problem with Bitcoin, at least from this perspective, is that it relies on the computational race that I just described. As Bitcoin grows, more and more resources have to be invested into maintaining the network, into computing these proofs of work, so that the majority of honest computational power is maintained and the lottery that I described can carry on. This can be quantified. 
I'm not sure how precise these estimates are, but it seems that the energy consumption of the Bitcoin network is on the order of units of gigawatts, which is roughly a million US homes, or about what an entire country such as Iceland consumes. What is even more worrying is that these estimates confirm what the theory says: as Bitcoin grows, so does this consumption. The estimates of the energy burned by Bitcoin seem to have actually tripled over the last six months. This clearly does not seem sustainable. So there is an obvious challenge for theoreticians and practitioners: try to replace this proof-of-work lottery with some alternative resource lottery. There have been several ideas in the past, coming from the Bitcoin community, from the academic community, and so on. Some ideas were to replace the resource that is used in this lottery, the computational power, by something else, for example disk space, or to at least make the computation useful, or to use some useful storage as the resource backing the elections that I described. But from my perspective, and from the perspective of many others, the ultimate resource that could replace proof of work is a virtual or abstract resource, one that doesn't actually cost us any real physical resources. An ideal option for that is the coin itself, because the blockchain already contains the balances of all the participants, so it contains an accounting of how this resource is distributed. This makes it a perfect resource to underlie these elections, and this is the central idea of proof of stake. So this is the intuition, but it turns out that when you actually want to implement it, it gets, not so surprisingly, more complicated than that. The devil is in the details, as usual. 
So I will try to guide you through the main challenges you face when you want to design a proof-of-stake protocol, and how Ouroboros deals with them. The idea, as I said, is to transition this election process to virtual resources. A first strawman proposal could be: if we want to add a new block, let's just select one particular single unit of the currency, the smallest unit, uniformly at random, and then look at who owns this unit; this will be the party that is allowed to create the next block. This was originally called Follow the Satoshi, because a satoshi is the smallest unit in Bitcoin. It results in participants being elected proportionally to their stake. So that looks awesome: we just got rid of the physical resources, and we should be done. The difficulty is that if we want to sample from this distribution, we need randomness, some randomness that is unbiased and that all the parties in the protocol agree on. It is really important that the adversary does not have the power to bias this randomness in a significant way, because then he could bias the elections, make himself the winner, and basically hijack the whole chain. When people were thinking about where to get this randomness, the first idea that comes to mind is to just use the blockchain itself. We already have a reasonable amount of random-looking data in the blocks of the blockchain, so maybe we could just hash the blockchain; this gives us the concrete coin, we look at who owns the coin, and this is the person that is allowed to create the new block. But this turns out not to work, and I will sketch why, so that you can appreciate that one really has to look at the details quite carefully. The problem is, of course, something that is called rejection sampling. 
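A minimal sketch of the Follow the Satoshi idea (the name is from the talk; the hash-to-index derivation and the dict-based stake table are my own illustrative choices): pick one atomic coin unit at random and elect its owner, so the election probability is exactly proportional to stake.

```python
import hashlib

def follow_the_satoshi(stake: dict, randomness: bytes) -> str:
    """Elect a block creator: derive a coin index in [0, total stake)
    from the shared randomness and walk the stake table to find its owner."""
    total = sum(stake.values())
    idx = int.from_bytes(hashlib.sha256(randomness).digest(), "big") % total
    for party, coins in sorted(stake.items()):
        if idx < coins:
            return party
        idx -= coins
    raise AssertionError("unreachable: idx < total by construction")
```

The crucial question, as the talk explains next, is where the `randomness` argument may safely come from.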
So what the adversary can do, if it is already his turn to create a block, so imagine he was lucky once and won the lottery: he just tries to create a block, does the hashing himself privately, and looks at the outcome. If it is not him who is the winner for the next step, he drops the block, doesn't tell anyone about it, and tries to create a different block. He repeats this process until he gets lucky and turns out to be the winner for the next round as well, and at this point he publishes this block, which gives him the right to create the next block too. He can repeat this process and hijack the chain forever. So this is rejection sampling, also sometimes called a grinding attack; there are several variants of it, and this is a simplified cartoon of one particular grinding attack. It poses a problem for the naive approaches I just described for deriving the randomness for the election process. There are several proof-of-stake proposals that give rigorous guarantees about their security and therefore, in particular, about the process by which they derive the randomness. One is Ouroboros, which I will discuss in detail; for now I will just describe how these protocols deal with the randomness problem in very general terms. What Ouroboros does is implement a secure multi-party computation on top of the blockchain, using the blockchain as a communication medium. The outcome of this computation is clean randomness that can be used to sample further leaders, further winners of these elections, and this randomness is unbiased as long as the majority of the participants in this MPC is honest. I will go into details later in the talk. Then there is another class of protocols, which includes Snow White and Ouroboros Praos, which I will also talk about. These protocols approach the problem differently: they do use hashing, but in a much more careful way. 
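The grinding attack on the naive hash-the-blockchain rule can be simulated directly. In this toy sketch (all names and the byte layout are mine), even a 10% stakeholder who currently leads can, by privately re-rolling his block, almost always find a variant that re-elects him:

```python
import hashlib

def naive_leader(chain_bytes: bytes, stake: dict) -> str:
    """The flawed rule: the next leader is the owner of the coin indexed
    by hashing the chain itself."""
    total = sum(stake.values())
    idx = int.from_bytes(hashlib.sha256(chain_bytes).digest(), "big") % total
    for party, coins in sorted(stake.items()):
        if idx < coins:
            return party
        idx -= coins

def grind(chain_bytes: bytes, stake: dict, adversary: str,
          tries: int = 10_000):
    """Rejection sampling: keep re-creating the adversary's block, varying
    some free bytes in it, until hashing the extended chain elects him again."""
    for salt in range(tries):
        candidate = chain_bytes + b"adv-block" + salt.to_bytes(4, "big")
        if naive_leader(candidate, stake) == adversary:
            return candidate  # publish this variant: the adversary leads again
    return None
```

With 10% stake, each attempt succeeds with probability about 0.1, so a handful of private hash evaluations suffices, and the adversary can repeat this every slot.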
And they also come with a formal analysis showing that this hashing, this grinding or rejection-sampling problem, can be contained and cannot harm the security properties of the protocol. I will briefly talk about that as well. Then there is a different class of protocols, or rather one particular protocol, that is also a proof-of-stake protocol with rigorous guarantees coming from academia: Algorand. This one takes a very different approach: it aims to achieve complete consensus on every block, so there is a several-round protocol running for every block. The reason I mention it here is that it also needs randomness, for a similar reason as the protocols I described to you, and it also uses hashing and also needs to provide an analysis of why grinding attacks are not a problem that cannot be contained. So this is a common theme for all these protocols: we need reasonably clean randomness to be able to sample the future winners of these elections. There are also a lot of proof-of-stake solutions in the wild being implemented; I just listed some of them. These usually come without any provable analysis, but what is interesting is that, as far as I am aware, the only intersection between the proof-of-stake protocols that have provable security guarantees and the proof-of-stake protocols that are actually implemented is Ouroboros in Cardano. So let me talk about that protocol in greater detail now, from a really high-level view. Ouroboros is analyzed in a model where we assume synchronous time and communication; I will detail what we mean by that. And it provides persistence and liveness, the two properties that I described to you, if the three assumptions below are satisfied. 
First, the adversary has a minority of stake throughout the whole execution. Second, the adversary is subject to a corruption delay, so he cannot corrupt participants immediately; it takes some time between the moment he decides he would like to corrupt a party and the moment the party actually gets corrupted. And third, the stake shifts happen at a bounded rate, so the stake is not shifting too quickly, for example from honest parties to adversarial parties. Now the communication model in slightly greater detail. The model in which Ouroboros is analyzed is synchronous, which means participants have synchronized clocks. Time is divided into slots; in the implementation, a slot is actually 20 seconds. Any message sent by an honest player is assumed to arrive at all other honest players within the same slot. By the description of the protocol, all honest communication happens via broadcast; of course, in practice, this broadcast has to be implemented somehow by peer-to-peer gossiping, but the protocol assumes that all messages of the honest parties are spread by broadcast, while the adversarial parties can send arbitrary messages to arbitrary parties at arbitrary times and so on. So this is the communication model, and let me now sketch how Ouroboros actually works. The time, which consists of these 20-second slots following one another, is split into bigger intervals called epochs. An epoch is a sequence of R slots; each of the colored boxes on the slide is a separate epoch. What the protocol does is that for each of these epochs, for each of the slots in the epoch, it samples a slot leader, who will be the only party, the unique party, allowed to create a block in that particular slot. 
And everyone will be aware that only this party is able to create this block, and therefore the parties will require that these blocks are signed by these slot leaders. How do we choose the slot leaders? That's an instance of the lottery I have just described to you. We need two things for that. We need a stake distribution from which we will be sampling: when we want to decide who the slot leaders for the yellow epoch will be, we take the stake distribution from a particular point in the blue epoch, one at which there is agreement on what exactly the stake distribution is. This distribution is fixed and will be used for sampling the slot leaders throughout the whole yellow epoch. And besides the distribution we are sampling from, we also need randomness to do the sampling, right? Because everyone has to agree on how the sampling turned out. The randomness for sampling the slot leaders of the yellow epoch will come from the multi-party computation that is run during the blue epoch. So the blue epoch gives us both the stake distribution to sample from and the randomness to be used for the sampling, and this allows us to sample a complete leader schedule, which is just a sequence of leaders, one for each slot of the yellow epoch. This is, in broad terms, how the protocol works. When we analyze the protocol, I will take the approach that is also taken in the paper: we first look at a simpler case where we analyze one single epoch only, and then we bootstrap the analysis to cover several epochs, the full protocol as I described it here. If we look at a single epoch, this is what we call the static case, and it looks as follows: the stake is static, because, as I told you, the stake distribution that is used for sampling all the slot leaders in a particular epoch is fixed from the previous epoch. 
So this static setting looks as follows. We have a fixed, static stake distribution and ideal randomness; both of them are considered to be known by everyone, and you can imagine them being written in a particular Genesis block together with the public keys of the players. Then we just run the protocol for these R slots, which correspond to one epoch. So we start with a fixed stake distribution and fixed randomness and look at how the protocol goes in this one particular epoch. It will first determine the slot leaders. This is done by a fixed function L that everyone is aware of. This function simply takes the randomness that was included in the Genesis block and samples the individual slot leaders for each of the slots, in such a way that each slot leader is sampled independently and proportionally to his relative stake. So if you own 1% of the coins, you have a 1% chance of being the slot leader. And since the function L is public, and the randomness is public as well, because in this static case it comes from the Genesis block, the leader schedule is public too: everyone knows who is the leader in which slot. We then call a particular chain valid in this setting if it starts with this Genesis block, and the Genesis block is followed by a sequence of other blocks associated with increasing slot numbers, so you can have at most one block per slot; if these blocks maintain consistency, in the sense that the included transactions are not conflicting; and if each of the blocks is signed by the respective slot leader from the leader schedule, whose sampling I already described to you. It is important to note that we don't need to have a block in each of the slots, but we have at most one block per slot. And this is how a block looks in very broad terms. 
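A sketch of what the public function L might look like (the real Ouroboros construction differs in its details; hashing the epoch randomness together with the slot index is my illustrative stand-in): each slot's leader is drawn independently, proportionally to relative stake, from the epoch's fixed stake distribution and randomness.

```python
import hashlib

def leader_schedule(stake: dict, epoch_randomness: bytes, R: int) -> list:
    """Sample one leader per slot, each independently and proportionally
    to relative stake; since the inputs are public, so is the schedule."""
    total = sum(stake.values())
    schedule = []
    for slot in range(R):
        # Per-slot seed: shared randomness bound to the slot index.
        seed = epoch_randomness + slot.to_bytes(8, "big")
        idx = int.from_bytes(hashlib.sha256(seed).digest(), "big") % total
        for party, coins in sorted(stake.items()):
            if idx < coins:
                schedule.append(party)
                break
            idx -= coins
    return schedule
```

Because every party evaluates the same function on the same public inputs, all parties compute the identical schedule without further communication.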
Conceptually, it just contains the transactions, a commitment to the previous block in the form of its hash, and the slot number, and it has to be signed by the leader that was elected for the slot with that particular slot number. The protocol itself is actually very similar to Bitcoin, and that's on purpose. On a high level, each participant collects all the transactions from the network, collects all the blockchains that are being broadcast, keeps only the valid ones, and maintains the longest one as its current state. And if the participant finds out that he is a leader according to the leader schedule, he will create a new block for that particular slot in which he is the leader, sign it, which is of course important, and broadcast it to everyone. So we notice that this is a longest-chain-rule protocol, just like Bitcoin, and it aims for eventual consensus. This was already clear from what I said, but it's good to realize it. But there are important differences compared to Bitcoin, namely regarding the adversary, who controls a particular subset of the parties that is a minority by assumption. The adversary has more power than in the Bitcoin case, for several reasons. First, it knows the entire sequence of leaders ahead of time, because the leader schedule is public, as I said. Second, it can generate multiple blocks for a particular slot without any effort, right? Because generating a block, if you are a slot leader, just costs you one signature. This is very different from Bitcoin, where you actually have to invest a lot of work to create the proof of work. So the adversary has more capabilities here, and we need to understand whether this significantly improves his power. The proof in the paper shows that despite having this greater power, the adversary is not able to violate the properties that we care about. 
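One slot of the protocol, as just described, might be sketched like this. It is a toy model: chain validity checks are elided, `prev` is a stand-in for the hash pointer, and `sign` is a hypothetical signing callback, none of which are part of the actual implementation.

```python
def run_slot(slot, my_id, schedule, local_chain, network_chains, mempool, sign):
    """Adopt the longest chain heard this slot, then, if we are the slot's
    unique leader, extend it with a new signed block (and broadcast it)."""
    for chain in network_chains:          # validity checks elided in this sketch
        if len(chain) > len(local_chain):
            local_chain = chain
    if schedule[slot] == my_id:
        block = {"slot": slot,
                 "prev": len(local_chain),   # stand-in for the hash pointer
                 "txs": list(mempool),
                 "sig": sign(my_id, slot)}
        local_chain = local_chain + [block]
    return local_chain
```

Note the contrast with the Bitcoin loop: extending the chain costs only one signature here, which is exactly the extra adversarial power discussed above.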
So, persistence and liveness of the ledger that is realized by this protocol. I will now briefly go into how this is proven, and for that we need to introduce some notions. An important notion is the characteristic string. This is a binary string of zeros and ones; its length is R, the number of slots, and a zero at a given position means that the leader elected for the slot at that position is honest, while a one means that the leader is adversarial. So this is just a binomially distributed binary string; zeros are more likely than ones to appear, because we have the assumption that there is an honest majority of stake. The obvious hope is that since the probability is skewed towards zeros, we will also see more zeros than ones, and therefore maybe this is enough to guarantee good properties of the execution. This is something that needs to be analyzed, of course. If we look at what we want to prove about the protocol, in the long run we of course want to prove persistence and liveness of the ledger that is stored in this blockchain. But it is known from previous work that there are three more basic properties that imply persistence and liveness: common prefix, chain quality, and chain growth. Common prefix means that any two chains possessed by honest parties at some point have the property that if you remove the last K blocks from one of them, you get a prefix of the other one; basically, any two such chains agree except possibly in their last K blocks. Chain quality says that any chain possessed by an honest party contains, among its last K blocks, at least one block that was honestly generated; this is important in the analysis. And chain growth is what you would probably expect. 
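The common prefix property with parameter K can be stated as a small mechanical check. This sketch treats chains as plain lists and checks the condition symmetrically for a pair of chains:

```python
def is_prefix(shorter, longer) -> bool:
    """True iff `shorter` is a prefix of `longer`."""
    return len(shorter) <= len(longer) and longer[:len(shorter)] == shorter

def common_prefix_holds(chain_a, chain_b, k: int) -> bool:
    """Common prefix with parameter k: dropping the last k blocks of either
    chain must yield a prefix of the other chain."""
    a_trunc = chain_a[:len(chain_a) - k]
    b_trunc = chain_b[:len(chain_b) - k]
    return is_prefix(a_trunc, chain_b) and is_prefix(b_trunc, chain_a)
```

So two chains may disagree in their last k blocks, but any deeper divergence violates the property.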
It's just a guarantee that during the run of the protocol, the chain will grow at some rate. The most difficult property to prove is actually common prefix, and that's also the one I will go into slightly more detail about now. But first, let's try to get some intuition for how an execution of this protocol looks. So let's look at an example run. Here we have the Genesis block, and then a particular epoch of length nine. This is the characteristic string: zeros mean that the slot leader is honest, ones mean that the slot leader is adversarial. So what will be happening? The first slot leader is simply expected to create a block, so he does, and the pointer to the previous block will simply point to the Genesis block, because this is the only block that the party knows about. But then it's the turn of an adversarial party, and what that party can do is pretend that it hasn't seen the block from the honest party, or just ignore it, and create an alternative block that is connected to the Genesis block. Then it's an honest party again. To be on the safe side in the analysis, we assume the following: we know that by the protocol, the honest party needs to extend the longest chain it has seen so far, but if it sees several longest chains, as in this case, we leave it to the adversary to decide which of them the honest party will actually extend. So, for example, maybe now the adversary decides that the honest party will extend this chain; then it's the adversarial party's turn again, then it's an honest party's turn, and again the adversary decides which of the two equally long chains it will extend. 
Now, what the adversary can also do, since it is cheap for him to create blocks, is retroactively create blocks in past slots that have already passed, show them to the honest player who is currently the slot leader, and trick him into extending this new additional chain; this can happen. Then it's an adversarial turn again; he might choose not to create a block at all. In the next one he might, for example, create a block like this, and this might be the outcome of the run. This just shows that even in this simple setting, where we have a fixed stake and clean randomness assumed in the Genesis block, the dynamics are non-trivial, and the different paths in this tree basically correspond to different histories. What we need to prove is that there will be an eventual consensus on which of the paths is the correct one; it cannot happen that we have several competing histories, which means competing paths of the same length, for a long time. This is something that we need to exclude, otherwise consensus would not be achieved. And there is a particular calculus, or framework, introduced in the paper to formally prove this. Its central notion is a fork. A fork is a graph like this, a tree, which is an abstraction of which block builds on top of which other block during the execution of the protocol, so each execution leads to a particular fork. It's a graph where, as you would expect, the nodes correspond to blocks and the edges correspond to this predecessor relation. The root of the tree is the genesis block; the honest blocks are the double-circled nodes, and the adversarial blocks are the single-circled ones. Nodes are labeled with their slot numbers, and each node has a unique edge to a node with a smaller label, of course. 
What is important is that we know that, according to the protocol and the network assumption, all players hear all honest broadcasts, and honest players speak at most once in every slot, because they follow the protocol. This implies that any honest slot is associated with exactly one honest node, so there is at most one node per honest slot, and the depth of any honest block exceeds that of all previous honest blocks, because the party creating it has already heard about all previous honest blocks thanks to the broadcast. So, formally, a fork for a characteristic string, that is, for a binary string, is a labeled rooted tree where each node is labeled with an element indicating its slot, and the root is labeled with zero. Edges are directed away from the root, labels increase along paths, each honest slot labels a unique vertex, and honest depths increase. These are the conditions that we can extract from how the protocol runs and from what the communication model guarantees us. Now we are going to look at all possible forks: for a particular characteristic string, we look at the set of all possible graphs that fulfill all these properties, and we want to argue that there is none with a bad property. So let's look at what a bad property would mean. Again, if we want to prove common prefix, which is our goal now, we want to argue that for a characteristic string sampled according to the binomial distribution I told you about, it is unlikely that the adversary can violate common prefix. But the way we get there is that we look at the set of all forks, all the graphs that correspond to a particular characteristic string. 
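The fork axioms just listed can be checked mechanically on a candidate tree. In this sketch the array-based tree encoding is my own (nodes are numbered so that parents precede children; node 0 is the genesis root with label 0): a candidate is a fork iff labels strictly increase along paths, each honest slot labels a unique vertex, and honest depths strictly increase with slot number.

```python
def is_valid_fork(parents, labels, w: str) -> bool:
    """Check the fork axioms over characteristic string w, where w[i] == "0"
    means slot i + 1 is honest. parents[v] is the parent of node v (None for
    the root), labels[v] is node v's slot number."""
    n = len(parents)
    depth = [0] * n
    for v in range(1, n):
        depth[v] = depth[parents[v]] + 1
        if labels[v] <= labels[parents[v]]:
            return False              # labels must increase along every path
    honest = [v for v in range(1, n) if w[labels[v] - 1] == "0"]
    if len({labels[v] for v in honest}) != len(honest):
        return False                  # an honest slot labels a unique vertex
    prev_depth = 0
    for v in sorted(honest, key=lambda v: labels[v]):
        if depth[v] <= prev_depth:
            return False              # honest depths must strictly increase
        prev_depth = depth[v]
    return True
```

Adversarial nodes are unconstrained beyond the label ordering, which is exactly what gives the adversary room to build the competing paths shown in the example run.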
And we argue that, as I said, with high probability over the sampling of this characteristic string, we will get a string that does not permit any fork with the bad property, where the bad property is, basically, widely diverging paths, because this corresponds to two variants of history that differ for a long time. This is something we don't want. And actually, we can focus on the most extreme case, where the fork has two disjoint paths of the same maximum depth. They are disjoint, which means they split from each other already at the beginning. I will tell you later why we can focus only on this case, but for now, let's believe it. This allows us to define that a string is forkable if it allows a fork that does have this bad property, that does have two disjoint paths of maximum depth. So here is an example string: three zeros, three ones, three zeros. The adversary only controls one third of the slots in this string, but nonetheless it turns out that this string is actually forkable. Let's take a look at why that's the case. First it's the turn of the honest parties, and what they do is create their three blocks according to the protocol. Then it's the adversary's turn, and the adversary is free to do whatever he wishes, so he just creates a second path, independent of the first one. Then he decides which one of these two paths should be extended by the honest parties, because they are of equal length. He decides to have the upper path extended, and what he can then do is create three more blocks in his own slots on the lower path, and we end up with two paths of the same length, six. This is the maximum length within the fork, and they are disjoint; the only intersection is the genesis block. This is exactly what we don't want to end up with, so we call such strings forkable, and we want to avoid forkable strings. Okay, something just happened.
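Forkability of a concrete string can actually be computed. The follow-up analysis by Russell et al. gives, as I recall it, a simple recursion over two quantities, usually called reach and margin, such that a string is forkable iff its final margin is non-negative. The exact update rule below is reconstructed from memory and should be checked against the paper; it does reproduce the claims made in this talk.

```python
def forkable(w: str) -> bool:
    """Sketch: w is a characteristic string over {'0','1'}, '1' = adversarial.
    Tracks (reach, margin); w is forkable iff the final margin is >= 0.
    The update rule is reconstructed from the 'forkable strings' analysis."""
    reach, margin = 0, 0
    for c in w:
        if c == '1':
            # adversarial slot: both quantities grow
            reach, margin = reach + 1, margin + 1
        else:
            # honest slot: reach shrinks; margin shrinks unless the adversary
            # still has reserves to keep the two competing tines level
            if reach > 0 and margin == 0:
                margin = 0
            else:
                margin -= 1
            reach = max(reach - 1, 0)
    return margin >= 0
```

On the example from the slide, `forkable("000111000")` comes out true, while strings of adversarial density below one third come out false, matching the corner cases discussed here.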
Sorry, it seems that the network was down, and the slides are online, which is probably not a good idea. Okay, sorry. Right, so we just observed that this string, even though it consists of just one third adversarial slots, is forkable. Actually, it's not so difficult to show that no string of density less than one third is forkable, so this is a corner case; on the other hand, naturally, all strings of density more than one half are forkable, because if the adversary controls the majority of the slots, he can just ignore what the honest parties do and create his own chain on the side, and this makes the string forkable. So it would be easy to argue about characteristic strings that have at most one third adversarial slots, but we want resilience against adversaries that control up to one half of the stake, and this turns out to be much more tricky. In particular, we need to argue about the probability distribution of the characteristic strings: we know that even a string with one third adversarial slots can be forkable, so we need to argue that it is very unlikely that we end up with forkable strings in this interesting region between one third and one half, okay? And this is something that is actually done. In the original paper there is a bound, which is improved by Alex Russell in a separate paper, showing that if we choose the characteristic string according to this binomial distribution, then the probability that the string is forkable decreases exponentially with the length of the string. This is combined with a not-so-difficult reduction showing that if an execution permits a violation of common prefix, then there must be a forkable substring of the length corresponding to this common-prefix violation.
This is just the observation that if you have a run of the protocol with a split of the common prefix at some point, then at this point you can look at the characteristic string, and it will contain a forkable substring. So together with this reduction, we end up with the theorem we want, which shows that the probability of a common-prefix violation decreases exponentially with the number of slots. Right, so I will briefly talk about how this is proven. I will not go too much into the details, but it is proven by a martingale argument. We track a particular feature that characterizes how these forks develop as we add more and more slots at the end, and the feature we will be looking at is a two-dimensional quantity reflecting the parameters of the best and the second-best path. So we will always be looking at the longest path in the fork and the second-longest one, basically, and we will be asking whether the adversary, if he invested all his power, meaning all the slots he has under his control, would be able to extend the second-longest chain — the one disjoint from the main, longest chain — sufficiently to make it as long as the longest chain. That's bad, because that's exactly what a forkable string is. It turns out that these quantities can be followed throughout the characteristic string in an inductive way, and this process can be described by a martingale; then the probability that the second-longest path can actually be extended to match the longest one can be upper-bounded by an exponentially decreasing quantity, by analyzing this martingale and applying Azuma's inequality. So I think I will skip this analysis. Let me just say that chain growth and chain quality are then easier to establish compared to common prefix.
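For completeness, the concentration bound applied to the martingale in this last step is the standard Azuma inequality, which for bounded-difference martingales gives exactly the exponentially decreasing tail we need:

```latex
% Azuma's inequality: for a martingale X_0, X_1, ..., X_n with bounded
% differences |X_i - X_{i-1}| <= c_i, and any t > 0:
\Pr\left[\, X_n - X_0 \ge t \,\right]
  \;\le\; \exp\!\left( - \frac{t^{2}}{2 \sum_{i=1}^{n} c_i^{2}} \right)
```

Applying this with a deviation $t$ proportional to the number of slots $n$ is what yields a bound that decreases exponentially in $n$.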
Chain growth in particular is very easy, because whenever it is an honest party's turn, thanks to the synchronicity assumption, we know that the honest party is already aware of the deepest previous honest block, and so it will create a block whose depth is higher by at least one; the chain thus grows with every honest slot. Chain quality can also be reduced to common prefix in this particular case, and this then allows us to derive persistence and liveness as well. So this would be it for the static protocol, for the case where we have a static stake distribution and randomness that has fallen from the sky and is known to be secure. But now we need to move to a setting where we have several consecutive epochs, as I described, where the stake distribution changes between epochs and the randomness has to be generated anew. As I told you, the stake distribution will just be taken from the previous epoch, a snapshot at a particular time; but the randomness we need to get from somewhere, and as I already said, the idea is that the protocol uses a secure MPC to generate it, with the blockchain itself used as the communication medium for running this MPC protocol that gives us the clean randomness. Just a one- or two-slide overview of how this MPC works. It uses a primitive called publicly verifiable secret sharing (PVSS), which is a protocol for a dealer and a family of players, where the dealer chooses a value S and produces shares of this value. The dealer then distributes these shares to the players, and the players can check that these shares are valid, that they do reflect a consistent value.
There is a value that the dealer has distributed to all of them; if a majority of the players decide, they can reconstruct this value together from their shares, but a minority of the players together can learn nothing about the secret value. If we want to use this publicly verifiable secret sharing to create the randomness for the next epoch — this is called a coin-tossing protocol — what we do is simply: every party generates a random string, then it shares this random string to all the other parties via the PVSS, and, for efficiency reasons, it also commits to the string and posts the commitment on the blockchain. Once this is done, all the parties just open their commitments, everyone sees what the other parties committed to, what their randomness was, and they XOR it all together, which gives us a random string. If some party does not open its commitment, then the other parties — the majority, which are honest — can use the PVSS shares to recover the value that party committed to. So in the end all these values get XORed together, and this gives us the randomness for the next epoch, with the guarantee that this randomness is unbiased under the assumption that there is an honest majority among the participants in this MPC protocol. As for practicality, I guess I don't have to tell you that Ouroboros was implemented by IOHK. About the practical performance one can say a lot, and I will not capture that in these slides, but from the theoretical perspective you can actually do an analysis of how it compares to Bitcoin in terms of how long you have to wait until you can be 99.9% sure that the adversary cannot run a double-spending attack against you, and this time is of course parameterized by the power of the adversary.
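Stripping away the PVSS backup path (which is what handles parties that refuse to open), the commit–reveal–XOR skeleton of this coin toss looks roughly like this sketch; the names are illustrative, not from the paper.

```python
import hashlib, os

def commit(value: bytes, key: bytes) -> bytes:
    # simple hash-based commitment (hiding and binding for our purposes)
    return hashlib.sha256(key + value).digest()

def coin_toss(n_parties: int) -> bytes:
    # Commit phase: each party picks a random string and posts a commitment.
    # (In the real protocol each string is also PVSS-shared, so the honest
    #  majority can recover it if its owner later refuses to open it.)
    secrets = [(os.urandom(32), os.urandom(32)) for _ in range(n_parties)]
    commitments = [commit(value, key) for value, key in secrets]
    # Open phase: openings are checked against the commitments and XORed.
    result = bytes(32)
    for (value, key), c in zip(secrets, commitments):
        assert commit(value, key) == c, "invalid opening"
        result = bytes(a ^ b for a, b in zip(result, value))
    return result  # uniform as long as one contribution is honest and random
```

The PVSS recovery is essential: without it, the last party to open could see everyone else's values first and withhold its own, biasing the result.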
So for example, if we have an adversary that controls 10% of the hashing power in Bitcoin, then you need to wait 50 minutes to have 99.9% certainty that your transaction will not be removed from the ledger by a double-spending attack. In Ouroboros, for an adversary controlling 10% of the stake — because we shift from computational power to stake — you have to wait five minutes, and so on; these are different values for different adversarial power. And there is also this additional column, called Ouroboros covert, which captures how long you have to wait if you are willing to assume that the adversary, in trying to run this double-spending attack, is not willing to expose himself — not willing to take any action that would make it possible to identify that particular party as an attacker. More specifically, the attacker is not willing to sign two blocks for the same slot, because this is obviously adversarial behavior, since the protocol tells you to sign only a single block. And as you can see from the table, the comparison is very much in favor of Ouroboros compared to Bitcoin, and you get even better guarantees if you assume covert adversaries, yes? So this is assuming 20-second block times? Yes, yes. So that means it's 15 blocks? Yes, yeah, exactly. Okay, so that would be it about Ouroboros; if there are any other questions, maybe now is a good time, and if not, then I will quickly try to sketch how Praos differs from Ouroboros, yes? Yes, so for Ouroboros you actually know the optimal strategy — it follows from the combinatorial analysis; given the slot schedule, the leader schedule, you know the optimal thing to do as the attacker to get a double spend. For Bitcoin, this is just a particular attack; it might not be optimal, but that only makes the comparison more favorable to Ouroboros, right?
Because here we are looking at the best attack, while there we are looking at a particular concrete attack that seems reasonable. In the Bitcoin case it's just what you would expect: if you want to double-spend a transaction, you create a fork, work on that fork, and hope you will be faster. Okay, so let's, yeah. So every party creates a commitment to its randomness and also the shares for the PVSS, yes? Any other questions? Okay, right, so let's look at Praos now — in a nutshell, because we are already over time, but we also started a bit late. So really in a nutshell, what Praos does is improve Ouroboros to achieve security in a semi-synchronous communication model — I will say in a bit more detail what that means — and despite fully adaptive corruption; these are the two goals it achieves. And the tools that we use to achieve these goals are, first, local and private leader selection; then forward-secure, or key-evolving, signatures; and then we move from the MPC to hashing for obtaining randomness. So the randomness will not be completely clean as before in Ouroboros, but we give an analysis of why this is fine. So let me now spend one slide on each of these two goals, the achievements of Praos, and each of the three tools that we use to achieve them. You can see these goals as basically a strengthening of the adversarial model, and we strengthen the adversary in two different ways. The first one is that we only assume semi-synchronous communication. So this is the slide that described the communication model for Ouroboros, but the red parts highlight the changes when we move to Praos. We still have slots and synchronized watches, but any message sent by an honest player is now delivered to honest players only within at most delta slots; there is no guarantee of synchronous communication, and the adversary has complete control over these delays.
So as long as the message is not delayed by more than delta slots, the adversary can decide when the message arrives and in which order — it might arrive at different parties at different times, and so on. And this delta value is not known to the protocol. So this is the semi-synchronous model, and we show that the security of Praos degrades gracefully with increasing delta. Of course, you cannot keep the same security guarantees as delta grows arbitrarily, but we describe this degradation. This is a difference from Ouroboros, where, if the synchrony assumption was gone, you basically had no guarantees. The second strengthening of the adversary that we allow is that now the adversary can corrupt any party immediately: he can just point at a party, and from that point on he knows the secret state of the party, can act on its behalf, and so on. The only restriction the adversary still needs to maintain is that it controls a minority of the stake throughout the execution, even as the stake shifts. We also assume that stake shifts happen at a bounded rate, just like in Ouroboros. So these are the two ways in which we strengthen the adversary, and now we make several changes to the protocol so that it is actually secure against this stronger adversary — we also show in the paper that the original Ouroboros would not be secure in this setting; it's not difficult to see, and actually it wouldn't be secure against either of these two strengthenings. So the tools that we use — and this at the same time covers the modifications we make when moving from Ouroboros to Praos — the first tool is a verifiable random function (VRF). This is a cryptographic primitive that is like a public-key equivalent of pseudorandom functions, if that helps you.
So you can imagine it as two algorithms, evaluate and verify, where evaluate requires a secret key; if evaluate, parameterized by the secret key, is applied to some input, it produces an output and a proof, where the output cannot be obtained without the secret key, it is unique, and it looks random to anyone who doesn't know the secret key. The proof allows the owner of the secret key to prove to everyone else that this is actually the correct output for that input; that's what the verify procedure is for. Anyone with the public key can verify, for a particular input, output, and proof, that the proof shows this is the correct, unique output for this input; it basically outputs zero or one depending on whether it's the correct output. So this is a very useful tool, and it is used in Praos for the leader-selection lottery. Instead of the public leader schedule that I described in Ouroboros, in Praos the leader selection is local and private: everyone can evaluate for himself whether he is the leader for a particular slot, no one else can do it for you, and then, if you turn out to be the leader, you can convince everyone that for this slot you are actually the leader, and no one can dispute it. And how is that done? Basically, you evaluate the verifiable random function on the randomness for this epoch and the slot number, and you check whether the output is below some threshold that depends on your stake — I will not go into how this is done exactly, but the more stake you control, the more likely it is that the random value coming out of the VRF will be smaller than this function of your stake — and if it is, then it's easy.
You can just create the block, and into the block you also put the proof that this is the VRF output, so everyone can verify that yes, indeed, you were eligible to create the block, but no one could predict it beforehand. And this actually helps us achieve security under adaptive corruption — together with some other tools that I will talk about. This idea, or a similar one, was previously used also in NXT and Algorand. It turns out that you need a special VRF for this, with some additional properties, because a generic VRF would not be sufficient; we describe in the paper in detail what these properties are, and we also give an efficient realization. And the interesting thing is how this changes the dynamics of the protocol: now that everyone has a local election telling him whether he is a slot leader, we will suddenly have empty slots — slots where no one was elected slot leader — and we will have slots where several honest parties end up being slot leaders. These are called multi-leader slots, but they will be sufficiently rare that this does not jeopardize the security guarantees of the protocol — though it requires the forkable-string analysis that I sketched to be reworked. It also has an interesting consequence: the protocol can now be parameterized by the ratio of empty slots. Depending on how you choose this function phi, this determines how often a slot actually has a slot leader, and these short periods of slots without a leader allow the other parties to synchronize, because the messages get delivered in the meantime. So this allows us to aim for much shorter slots, because we don't have to assume that all communication is synchronized and that every message arrives within one slot.
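The local lottery can be sketched as follows. I use a plain hash in place of a real VRF (so there is no proof here), and the stake-to-threshold map phi_f(alpha) = 1 - (1 - f)^alpha is the one I recall from the Praos paper, with f the active-slot coefficient; treat both as assumptions to check against the paper.

```python
import hashlib

def phi(f: float, alpha: float) -> float:
    # probability that a party with relative stake alpha leads a given slot;
    # f controls the overall fraction of non-empty slots
    return 1.0 - (1.0 - f) ** alpha

def vrf_output(secret_key: bytes, epoch_randomness: bytes, slot: int) -> int:
    # stand-in for a real VRF evaluation (a real VRF also yields a proof)
    h = hashlib.sha256(secret_key + epoch_randomness + slot.to_bytes(8, "big"))
    return int.from_bytes(h.digest(), "big")  # 256-bit pseudorandom value

def is_slot_leader(secret_key: bytes, epoch_randomness: bytes,
                   slot: int, f: float, alpha: float) -> bool:
    # leader iff the private random value falls below the stake-dependent
    # threshold; only the key holder can evaluate this, hence "private"
    threshold = int(phi(f, alpha) * 2**256)
    return vrf_output(secret_key, epoch_randomness, slot) < threshold
```

Note that phi(f, 1.0) == f, so a hypothetical party holding all the stake would lead a fraction f of the slots in expectation; the rest stay empty and give messages time to propagate.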
The second tool used in Praos is key-evolving signatures. These are special signatures where the public key is fixed, but the secret key can be evolved: the party that knows the secret key can make an update step where the secret key evolves into a new one, and from the new secret key you cannot derive the previous one. And then it is impossible to forge old signatures with the new keys: if you only have the new key, you cannot create signatures for old periods, even though the public key used for verification stays the same. This is used in Praos for signing blocks, for the signatures that I told you about. And why is this good? It helps us with adaptive security, right? So the process in Praos is as follows: if you find out via your private lottery that you are a slot leader — remember, by this time no one else knows that you are a slot leader, only you — you create the block, which also contains the proof that you are a slot leader, you sign it with your secret key, then you update your key, and only then do you broadcast the block. So by the time people learn that you are a slot leader, even if you immediately get corrupted, the attacker can no longer create blocks on your behalf for this slot, because your key is already updated and he cannot create signatures for this slot. And the final tool, which I will only mention, is that in Praos we move away from the MPC that was used for generating randomness, and instead we use hashing — just like at the beginning, where I told you about hashing, but you probably remember there was this basic complaint about rejection sampling. So we need to be much more careful about the analysis. The way we prevent rejection sampling from jeopardizing the security guarantees of the protocol is that we include a single VRF output value in each of the blocks, put there by the slot leader that created the block.
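Going back to the key-evolving signatures for a moment: the sign-then-evolve discipline can be illustrated with a toy scheme built from a forward-evolving seed and Lamport-style hash-based one-time keys. This is my own illustrative construction, not the scheme used in the paper, and to stay short it signs only the first 32 digest bits (a real scheme would sign all of them).

```python
import hashlib, os

H = lambda b: hashlib.sha256(b).digest()
BITS = 32  # toy parameter: a real scheme covers the full digest

def one_time_keys(seed: bytes):
    # Lamport-style one-time key pair derived deterministically from a seed
    sk = [[H(seed + bytes([i, b])) for b in (0, 1)] for i in range(BITS)]
    pk = [[H(x) for x in pair] for pair in sk]
    return sk, pk

def msg_bits(msg: bytes):
    d = H(msg)
    return [(d[i // 8] >> (i % 8)) & 1 for i in range(BITS)]

class ToyKES:
    def __init__(self, periods: int):
        seed = os.urandom(32)
        self.seed, self.period = seed, 0
        self.pks = []  # the fixed public key: one verification key per period
        for _ in range(periods):
            self.pks.append(one_time_keys(seed)[1])
            seed = H(seed)  # seeds evolve forward by hashing

    def sign(self, msg: bytes):
        sk, _ = one_time_keys(self.seed)
        return self.period, [sk[i][b] for i, b in enumerate(msg_bits(msg))]

    def evolve(self):
        # hashing is one-way: once the old seed is overwritten, signatures
        # for past periods cannot be forged even if self.seed leaks later
        self.seed, self.period = H(self.seed), self.period + 1

def verify(pks, msg: bytes, sig) -> bool:
    period, reveals = sig
    return all(H(reveals[i]) == pks[period][i][b]
               for i, b in enumerate(msg_bits(msg)))
```

A slot leader would call `sign` on the block, then `evolve`, and only then broadcast, so corruption after the broadcast yields a key that can no longer sign for that slot.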
And then, when we want to get the randomness for the next epoch, we hash all these VRF values that were included throughout the whole epoch. Importantly, this gives us the leader schedule for the whole next epoch, not for an individual slot. This is important because, if you remember, the combinatorial analysis tells us that if we sample the slot leaders according to the right distribution, and we do one sampling, there is a negligible probability that the schedule will be bad in the sense that it allows forking. So even if we allow the adversary to resample this several times — only a restricted number of times, polynomially many, though one can be much more precise about it — this will not allow the adversary to increase the probability of a bad schedule by much: before, it was exponentially small, and now it can be increased only by a polynomial factor. So even though the adversary can slightly bias the randomness — the randomness is not perfect now — we can show that this does not harm the consensus properties of the ledger. And that will be it. That was a very brief overview of Praos and how it differs from Ouroboros. You can look at the papers, and I'm happy to answer any questions if there are any. Thanks for your attention. Thank you.