Professor Ivan Damgård and Professor Jesper Buus Nielsen, and I'm finishing my PhD now at the end of this year or the beginning of next year, and I'm also working part-time with IOHK and Mario on all these cryptocurrency protocols. So first, today the goal is to present to you this proof-of-stake protocol that we've developed and the security definitions that we developed, and to show you more or less how our construction works. First of all, if I'm speaking too fast or if you don't understand something, please interrupt me. It's no problem. Please raise your hand and ask me questions. I don't mind if we stop in the middle of the presentation to answer questions; I think it's better than leaving it to the end. Now, before we start talking about the proof-of-stake protocol itself, I'd like to know how familiar you are with some concepts, and then maybe tell you a little bit about these concepts before we use them in the protocol. So how many of you are familiar with commitments? Nice. Verifiable secret sharing? Nice. Coin tossing? Yeah. And guaranteed output delivery for protocols? That's a more unusual concept, but let's start reviewing these concepts from the beginning. So first I'll start with commitments and how you use this primitive to build something called coin tossing. Commitments were introduced in the early 80s by Manuel Blum in a paper that is actually called Coin Flipping by Telephone. That's actually a fun paper to read; the way he writes is interesting. So what is a commitment? It's a protocol between two parties, a sender and a receiver, and it allows the sender to commit to some information. What does it mean? The sender becomes bound to the data that he is sending. These protocols have two phases in general. First, the commit phase. Here the sender has a message M as input, and he wants to give the receiver some hint about that message without revealing the message at first.
But in such a way that later he can prove to the receiver that that was the message he was talking about. So let's imagine it like a box, a closed black box where you put the message. That's my ugly drawing of a black box with a lock. Let's say he puts the message inside this locked box. You can't really see the message; it's a dark box. The message is hidden in there, and he sends the box to the receiver. Now the receiver has the locked box. He can't really do anything with it. He doesn't know what's inside because he doesn't have the key, and he can't modify what's inside because it's a locked box. Later, in an opening phase, the sender will send him the key to the locked box. Now he can actually open the box and get the message. So what security guarantees do we have here? First, in the commit phase, when the receiver gets the locked box, we have something we call hiding, which means that the receiver doesn't know what the message M is. He can't learn any information about M. Now in the opening phase, when the receiver uses the key to open the box and get the message, we have another property called binding. What does this mean? It means that the receiver can be sure with high probability that the message M that he got out of the box is the same message M that was put inside the box in the commit phase. So he can be sure that the sender didn't change the message in some way. You can think of it as the receiver having had the box on his side the whole time, so the sender couldn't really change anything. Now you might be thinking: how do you actually build this from assumptions, from cryptographic building blocks? I'll just show you a really quick outline of a very easy way to build this kind of primitive. Let's say you have a public-key cryptosystem that's shared by the sender and receiver. They have a key generation algorithm and an encryption algorithm.
The key generation algorithm gives you a public key and a secret key when it gets a security parameter. The encryption algorithm takes randomness and a message and gives you a ciphertext C. Now how can you build a simple commitment? Of course it can't be just any public-key cryptosystem, but most of the ones you know, like ElGamal based on DDH, and a modification of RSA, can be used for this kind of construction. What do you do here? In the commit phase, the sender encrypts the message under certain randomness that he samples. He stores the randomness and the message on his side, and he sends the receiver the ciphertext. Now, it's a public-key cryptosystem, so it's hiding: the receiver doesn't know the secret key. Let's say that this public key, I forgot the parameter here, was sampled randomly by the sender, and the receiver simply doesn't know the secret key, so he cannot extract the message, and it's hiding. Now in the opening phase the sender will send the randomness, the message and the public key. The receiver can then compute the same operation. Let's put primes here just to say these could be different values: the receiver computes the encryption of message m prime under pk prime with randomness r prime, and this gives him c prime. Now he checks whether c prime is equal to c, this c here. You can observe that if the sender is honest and he sends the same randomness, the same message and the same public key that he used in the commit phase, then c prime, which is basically the result of the same operation as in the commitment, is going to be equal to c, the commitment that the receiver got in the commit phase. So that's a simple way to build it, from something I believe you're familiar with, public-key cryptosystems.
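The commit/open interface just described can be sketched in a few lines. This is a minimal illustration, not the lecture's construction: it replaces the public-key encryption with a hash function, but it follows the same pattern of sending a value in the commit phase and recomputing it in the opening phase. The function names are my own.

```python
import hashlib
import secrets

def commit(message: bytes) -> tuple[bytes, bytes]:
    """Commit phase: returns (commitment c, opening randomness r).
    The sender keeps r and the message, and sends only c -- the 'locked box'."""
    r = secrets.token_bytes(32)                      # fresh randomness, kept secret
    c = hashlib.sha256(r + message).digest()
    return c, r

def open_verify(c: bytes, r: bytes, message: bytes) -> bool:
    """Opening phase: the receiver recomputes the commitment and compares,
    just like recomputing c' = Enc(pk', m'; r') and checking c' == c."""
    return hashlib.sha256(r + message).digest() == c

# Honest run: the opening checks out.
c, r = commit(b"my secret bid")
assert open_verify(c, r, b"my secret bid")

# Binding: opening to a different message fails (with overwhelming probability).
assert not open_verify(c, r, b"another bid")
```

Hiding here rests on the randomness r staying secret until the opening, and binding on the hash being collision resistant; the PKE-based scheme in the lecture gets the analogous properties from the encryption scheme.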
Now, you have several constructions based more or less on this paradigm of encrypting something, sending the encryption, and later recomputing the encryption and checking that you get the same ciphertext. One of the very famous results is by Torben Pedersen in 1991. It's a famous commitment scheme that was only mentioned in a footnote of a paper about secret sharing that he wrote alone for CRYPTO '91. It's based on the same discrete-log family of assumptions you use for ElGamal encryption and for many other public-key cryptosystems such as Cramer-Shoup and so on. It's easy to construct, and it works basically like this, where the encryption is ElGamal. And you can build these kinds of protocols in many different ways, with many different kinds of security, with composition or just standalone. So there's a very rich literature on how to build this. I hope you believe me that we know how to construct this from many past results; that's something we understand well. So I'm going to use this in the PoS protocol in a black-box way. I'm just going to say I have a commitment and I use it; let's believe it works. And we can construct it in many different ways, so it's nice, yes? Yeah, PoS is proof of stake, just so we're clear. So let's establish some notation here. I told you we have two phases, commitment and opening. I just want to tell you how I'm going to write commitment and opening in an easier way than drawing all the boxes or writing out the public-key encryption. If the sender is sending a commitment, sending that information to the receiver, such as the ciphertext in that toy construction, I will write Commit(r, m), with randomness r and message m. I'll say this gives you an atomic thing C. It could be group elements if you do it with ElGamal; it could be a big vector of field elements, depending on how you construct it. But I'll just call this generically a commitment. And this is the box that I showed before.
So if I give you the C, it's like I'm putting the message inside the box and giving you the box. Now the receiver gets this. When I'm opening, I will say that I send Open(r, m) to the receiver, and this is like sending the key to the box. Now the receiver can use the information C and the information Open with the message to check that this message was indeed the one inside the box. He will use the opening information as the key to open the box and check that the message is the one the sender claims. So here we have hiding: when I give you the C, you can't learn anything about the message. And here we have binding: when I give you the opening information, you can be sure that you got the right message, or you can detect that the sender was in fact cheating and didn't give you the right message. I will use this notation because it's easier than writing out the whole protocol, just saying Commit and Open. Now, what do I want to tell you about coin tossing? Why do we call it this funny thing? Because it's literally coin tossing: this protocol allows you to toss a coin and get a random result when the coin falls. So how do you do this? There's a very simple protocol. There are many ways of doing this, some very complicated, but I'm going to show you a very simple construction based on commitments in a black-box way. For this construction, I will only assume that you have a working commitment scheme. And this construction was also shown by Manuel Blum in his paper Coin Flipping by Telephone. So this is all Blum '81 or '82, I might be wrong. It's a very old paper, but both the notion of commitments and the coin tossing protocol were shown in the same paper. Now, how do you do coin tossing? Let's say you have Alice and Bob. Alice starts by sampling a random string S.
I'll just say I'm sampling this randomly from the set of binary strings of size lambda, just a random binary string. Then Alice will send Bob a commitment to S. Now Bob has this commitment, this box containing the random string S. It's hiding, so Bob knows nothing about S. What Bob does now is send back to Alice some new randomness S prime of the same size, sampled randomly from the space of all binary strings of the same length as the original S. Bob just sends this in the clear, no encryption, no commitment; he just sends this string S prime. Now Alice opens the commitment, so from here Bob retrieves S. What happened here? Alice has S, which she sampled herself, and Alice has S prime because Bob sent it to her. And now Alice opens her commitment, so Bob also gets the S that Alice generated, and they both have S and S prime. What do they do? They XOR S with S prime, and let's say they get from this an S hat. We know that the S hat that Bob gets is equal to the S hat that Alice gets, because it's the same operation, S XOR S prime, right? Now, why is this secure, and why can we be sure that S hat is in fact random, even if one of the parties is corrupted, even if one of the parties is trying to cheat? Here Alice sends this commitment to Bob containing S, so Bob doesn't get any information about S; he doesn't know anything about S because the commitment is hiding. Now he's going to sample another random string, knowing nothing about the original S, and send it back to Alice. Now, how could they cheat here? If Bob knew the original S from Alice in the beginning, he could choose an arbitrary string A and then choose an S prime such that S prime XOR S is equal to A, and then he would force the result to be equal to A. But to do that he must know S, and he doesn't know anything about S because it was inside a commitment. Now what could Alice do?
She could do the same in this last step here. Alice knows S prime before she opens, so if it weren't for the commitment she could choose an arbitrary string A and open to an S that is different from the one she put inside the commitment, forcing the result. But the commitment is binding, so we know that if Alice tries to cheat and send a different S in the opening, then Bob is going to detect the cheating, and he can simply abort the protocol and ignore the result, because he knows it comes from cheating. Now, up to here, do you understand that we can get random strings through this protocol? And please, if you have any doubts, ask me; this is going to be important later. So we can do that using commitments in a black-box way. Up to here everything is black box: I don't care how the commitment is constructed. It just has to be hiding and binding, so we can use any of the many constructions of commitments to build this protocol. You can see it's a rather simple protocol. It only has three rounds, and it works; it gives you the random strings. Now let's think of a different situation. We were thinking here about an adversary, a malicious user, that cheats but always completes the protocol. So here I'm saying that Alice will send both the commitment and the opening, and that Bob will always send his string S prime. But imagine that Alice is a bit smarter and she wants to outsmart Bob. Let's say she wants to learn S hat but not tell Bob. Let's say there's a prize: whoever gets S hat gets some money, and Alice wants the money and doesn't want to share it with Bob. So she wants to learn S hat and then tell Bob bye-bye: I know S hat, I don't want to talk to you anymore. So Alice, if she's malicious, can abort the protocol, stop the execution right here. When she receives S prime she can just go away and stop running the protocol. Now what happens? She knows both S prime and S, so she can compute S hat. But Bob, Bob is just hanging there.
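The three rounds of Blum's protocol can be sketched end to end. This is my own illustration, again using a hash-based commitment as a stand-in for whatever commitment scheme you plug in; the variable names mirror the lecture's S, S prime, and S hat.

```python
import hashlib
import secrets

LAMBDA = 32  # security parameter, in bytes

def commit(s: bytes) -> tuple[bytes, bytes]:
    r = secrets.token_bytes(LAMBDA)
    return hashlib.sha256(r + s).digest(), r

def verify_open(c: bytes, r: bytes, s: bytes) -> bool:
    return hashlib.sha256(r + s).digest() == c

# Round 1: Alice samples S and sends Commit(S) to Bob.
s = secrets.token_bytes(LAMBDA)
c, r = commit(s)

# Round 2: Bob, knowing nothing about S (hiding), sends S' in the clear.
s_prime = secrets.token_bytes(LAMBDA)

# Round 3: Alice opens; Bob checks the opening (binding) before using S.
# If this check fails, Bob aborts and ignores the result.
assert verify_open(c, r, s)

# Both sides compute the same output S_hat = S XOR S'.
s_hat = bytes(a ^ b for a, b in zip(s, s_prime))
```

The abort attack from the text shows up here as Alice simply never sending the round-3 opening: she already holds s and s_prime and can compute s_hat locally, while Bob is stuck.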
If he doesn't get the opening, he doesn't know S, so he cannot compute the final output. So with an adversary that's smart like that, we don't know if we are ever going to get the output. If we never get the opening message from Alice, then we never get the output here. So what do we do? There's a whole line of research into trying to understand how we can do this better. First, I can tell you it's impossible to do it perfectly in three rounds. You can never do this and get a completely uniform string in three rounds. If you're willing to do more rounds, you can get epsilon-close to uniform, for any adversary, in any kind of asynchronous network, if you're willing to do many rounds. That's an old result by Cleve in '86. He showed that you cannot get fairness. What do I call fairness? Fairness here means that all the parties get their outputs together, so that nobody can get the output while withholding it from the other one. He showed that we can't compute an XOR fairly with any protocol in this setting, and then we need more rounds than three. There are results now that show that with many rounds you can get randomness as good as you want. But still, we are going to talk about a protocol that's going to run on a blockchain, and it needs to run fast, because we need the transactions to get into the blockchain fast. So I don't want to do many rounds, because if I start doing many rounds, what guarantee do I have that the protocol is going to run fast enough for the blockchain to advance? So I want a protocol that is still three or four rounds but that has what I call guaranteed output delivery, or G.O.D.; that's what people actually write in the papers. I want guaranteed output delivery, which means I want a guarantee from the protocol that I will get an output. I want to know that everybody will get an output, and I want to prove that by the end of the protocol everybody gets the output.
Now let's see what's happening here. We have only two parties, so if one of the parties is corrupted, we have a corrupted majority: 50% of everybody in the protocol is corrupted. The result I told you about, that says it's impossible to do this with guaranteed output delivery and fairness, is only valid if you have this majority of corrupted people. So how do we circumvent the impossibility result? In all these cryptocurrency and blockchain protocols we assume honest majority. And in the case of cryptocurrencies we don't have only Alice and Bob; we have many people participating in the protocol, and we assume that more than half of these people are honest. That's something we have to assume. I can also tell you that if we don't assume that more than half of the people are honest, we cannot achieve consensus. I guess you've been discussing consensus protocols for Bitcoin, and consensus via Byzantine agreement protocols that let everybody agree on a value even if you have an adversary in the middle shouting different values. There's a result for synchronous networks. What do I mean by synchronous networks? I'll talk a bit more about that later. I mean a network where I have a guarantee that when I send you a message, you get the message. So a synchronous network would be, let's say, the mail system: you put a letter in on one side, and the letter gets out on the other side in a finite amount of time. Now let's say I put the letter around a dog's neck and I send the dog out on the street and the dog runs around. I don't know if the dog is ever going to arrive at the destination. That would be an asynchronous network. In an asynchronous network the adversary is all-powerful: the adversary can drop messages, the adversary can delay the delivery of messages. And then we have different lower bounds here. For synchronous networks we know that if we have half plus one, that was a bad choice of notation.
Half of the players: let's say we have N people participating in the protocol. In synchronous networks, if we have half of the people plus one that are honest, which is what I call honest majority, we know that we can achieve consensus. And it's nice that we also know that we can do this with guaranteed output delivery; we can do coin tossing with guaranteed output delivery. Now, in asynchronous networks the case gets much more complicated. There's a proof that it's impossible to do any consensus unless you have two thirds plus one of the parties honest. It gets much more complicated. And maybe in another discussion, another day, I can show you a very nice proof of this that takes only about five minutes, and it's done just by drawing. It's a proof by Nancy Lynch; it's a very nice, neat proof. You can show that even if you have all the crypto in the world, even if you assume indistinguishability obfuscation, you cannot do this. It's an information-theoretic proof; it's impossible. But well, we're working on cryptocurrencies, and most protocols always assume honest majority. So let's use this assumption of honest majority to make it easier to circumvent these impossibility results and get what we want to build. So, just so you know, I'm going to start working now with the assumption that we have honest majority. Otherwise it's much, much more difficult to construct any of these things, and you end up in situations where you need many rounds, a lot of communication, and you don't even know if the protocol is going to finish. The question of how you deal with asynchronous cryptographic protocols when you try to write proofs by simulation was only answered at CRYPTO this year. We didn't know what happened with asynchronous networks for most crypto protocols, because there's also a proof that you can't say after how many rounds the protocol is going to end.
The only thing you can say is that after, let's say, R rounds, there's a probability epsilon close to one that the protocol ends; it's a probabilistic termination argument. I can't even tell you whether the protocol is going to take five, six or a million rounds. So we didn't even know how to deal with this in theory, how to write proofs about these things. Now some people came up with a compiler that wraps these things in UC functionalities that deal with the probabilistic termination. Super complicated, huge paper. So we stick to the easy case; that's what everybody's doing if you've been looking at the other papers, like the paper by Garay, Kiayias and Leonardos on the Bitcoin backbone protocol, and other papers like Zerocash or the Hawk paper: they are all dealing with the synchronous model. Just so you know, I'm not cheating too much, because we're still beginning to understand how these protocols work. So we start from the easy case: synchronous messages, standalone protocol. I'm not going to prove anything about composition. I actually know how to prove that this protocol is universally composable, but we didn't write that yet. So I think my proof is correct, but we need to write it down first and check. The nice thing is that the protocol is basically black box. So using some black magic in how to define time in the UC model for synchronizing messages, and using other UC building blocks like UC commitments and so on, I think I can actually prove it composes. But today, let's talk about standalone: just one copy of the protocol being executed, without any interaction with other protocols. And that's also the setting of the GKL paper on the Bitcoin backbone and all these other papers. Apart from Hawk, which is actually proven composable, all these other papers assume a standalone setting with synchronous networks.
The first paper that tells us anything about asynchronous networks is by abhi shelat and Rafael Pass and one of their students. It's a preprint now; I think it got accepted to CCS, but they show that the Bitcoin... Ah, cool. But that's the first paper that actually manages to show that the Bitcoin protocol, under certain conditions, can work on asynchronous networks. The actual technique that they used to prove it is not in the paper, though. They had a very nice idea on how to begin with the Bitcoin protocol and bootstrap a consensus protocol for asynchronous networks that needs certain pre-computed information. They can do this pre-computation using the Bitcoin protocol and then jump to this other protocol that's from the 80s, and then they can prove that everything works. But in that paper they just write the theorems; it works, they give complicated arguments, but I saw a presentation where they showed the actual technique, which is going to be in an upcoming paper. So they do this; it's complicated. So we stay for now in the synchronous standalone model. Now, I told you that we can do this with guaranteed output delivery, but you don't have to just believe me. I can show you a very simple transformation that we can use to turn, not this protocol, because it only has two parties, so let's say we put Charlie, a third party, here. I can show you how we can turn this into a protocol with guaranteed output delivery via a very simple transformation. That transformation is going to use something called verifiable secret sharing, and I can explain really quickly what this primitive allows us to do. I'm not going to show you how to construct it, because that takes a lot of time, but I'm going to show you at a very high level what we can do with verifiable secret sharing and how we can use it. If we have to take a break, I don't mind. No, I'm okay. But if you guys are tired, we can... Sure?
I'll prepare the next screen, let's say. Good. Should we start? Okay. Good. So we can restart, talking about verifiable secret sharing. It's a long name, so I'll just say VSS. Now, just for a little bit of background, what is secret sharing itself, without the verifiable? Secret sharing was introduced by Adi Shamir in '79 in this paper, How to Share a Secret. It's one of the most cited papers, even by humanities people who find the paper on Google. Once I looked on Google Scholar at the citations of this paper and citations to other, more modern papers on secret sharing, and there's a bunch of people in the humanities who study things like the sociology of secrets or the anthropology of secrets. They cite these crypto papers without really reading them; they just see a paper that says secret, and they cite it in their references. It's funny. But the actual idea of this paper was to tell you: what do you do if you have a secret S and a bunch of people? I'm just going to call them A, B, C, D. The secret is very important, and you don't trust any one of them to know the secret, so you don't give the secret to one person. It's like nuclear weapons activation: you need two people with two different keys to turn the keys together to launch a nuclear missile. Same thing. Let's say you have a secret that activates a bomb that destroys a lot of stuff. You don't trust one person. So you want to give A, B, C, and D little bits of information about the secret, such that alone these little bits of information don't mean anything, but together they can be used to reconstruct the secret. We're going to call these bits of information shares: share S_A, share S_B, share S_C, and share S_D. Now, I'm just going to write this down in terms of generic algorithms. I'm going to say I have an algorithm, Share, that takes in some randomness.
I'll even omit the randomness, because in our constructions we don't need to set specific randomness; we just assume the algorithm has internal random coins. The Share algorithm takes in a secret and outputs several shares; in our case, I'm calling them S_A, S_B, S_C, S_D. Once you have those shares, the guarantee is the following. I'm defining here a very specific kind of secret sharing, a T-out-of-N secret sharing. What does it mean? If I have T plus 1 shares, I can get the secret. If I have T shares or fewer, I cannot get the secret. How do I represent getting the secret? I'm going to say I have a Reconstruct algorithm that takes in several shares. Actually, let me change this, because it gets hard to reason about: I'll say we have shares S_1 to S_N, just because I need to tell you the number of shares you have. Reconstruct gets shares from among S_1 to S_N, and what I mean here is that you can use any T plus 1 shares out of the N shares. Any of them are okay; I don't need specific ones. That will give you the original secret. As I told you, this is a very special kind of secret sharing, threshold secret sharing. That's what Shamir introduced. What does this kind of secret sharing allow you to do? It allows you to give shares, pieces of information about the secret, to several people, and they can only reconstruct it if a large number of them come together, if more than T of them come together to reconstruct the secret. If it's T or fewer, they get zero information about the secret; they don't know anything. These days we know that you can actually do this such that you can only reconstruct if you get very specific shares. We call this an access structure. It's a mathematical object that defines which specific sets of shares let you reconstruct the secret. I could, for example, build a scheme where the secret only gets reconstructed if I get share 1, share 5, share 17, and share 3, with arbitrary numbers here.
I could define such a scheme, and we know how to build it. We know how to build this for any access structure, which is what tells you when you can reconstruct. But for our case we only need the simple T out of N. You might be wondering how you actually build this. To build the more complicated T out of N, where any subset with T plus 1 shares reconstructs, you need a somewhat more involved construction, which looks like a Reed-Solomon code, if you're familiar with that kind of coding theory. Today we know that all these secret sharing schemes are very tightly connected to codes, to error-correcting codes and coding theory. But I don't want to get into the details, because those constructions are complicated; you have to define the right kind of Reed-Solomon code, or generalizations of it with multivariate polynomials, to actually build this. I just want to show you a very simple example of what would be an N-out-of-N scheme, where you set T equal to N, so that you can only reconstruct if everybody comes together. I'll show you here. We start with S, and we want to get share S_A. We say that share S_A is equal to S XOR R_A, where R_A is a random string. We say that share S_B is equal to S XOR R_B, where R_B is a random string. We say that share S_C is equal to S XOR R_C, where R_C is a random string. Now, finally, we have share S_D. I'm working in a binary field here, so these sums are actually XORs. S_D is going to be the XOR of all the randomnesses. So let's see what happens. These are the shares; this is my sharing algorithm. It samples R_A, R_B, R_C and computes shares S_A, S_B, S_C just by XORing the randomness with the secret.
Then the final share S_D is just the XOR of all these random strings. They are all the same size, of course. Now, how do we reconstruct? We simply XOR S_A, S_B, S_C and S_D, and we know it's going to be equal to S. Why? You XOR S_A and S_B; S XOR S goes away and you have R_A XOR R_B. Now you XOR R_A XOR R_B with S_C; S comes back, and you have S XOR R_A XOR R_B XOR R_C. Finally, you XOR in S_D, and then R_A, R_B and R_C go away and you're left with S. That's a very simple additive secret sharing scheme where you need all the shares to reconstruct. But please believe me that you can actually build this for any access structure, and specifically, for our case, any T-out-of-N threshold access structure. There's the paper. It's actually one of the shortest papers I've ever seen in my life, the original secret sharing paper; it's two pages long. If you know a little bit about Reed-Solomon codes, you can read it in 15 minutes, because it's a very short paper, but still a very deep result. So if you're interested, you can take a look; it's actually quick to understand when you read the paper. Shamir writes much better than I can explain. You can find the information there. Now, what is the problem with this notion of secret sharing that I've been constructing? What if the guy who's computing the Share algorithm is malicious? What if the adversary is computing the shares? Let's say we're all here and I'm giving you shares, and I'm a bad guy. I could give you shares such that if Mario, me, and Tanaka-sensei get together, we get one secret, but I give different shares such that if Ishida-san, Yamada-san and me get together, we get a completely different message. Or I give you shares such that, if I give Harada-san and Yoshida-san shares, they get nothing. So I want to learn your names too. Good, I just want to try to remember.
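The additive 4-out-of-4 scheme just described can be written out directly; this is a minimal sketch of exactly that construction, with the share names matching the lecture's S_A, S_B, S_C, S_D.

```python
import secrets

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

def share(secret: bytes):
    """4-out-of-4 additive sharing over the binary field (XOR)."""
    r_a = secrets.token_bytes(len(secret))
    r_b = secrets.token_bytes(len(secret))
    r_c = secrets.token_bytes(len(secret))
    s_a = xor(secret, r_a)            # S_A = S xor R_A
    s_b = xor(secret, r_b)            # S_B = S xor R_B
    s_c = xor(secret, r_c)            # S_C = S xor R_C
    s_d = xor(xor(r_a, r_b), r_c)     # S_D = R_A xor R_B xor R_C
    return s_a, s_b, s_c, s_d

def reconstruct(s_a: bytes, s_b: bytes, s_c: bytes, s_d: bytes) -> bytes:
    """XOR all four shares: the S's and R's cancel pairwise, leaving S."""
    out = s_a
    for sh in (s_b, s_c, s_d):
        out = xor(out, sh)
    return out

secret = b"launch code"
assert reconstruct(*share(secret)) == secret
```

Any three of the four shares leave at least one random string unknown, so they reveal nothing about the secret; only all four together cancel out the randomness.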
When they try to reconstruct, they get an error and they can't reconstruct. So a bad guy could be doing that, and there's nothing preventing me from doing it with Shamir's secret sharing based on Reed-Solomon codes, or with this XOR secret sharing. I could give you specific S_A, S_B, S_C such that when you XOR them, something bad happens: you get an error, you get some value that doesn't make sense. In order to deal with these situations, people came up with verifiable secret sharing. A little bit of motivation: why did people even think of this situation where you have a bad guy giving out bad shares? One of the original applications people had in mind when they were looking into these problems was secure multiparty computation, where you have a lot of people who have inputs, and they want to compute an output, a program on these inputs, and get the output without revealing the inputs. That's secure multiparty computation, and in the late 80s people were looking into what you could do. Can you make this secure when you have a third of the parties corrupted? Can you make it secure when you have less than half? Can you make it secure when you have more than half? The answer was that if you have an honest majority, meaning that situation where half of the parties plus one are honest, you can do secure multiparty computation. In 1989, Rabin, not Michael Rabin but his daughter, Tal Rabin, together with Ben-Or, introduced a protocol for doing this MPC with an honest majority, where almost half of the people can be corrupted, and to do that they introduced verifiable secret sharing as a tool and started developing the theory. So what does it allow you to do? It gives you an extra algorithm. There are different ways of defining it; I just want a simple definition that we can use in the protocol. It has a verification algorithm, but let me define it in a different way. I don't want to talk about the verification directly.
What you do here is extend the reconstruct algorithm. Now, instead of only taking t plus one values, you say that the reconstruct algorithm gets all the values. But then you think, that doesn't make sense. Why does it get all values? Then it's not t-out-of-n, it's n-out-of-n. No: here some of these values can be empty. I could have, let's say, S1, S2, S3, and so on, and when I'm reconstructing, this is now an interactive procedure; the different parties talk to each other to reconstruct. They can simply say, let's say S2 was a bad share, you set it to ⊥, empty, you don't consider it. Or let's say you got some garbage here, random trash; you just put in the random trash. But as long as you have t plus one shares that are correct, you can still reconstruct. The parties can come together and talk to each other, such that if you have t plus one honest parties talking, and if you have more than half honest parties you just set t to be half, then the guarantee is that you always get S. Even if you have an empty share, a share with random stuff, even if you have these adversarial shares. That's the guarantee of verifiable secret sharing. And in some schemes, if you cannot get S, you won't be fooled: you won't get a value that you merely think is S. No, you'll get an error. That's weak verifiable secret sharing: you will detect there's an error and you abort. This guarantees that the adversary cannot cheat and make you believe you have the right secret without really having the right secret. Ah, yes, yes, that's always assuming you give an input. But let's say we are in a case where only t plus one people get together. We put in our inputs, and for the others you input something anyway and the protocol takes care of it.
If you think of Shamir secret sharing, the reconstruction is very simple. You do it locally: once you get the shares, you run some interpolation, like in Reed-Solomon decoding, and you get the secret. In verifiable secret sharing, this reconstruct is not necessarily local anymore. It might require communication between the parties getting together. Apart from just sending the shares, you might need to send some extra verification information. I don't want to get into the details, because to actually define this in all the details would take hours. Ah, yeah, yeah, we can say reconstruction is going to take exactly this many messages. We can say that, and it's not many messages; it's usually two or three rounds for reconstruction. What you do inside this reconstruct that I'm showing as a black box is: at least t plus one people come together and exchange their shares plus extra information, let me just call it extra information. For the other shares, you don't care; you input empty shares or random stuff and the protocol makes everything work. You use the extra information to check that the shares are correct and to correct them if they are not. If you have t plus one honest parties, you get your message and you know it's right. You leave a missing share empty or input a random string; it depends on the actual construction. I'm just saying you get all of them because you input something, but if you don't have the actual share, you just put in some random stuff, and the algorithm takes care of how this gets handled. How can you construct this? It's a bit more complicated. You generalize Reed-Solomon codes: in Reed-Solomon codes you have a univariate polynomial, a polynomial in x of bounded degree. Here you need a polynomial in x and y, a bivariate polynomial of higher degree, which gives you this extra information that you will use to check the shares.
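As a baseline, the plain (non-verifiable) Shamir scheme with its local, interpolation-based reconstruction can be sketched like this in Python; the field size and names are my own choices for illustration:

```python
import random

P = 2**61 - 1  # a prime; the toy polynomial lives in the field mod P

def share(secret: int, t: int, n: int) -> list:
    # Random degree-t polynomial f with f(0) = secret; share i is (i, f(i)).
    # Any t+1 points determine f, while t points reveal nothing about f(0).
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(points: list) -> int:
    # Lagrange interpolation evaluated at x = 0 recovers f(0) = secret.
    secret = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

With t = 2 and n = 5, any 3 of the 5 shares reconstruct the secret. A malicious dealer, however, can hand out points that don't lie on one degree-t polynomial, which is precisely the gap the verifiable version closes.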
I'm not going to tell you how to construct this, because it takes a while. The nice thing is that we know how to construct it in many different ways. We have black-box constructions of verifiable secret sharing from regular secret sharing, from the easy, regular secret sharing I showed you before. We have specific constructions based on specific algebraic structures. We have constructions directly from codes. We have many constructions, and it can be done efficiently. I just want you to believe me on this; I can point you to papers about the constructions if you want to read them. Even the reconstruction, even though it needs communication, can be done efficiently in a small number of rounds. The takeaway message here is that with verifiable secret sharing, if you reach the threshold, you get the right secret. The party who shares the secret cannot cheat you. The worst that can happen if he cheats is that, in some constructions, you detect an error and abort the protocol. But you know that when you do get a secret, it was the right secret. Now, I told you all about this secret sharing thing and how this verifiable property means that you get the right secret. How do we use this to build protocols where we can get the output even if a party drops out, stops working on the protocol, or even if an adversary actively corrupting a party doesn't send you the message you need? This is also a technique that started with Rabin and Ben-Or in that paper: how to use VSS for this kind of thing. Let me show you. It's pretty simple, for our case at least; it gets complicated when you actually want to do this transformation for big protocols, but for the coin tossing, it's okay. Let's say we have Alice, Bob and Charlie, and they want to do coin tossing. You remember we had that template for coin tossing from Blum?
That's basically easy to extend to any number of parties. How do you do that? Now you say that Alice commits to, I was going to use SA, but let me use a different letter: VA. Charlie commits to VC, and now Bob sends them both VB, and then they open the commitments. So it's just the protocol I showed you before: Alice commits to VA; those are the messages, just random strings. If you look at just this part, it's just like the other protocol. Alice commits to a random value VA, then Bob sends her another random value VB, now she opens the commitment to VA. It's just a lot more arrows, because we have more people and we have to send every message to everyone. Yeah, pairwise; they're committing pairwise here. You could do it differently, you could actually broadcast one commitment, but I want to keep it simple. Then it's basically the protocol between Alice and Bob I showed you before: commit to a random string, send a random string in plain text, open the other random string, XOR everything. Did you get the idea of replicating the protocol, that you run this protocol between everyone: commit a random string, open a random string, send a random string in the clear? So, assuming you have the idea of how the protocol is going to run, I don't even want to talk about the commitments or the messages in the protocol anymore. I want to talk about what you do before the protocol starts, with the inputs VA, VB and VC, to get guaranteed output delivery. That's a generic transformation idea that originated with Rabin and Ben-Or. People have since proved that this idea extends to arbitrarily complicated protocols; there's a nice paper by Yehuda Lindell and, I think, Iftach Haitner and some other researchers in Israel, from 2006, showing you can use the same technique. So what do you do? Alice is going to use her VA, Bob his VB and Charlie his VC, correct? So what Alice does, before the protocol with the commitments starts, is share VA.
She's going to send to Bob a share of VA, let's call it VBobA, and to Charlie a share VCharlieA. Charlie is going to do the same: he shares VC and sends VAliceC to Alice and VBobC to Bob. Now, we don't actually need this, but I'll also say that Bob shares VB, just to make it symmetric: Bob sends VAliceB to Alice and VCharlieB to Charlie. What happens now? Everybody has a share of the other players' inputs. Bob has a share of Alice's input, and Charlie has a share of Alice's input, so they can come together and reconstruct. I'm showing the three-party case, but this extends to any number of parties: you could have any number of people doing the sharing, and you simply split your input into shares for everybody else. Now, if we have an honest majority, if we know that half plus one of all parties are honest, we know that they cannot retrieve an input before the protocol starts. Here, honest majority means at most one of those guys is corrupted. So if at most one of them is corrupted, he cannot go to another guy and say, hey, are you corrupted too? Let's get together, retrieve the other guy's input and cheat him. You can't, because we're assuming an honest majority, so at most one is corrupted. So let's say that Charlie is a bad guy. He doesn't like Alice and Bob. After we do the sharing, we do the commitments and the openings of the random values, and Charlie never opens his VC. He gets VA and VB from the openings, computes the output, then goes away and says bye, I don't like you, I'm not giving you the opening of VC. Now what can Alice and Bob do? Alice has VAliceC, Bob has VBobC: they have two shares of Charlie's input. They can come together, send the shares to each other, reconstruct Charlie's input, and they will get the same output. Problem solved. We get guaranteed output delivery.
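Putting the pieces together, here is a toy Python run of the three-party coin toss in which Charlie aborts after learning the others' values; because his value was secret-shared up front, Alice and Bob still finish. A hash commitment and plain XOR sharing stand in for the real commitment scheme and VSS (illustration only; all names are mine):

```python
import hashlib
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def commit(v: bytes):
    # Toy commitment H(v || r): hiding comes from the random r,
    # binding from collision resistance. A stand-in for the real scheme.
    r = os.urandom(16)
    return hashlib.sha256(v + r).digest(), r

# Each party picks a random contribution to the coin toss.
va, vb, vc = (os.urandom(16) for _ in range(3))

# Before the toss, Charlie additively shares vc between Alice and Bob.
# (In the real protocol this is a *verifiable* sharing, so inconsistent
# shares would be detected; plain XOR sharing only shows the recovery idea.)
share_for_alice = os.urandom(16)
share_for_bob = xor(vc, share_for_alice)

# Commit phase happens, then openings... but Charlie never opens vc.
commitment_c, opening_c = commit(vc)

# Alice and Bob pool their shares, reconstruct vc, and still get the output.
recovered_vc = xor(share_for_alice, share_for_bob)
assert recovered_vc == vc
coin = xor(xor(va, vb), recovered_vc)  # the jointly random output
```

The output `coin` is uniform as long as at least one contributor chose its value honestly, and Charlie's abort no longer costs Alice and Bob anything.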
If we have an honest majority and the bad guys, the dishonest minority, the fewer-than-half parties that are corrupted, try to cheat us by not sending their last messages or by sending a wrong message, we can always use the shares we hold of the bad people's inputs to reconstruct those inputs and run the protocol in our heads. If we have the input and we saw the whole protocol happening, we can just run a virtual machine in our heads with that input playing the bad guy and run the protocol again. And we do get the input, because we verifiably secret-shared the inputs in the beginning. Now, why do we need verifiable secret sharing? Because let's say I'm a bad guy, I'm Charlie, and I'm planning to cheat. I start the protocol knowing I am going to cheat Bob and Alice: I want the output and I don't want them to have it. I could just do the sharing maliciously. I could give Alice and Bob invalid shares of my input, claim they are valid shares, then disappear, and when they try to reconstruct, they get nothing. So you need verifiable secret sharing to be sure that even if Charlie is a bad guy, you can still retrieve his input if he tries to cheat you. That's why we need all this verifiable secret sharing machinery. With it, we can transform basically any secure protocol into a protocol with guaranteed output delivery. Why do we need guaranteed output delivery in this coin tossing protocol? Because inside the proof-of-stake protocol we are going to use this coin tossing with guaranteed output delivery as the main piece that keeps the protocol running. During the protocol we will need randomness, and this randomness can't be known in advance. It needs to be renewed: I need fresh, unknown randomness after every period in the protocol, as I'll show you later. I can't just tell you, look at electromagnetic radiation and learn randomness.
I need to get this randomness from somewhere, and I need to be sure that no adversary can trick you into accepting randomness that the adversary knows, because then he can steal your money. And what could an adversary do? He could be a bad guy like Charlie: start the protocol with the commitments, never open his commitment, get the randomness, and use his knowledge of the randomness, and the fact that you learned nothing, to steal your money. So we need guaranteed output delivery to make sure that everybody gets uniform randomness. Since we are working in an honest-majority situation, we're good: we use this technique and the problem is solved. If you want to read more about how this VSS works and how this transformation works, I can point you to the papers where they discuss it. In our case we're using just a very specific instance of this, for coin tossing, because that's what we need for the protocol. So this was what I wanted to introduce first, because once you understand coin tossing with guaranteed output delivery, and that you can build it from commitments and verifiable secret sharing, the protocol is much easier to explain; we just use this as a black box. So let me put up a simple diagram showing what I want you to remember from all this, if you have to remember something. I just want to finish this first half with an outline of all these things I told you about and how they connect to each other to get to the final black box that we are going to use inside the protocol. We started with commitments, then we talked about verifiable secret sharing. As I mentioned, you can build that in a black-box way from secret sharing itself, the easy kind that doesn't have any guarantees.
Commitments alone imply coin tossing, with that three-move protocol I showed you: commit to a random value, send a random value, open the random value. So this implies coin tossing, but still with no guaranteed output delivery. Now we combine both of these things (that was a bad arrow) to get guaranteed output delivery for coin tossing, and this is what we are going to use in the protocol. In the protocol we are going to be saying, all the time, something like: call coin tossing with guaranteed output delivery. Imagine this big black box is going to be there in the sky somewhere, GOD after all, and we are going to be able to call it and get uniform randomness for everyone. And so that I'm not just telling you this exists, believe me: you know that it can be constructed in a black-box way (well, not exactly black-box, you just VSS the inputs) from coin tossing, which can be constructed from commitments, plus verifiable secret sharing, which can be constructed from secret sharing. Commitments themselves can be constructed from a huge number of things. Some important ones are OT, oblivious transfer, sorry, I work with this so much that I forget to explain what oblivious transfer is; it's another primitive, we can talk about it another day, and public-key encryption. What I want to say is that you can construct commitments from so many things, even pseudorandom generators. So you can even construct this from a pseudorandom generator if you're in the standalone model. There's a beautiful result by Moni Naor in '91 that shows how to construct it in a very, very efficient way. I just want you to know that these things can be constructed efficiently. So this is the takeaway message for this first half: we can do GOD coin tossing by combining these things, and I hope you understood a little bit of how these different building blocks work.
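As an illustration of the PRG route, a bit commitment in the spirit of Naor's construction can be sketched as follows; the SHA-256-based PRG and all names here are my own stand-ins, not the paper's actual construction:

```python
import hashlib
import os

def prg(seed: bytes, out_len: int) -> bytes:
    # SHA-256 in counter mode as a stand-in pseudorandom generator.
    out = b""
    ctr = 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:out_len]

def commit_bit(bit: int, receiver_rand: bytes):
    # The receiver first sends a long random string. To commit to 0 the
    # sender sends G(seed); to commit to 1, G(seed) XOR receiver_rand.
    # Because the string is much longer than the seed, almost no receiver
    # string can be explained both ways, which gives binding; hiding comes
    # from the pseudorandomness of G(seed).
    seed = os.urandom(16)
    g = prg(seed, len(receiver_rand))
    c = g if bit == 0 else bytes(a ^ b for a, b in zip(g, receiver_rand))
    return c, seed

def open_bit(c: bytes, seed: bytes, bit: int, receiver_rand: bytes) -> bool:
    # The receiver recomputes the commitment from the revealed seed and bit.
    g = prg(seed, len(receiver_rand))
    expected = g if bit == 0 else bytes(a ^ b for a, b in zip(g, receiver_rand))
    return c == expected
```

A commitment to 1 opens as 1 but (except with negligible probability over the receiver's string) not as 0, which is the binding property in action.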
And then we use them to build the final protocol, just using this as a source of randomness. Thank you. I guess we can break for lunch. That's okay? Okay. So now that we've already talked about how we're going to obtain randomness for this protocol to work, I want to tell you a little bit about why we want to consider this mechanism called proof of stake, why it's good, and how it is different from what people are already doing in Bitcoin and in other cryptocurrencies. First of all, let me acknowledge the people involved in this work. This was a collaboration with Aggelos Kiayias, who's a professor at the University of Edinburgh and the University of Athens, he's the K in that GKL paper on the Bitcoin Backbone Protocol, and Alexander Russell, Roman Oliynykov, and Aggelos's student Yannis Tselekounis, with a complicated Greek name. It's a project that we've been working on for the past two months or so, and right now we have a final protocol and a proof that we believe to be correct. I want to show you where we are now and what the next steps are for turning this into a functional, practical cryptocurrency consensus protocol. First of all, you know that Bitcoin uses a consensus protocol based on this mechanism called proof of work, or PoW as I'm abbreviating here. We need it in order to get distributed consensus, and it has an incentive for people to devote energy and computational work in return for money: when you mine on Bitcoin, you get a coinbase transaction that gives you new coins as a reward for mining and finding a block. Now, what happens with Bitcoin that is a bit annoying? First of all, there's a clear distinction between coin holders and miners. If you want to mine, it doesn't matter whether you actually use the system to perform transactions with your own financial resources; you just have to have computational resources.
The people who have lots of money invested in the system, of course the miners invest money in hardware, but they don't necessarily hold lots of Bitcoin, the actual currency. And the people who actually have lots of coins in the system don't have much control unless they also invest a lot in computational resources. Then there's the problem that, by design, the rewards you get for mining in Bitcoin are decreasing. Every few years, the number of coins you get for finding a block is divided by two; it's halved. It happened again this year, and what is the problem? Once the amount you get for mining becomes really low, the incentive for people to buy very expensive hardware and actually do the mining also becomes really low. So how do you keep the system working? How do you make people want to invest resources in mining if they are not going to get that much in return? This is a problem, and it happens by design in Bitcoin. One of the biggest problems: control of the Bitcoin network is extremely centralized in the hands of very few, very powerful miners. The people who own most of the mining equipment can effectively decide what happens with the system, because they control how many blocks are found per second and they control what goes into the blocks they generate. And we know by now that a handful of people control almost half of all the computational power, or even more than that, and most of them are in China. That's statistical data: most of the mining power comes from China, and there are around five mining pools that control basically half of the whole computational power of the system. So this centralization is bad.
The whole idea of having Bitcoin, of having blockchains and cryptocurrencies, is achieving decentralization and not being forced to trust one authority, or let's say a group, an oligarchy of five authorities, the miners with all the power. So we have these clear problems already affecting the Bitcoin ecosystem at a practical scale. The problem of diminishing rewards is really serious, because if you think ten years ahead, what will the incentive really be for somebody to spend millions of dollars buying equipment if he's not going to get the same kind of money in return? Where are the profits? Why would people keep mining Bitcoin, and what happens to the network then? And it's scary that just a few people basically control the network and can decide what happens. Do we want to trust just a few people? No, it's not nice. Through decentralization, we want a real incentive for people to mine, or to do whatever the system requires to stay active. So then the notion of proof of stake showed up, and I want to compare both notions, so you can see how Bitcoin with proof of work and a currency based on proof of stake compare to each other. First of all, what do we mean by proof of stake? In a proof-of-stake mechanism, the amount of money you have in the system is what determines how much control you have over the system: the more money you have, the more blocks you should be able to generate. In proof of work, the amount of computing power you have determines how much control you have: the more hashes you can compute per second, the higher your probability of generating a block. In a PoS-style system, the more money you have in the system, the more control you have, and you can think of it as a stake in a gambling game. You have a lot of money, which gives you a lot of control over the system.
If you do bad things, if you try to cheat, if you try to attack the system, people will lose trust and start selling their coins, and your stake, your large amount of money in the system, will decrease in value. So the incentive to run the protocol honestly is basically that you're going to lose your money if you use your control in a bad way. Now let's look at some points here. That's what I was saying: in proof of work, this is an actual miner. He owns a big mining pool, and those shelves are full of boards with ASICs, integrated circuits designed specifically to compute many hashes per second. He actually said once in an interview that he wants to control Bitcoin, that he wants his mining pool to control half of all the hashing power. He didn't succeed by himself, but as I told you, there are these five or so mining pools that do control half of the hashing power. So in a proof-of-work situation, more computational resources equal more control: the more of these boards you have, the higher the probability you generate a block, so you get more control. This is a bad thing, but there are also good things. I don't want to tell you that proof of work is completely bad. We know it works in practice. It's running; Bitcoin is running. The incentive structure works: people are actually spending money buying these ASICs to mine Bitcoin because they get something in return, at least until the rewards get halved so much that they don't really get much back. And we have a security analysis. That's nice: we can prove that this works, that it is secure according to a reasonable security definition. So we have provable security for it. It's good; we know it works and we know it is secure, not just by a heuristic argument. One huge problem: resource waste. This is just a huge waste of electrical energy. Running all these ASICs, all these little boards, consumes a huge amount of energy and money.
There's actually an anecdote about miners in a specific city in the US where there was a special deal with the electric company that sold really cheap energy. A lot of miners moved there and started running their equipment, and in the end the electric company raised the prices because they were simply depleting all the electrical capacity. I've also heard that some of the miners in China try to locate close to power plants so they can get good deals, because they need a lot of energy. And what is this energy used for? For nothing. You're computing hash functions on random inputs; they're good for nothing. It's not a calculation that can be used for any purpose other than generating a block. Only one of these outputs will be used; the billions of other outputs will be thrown away, and the energy is just lost. These days, with all the worry we have about the environment and energy waste, this is a really big problem. At some point, we can't just keep growing mining power and hashing power like that, because we have serious energy limitations. Now, PoS. It's a different situation: the more you have to lose if the system fails, the more control you have. That's the idea. If you have more money in the system, more coins, you get a higher probability of generating a block. But if you have more money and you do something bad, the system crashes and you lose that money. So you really have to think about whether you want to do something bad; you want to follow the protocol so your money keeps its value. Now, what I'm going to show first is the situation before our work; then I'm going to show you how we solve these problems. Until now, this was a nice idea. It had been proposed first in forums by practitioners, but there was basically no real understanding of it. There are several coins that implemented proof-of-stake-style mechanisms, but they aren't as widespread as Bitcoin and the other altcoins.
There were smaller experiments; they didn't really grow. Even worse: security. The previous schemes based on PoS had no proper security analysis, basically because we didn't even know how to define security for these systems well enough to prove them secure. As cryptographers, you are aware that you need a security definition before you can say anything meaningful about a protocol or a cryptographic scheme. So until now, we didn't even know how to define security, much less prove it. All the protocols that were proposed were heuristic, and for many of them there were already practical attacks. So the situation was terrible, and that's something we're going to address. One really good thing: less waste. Here you don't have to waste a lot of energy to generate more blocks; you just have to buy more coins. Now, a bit about what has happened until now in terms of PoS-based cryptocurrencies. There are some, as I told you, like NXT, Blackcoin, Peercoin, and Neucoin, that implemented some flavor of proof of stake. Let me tell you a little about the details of some of these implementations, how they implemented the PoS mechanism, and why it was a problem. In Peercoin, you have a situation like this. The hash you compute is basically the same as in Bitcoin: you compute a hash of the previous block, the current time, and your current state of the blockchain, and it has to be smaller than a target D, just like in Bitcoin. But the new thing that implements the PoS comes in the second term. There's a fixed target; let's say it's small. But the more coins you have, the larger the right-hand side of the equation grows, because you're multiplying the target by your number of coins, and also by the time weight of your coins, which means the longer you have possessed the coins, let's say the input transactions that gave you those coins are 10 years old, the more weight they have.
So you get a bigger value there than somebody who just got some coins a minute ago. Basically, it becomes easier to find a new block, a hash whose output is smaller than the right-hand side of the equation, if you have more coins, and in this case if your coins have been in your possession for a long time. Now, what is the problem with this approach? First, letting the time you have possessed a coin influence the probability of finding a block opens the system up to attacks. One of the attacks is pretty simple. Intuitively speaking, you can get a coin or a few coins, not much, leave them sitting there for a very long time, wait for a moment when the people who are transacting lots of coins really fast, and not holding on to their coins, are generating blocks, and then use your larger time weight. Even though your number of coins is smaller, you use your very large time weight to get a block faster than the people who are actually doing more transactions than you. They have more money flowing, but they transact a lot. Then you get to generate blocks with double-spending and so on. And there are other problems, like bribing. Several people with several coins can easily collude here to aggregate their coins and time weight, so they get a higher probability of jointly generating a block, and they can use that to mount attacks. So there are several of these problems. Later they proposed getting rid of the time-weight parameter, at least, but still, this approach alone doesn't really let you prove security of the scheme. It's very hard to prove, even if everybody is honest, that the probability of generating a block is actually related to the number of coins you have, and how exactly this relation works. It's hard to show, so other proposals showed up.
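The Peercoin-style condition described above can be written down in a few lines of Python (a simplification with invented names; the real check hashes more fields than this):

```python
import hashlib

def pos_block_check(prev_hash: bytes, timestamp: int, target: int,
                    coins: int, coin_age_days: int) -> bool:
    # The hash must fall below the target scaled by the stakeholder's
    # number of coins and their time weight (coin age): more coins, or
    # coins held longer, make the right-hand side larger and the check
    # easier to pass.
    h = int.from_bytes(
        hashlib.sha256(prev_hash + timestamp.to_bytes(8, "big")).digest(),
        "big")
    return h < target * coins * coin_age_days
```

This is exactly why the chance of producing a block grows with stake, and also why a small, long-idle holding can suddenly outweigh coins that are actively being transacted.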
Neucoin took that approach and, from the start, got rid of the time weight. Now the only thing Neucoin uses is the number of coins you have. They also introduced faster block generation, but that brings another problem. As we know from empirical studies, and also from the analysis in the GKL15 paper, the network delay affects how much security you can actually get: if you generate blocks faster and the delay stays the same, there's a higher probability that somebody can cheat in the protocol. So this might introduce problems. Neucoin also introduces more rewards, and it has a mechanism for punishing people who try to do malicious forks and double-spending: if somebody gets caught doing this, they lose a certain amount of money. But still, no formal analysis, just an empirical protocol and an empirical implementation. Heuristics. Now, in these schemes, where do the coins come from? It's a bit complicated, right? Because if you add block rewards, if you say that whoever generates a block gets a big reward like in Bitcoin, what will happen is that the rich get richer, because the richer users with more money generate more blocks, and then they get even more money for generating those blocks. In the end, only a few very rich users will be generating blocks, and we end up with centralization. So there are some reward mechanisms for PoS. Apart from the standard approach of taking transaction fees as your source of reward, there are some other ideas. The Neucoin and Peercoin people had this idea of adding a reward for block generation that is proportional to the number of coins you already have, times the number of days you have been idle, the number of days you haven't generated a block. So if I generated a block a year ago, 365 days ago, I get the full reward: my number of coins times a coefficient that says what percentage of your coins you get as a reward.
But if I generate a block every day, each time I only get one 365th of the reward. So if I'm generating many blocks over a whole year, I will get the same number of coins in rewards as a person who generates only one block in the whole year, because the reward is proportional to the number of idle days. The idea is that over a year everybody makes the same amount of money in rewards, independently of how much money they already have. They want to eliminate the richer-getting-richer problem with this approach. In Peercoin, this percentage is one percent: they just give one percent of the number of coins you have. In Neucoin, they have a diminishing reward system, where this starts at 100% of your coins and, as you earn more money and use the system longer, slowly declines to 6%. The idea is to keep the incentive high enough that people buy coins in the system and keep the system working. Still, those models are purely heuristic protocols. As you can see from the previous equations, they are a simplistic implementation of proof of stake: you simply multiply the target value by the number of coins you have. Intuitively, heuristically, the probability that you generate a block grows with the number of coins; that's easy to see. But showing the exact relation, how this probability actually relates to the number of coins, is not so easy. It's not easy to show that you get selected with probability exactly proportional to the number of coins you have. So we have these problems. Now something came up that is interesting. It's a paper that is still mostly heuristic, but they try to formally prove that some attacks don't work against their system, and they introduce some interesting ideas. First of all, they have a randomized selection of miners.
The people who get to generate a block are selected by extracting randomness out of the blockchain using an extractor, and using this randomness to run a procedure called Follow the Satoshi, which I'm going to explain in detail later. If you give this procedure uniform randomness, it will select the stakeholders with probability proportional to the amount of coins they have. This procedure does work; it's really easy to see that it selects the stakeholder who will generate each block with probability directly proportional to their number of coins. But for that you need the randomness, and what they propose here uses an extractor. What's complicated is this: you can use an extractor if you want to give an asymptotic argument; you just let the blockchain grow enough, and when it's long enough you know that the extractor is going to extract the randomness, because you assume the min-entropy is high enough. But if we're talking about actually implementing this with concrete parameters, then you have to ask yourself: when is the blockchain long enough? When is there enough min-entropy in the blockchain to guarantee that when you apply the extractor you actually get uniform randomness? It's very hard to compute concrete parameters for this kind of application. So we were left with this problem. Still, they introduced this very interesting idea of how to actually select the miners randomly with the proper probability. They have a formal analysis of sorts: they don't have a security definition that they use to prove their system secure, but they describe several attacks and then prove that, given certain assumptions, for example that you have an extractor, those attacks don't work against the system. But still, no formal model. They also try to estimate concrete parameters, not for the extractor, but for this randomized selection of miners.
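The claim that uniform randomness makes Follow the Satoshi select each stakeholder with probability proportional to their coins is easy to see in a minimal sketch (balances and names here are hypothetical): map each stakeholder to a contiguous range of satoshi indices and look up which range a uniform index falls into.

```python
def select_stakeholder(balances, r):
    """Return the owner of satoshi index r, where each stakeholder owns
    a contiguous range of indices of size equal to their coin count."""
    total = sum(balances.values())
    assert 0 <= r < total
    upper = 0
    for owner, coins in sorted(balances.items()):
        upper += coins
        if r < upper:
            return owner

balances = {"alice": 3, "bob": 1}
# alice owns indices 0-2 and bob owns index 3, so a uniform r picks
# alice with probability 3/4 and bob with probability 1/4:
assert [select_stakeholder(balances, r) for r in range(4)] == \
    ["alice", "alice", "alice", "bob"]
```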
They implement this Follow the Satoshi scheme and they measure how fast you can run it, how well it scales. By the way, the paper here is by Bentov et al.; as far as I know it's not published yet, but it's on the ePrint archive. The scheme also has a nice way to punish malicious forks: you can take people's money away if you see they're cheating, which is a good way to incentivize people to be honest. Now, the initial distribution of money is unclear; they don't really set a specific way to distribute the initial coins. What I can tell you is that one way people have been doing this in practical schemes is an IPO, an initial public offering: you pre-mine a bunch of coins and tell people, well, I have this bunch of coins, do you want to buy some of them before the blockchain starts running? Before the beginning of time for the blockchain, you distribute the initial provision of coins between different users, and then you run the blockchain. They also propose using an initial proof-of-work scheme to distribute the money, but it's not really clear. And against certain attacks they require the users to create a web of trust between themselves, I mean, they have to trust each other, or trust a central service in some way, which puts watermarks in the blockchain saying that behind this point nothing will ever change again, or which confirms that certain blocks were generated in a certain way. You have extra trust relations, which are in the end extra assumptions that we want to avoid.
Now, given this whole situation, the problems that we want to address here are, first of all, to formalize PoS, to formalize what it means to have a secure PoS scheme, and to actually construct one that we can prove secure. There's another very interesting problem that seems very difficult to solve: giving an actual game-theoretic security analysis, showing whether with some protocol, I don't know if ours, but maybe another protocol, we can reach an equilibrium where everybody plays honestly. It's complicated because the protocols are very complicated and the incentive structure is complicated, so if you try to look at it from a game-theoretic point of view, the utility functions you have to define are very complicated. We still haven't even looked into that; there are people looking into doing game-theoretic analysis of cryptocurrencies, but here we're just going to do a standard, non-rational crypto proof: come up with better protocols, addressing attacks on current protocols, getting better parameters and strong security guarantees. We came up with a new protocol that is immune to the attacks that have been shown up to now and that has nice parameters; we're still working on estimating the concrete parameters, but everything indicates they will be very reasonable and will make for a fast protocol, and we do get stronger security guarantees. Now I want to show you how the protocol works, and I'm going to show you how to construct something similar to the Bitcoin backbone protocol as defined in the GKL paper. So I'm not going to show you a specific cryptocurrency; I'm going to show you how to build a consensus protocol that lets users agree on records that are kept in a specific order and that are immutable after a while. On top of that you can write anything in those blocks: transactions for a cryptocurrency, or messages for which you want Byzantine agreement. It's just like the Bitcoin backbone protocol, you use it as you like, and one of the very good applications is
cryptocurrencies. So first I want to show you the Follow the Satoshi procedure from Bentov et al. that allows us to select a user among all users with probability proportional to the number of coins that the user has. Basically you start with a hash function that takes in a random seed and outputs a number i that is at least 0 and smaller than the total number of satoshis. What is a satoshi here? It's the smallest monetary unit in the system, in the cryptocurrency, like cents. So you select a number that identifies one of these satoshis. Let's say you have a million such satoshis; you select a number, say 153721, and then you know this satoshi; let's say that's a unique identifier for the satoshi. Of course, you have to actually create a mapping between the satoshis in the system and these indices, but that's easy to do; I'm just showing the general idea. You select the satoshi by selecting this random number that comes from a seed that you input into this hash function. Alternatively, you could also just have a seed with the same number of bits that you need to represent all satoshis: if you have a million satoshis, you could just sample a random number between 0 and a million and use that number to identify your satoshi. Then, once you have selected this random satoshi from a random seed, you find who currently owns it: you traverse the blockchain, you find who was the last person to receive that satoshi as a transaction output, and that person, identified by their address, is the winner of Follow the Satoshi. So you can see here that we are selecting one of the users at random, and the probability that a user gets selected is exactly equal to the number of satoshis he has divided by the total number of satoshis. We get exactly the distribution that we want: the more money you have, the larger the probability that you get selected. And now this
person who got selected gets to generate the next block. So it seems the problem is basically solved, right? We can select somebody who gets to generate the block, and it's publicly verifiable: everybody who is verifying the blockchain can run Follow the Satoshi again, assuming this random seed is public. Anybody can run Follow the Satoshi, find who was supposed to generate the block, and check that the block was actually generated by that person. So it seems problem solved, we can just apply that. But how do you generate the randomness? That's one of the big problems: where do you get this seed from? In the Bentov et al. paper on Chains of Activity, they say: apply a deterministic extractor for non-oblivious sources to the blockchain. In theory it's perfect, the argument works: you can give an asymptotic argument, you show that if the blockchain grows enough you get enough randomness, and it all works. But then, as I told you, how do you estimate in practice that there's enough min-entropy in the blockchain for that? It's a bit complicated, so we want to get rid of that. Let me first show you how, given that you have the randomness, this procedure plus standard blockchain techniques gives us a functioning blockchain based on PoS, before I tell you what we do about the randomness. First, our protocol is divided into epochs; each epoch lasts for a number of blocks, let's say n blocks. For each epoch we're going to have Follow the Satoshi parameterized with just enough randomness for that epoch. In each epoch we say that we have a genesis block, even if it's a virtual genesis block, at the beginning of the epoch. At the actual beginning, when it's the first block of the whole blockchain, it's really the genesis block for the whole blockchain; then this block will be there on the blockchain, and it will contain the following information: the user IDs, I'm calling the users or stakeholders here U1 to Un, the amount of money that they have before time began, and randomness. When I say before time
began, for an epoch I mean before the epoch began; when it's the real genesis block for the whole blockchain, then it is before all time began in the mind of the protocol. So you have this information: who has what cash, and randomness. In the genesis block for the whole blockchain, when it started, we will assume that this randomness is written inside the block. But as the protocol progresses, the information about which user has which money will already be stored in the blockchain: you can just read the blockchain before your current epoch and you will find this information, so we don't need to store this block anymore. I'm going to tell you later where the randomness comes from; for now, let's assume the randomness falls from the sky, it's there for us, it's perfect randomness, and there is enough of it to run Follow the Satoshi for each block. We divide the time in this protocol, inside each epoch, into slots. For each slot, a block can be generated, or a block might not be generated if the slot leader, as we call them, the person who gets to generate a block for that slot, is offline. So let's look at how this works. First we have this genesis block, be it the real genesis block for the whole blockchain or some virtual, let's say, genesis block for the epoch, whose information you can derive. Now, using the mapping from users to satoshis that we have stored in this genesis block, and the randomness, we run Follow the Satoshi as I just explained on the previous slide. This will select a slot leader for each slot; I'm calling them E here. The slot leaders are the users that got selected by Follow the Satoshi, each with probability equal to their number of satoshis divided by the total number of satoshis, so you have a higher probability of being elected if you have more money in the system. Once you are elected, you can generate a block; you are the one who has the right to generate a block, and no one else can generate a block in that slot. So once the list of users and the cash
they own and the initial randomness are set for the epoch, you determine deterministically the slot leader of each slot inside that one epoch. During the whole epoch we will consider that the stake doesn't move, that the coins don't move; we'll consider them as they were before the epoch started. So let's say slot leader E1 was selected; that's immutable, because it depends only on this information that is set in stone. But he wasn't online, so he could not generate a block; he missed his slot, so no block gets generated. Well, it's bad for him not to be online, because he would get the transaction fees if he generated the block, and if he's honest it's in his own interest to keep the system moving. One good thing, if you look at it, is that at the beginning of the epoch you know when you get to generate a block, you know when you're going to be a slot leader, because you know this information. So you can, let's say, set an alarm on your phone saying: hey, now I have to go generate a block, it's my slot, let me turn on my computer. Now we run Follow the Satoshi again for the third slot here, and we select the slot leader. This slot leader was online; he ran Follow the Satoshi and he was happy: yay, I got selected, I'm the slot leader, I'm going to generate a block. What does he do? Very similar to Bitcoin: in the block he puts the transaction information, the transaction outputs and so on, with a block header containing a Merkle tree root over all this information. Do slots have a fixed time? Yes, let's say for example one slot lasts 10 minutes. Say the first slot begins at midnight; at midnight ten the second slot begins; at midnight twenty that slot ends. Yes, physical time. Of course we can't have perfect clock synchronization across the whole internet, but small fluctuations don't really affect this; we can work with a margin here. What would be a problem, for
example, is if somebody generates a block one second before his slot ends, and then another slot starts, and let's say this next guy was quick and generates a block right when his slot starts, and they collide. You keep the newest block. There are different rules you can use for this, but the easiest rule, which also facilitates analysis, is saying: well, you missed your slot; I already received a newer block that is valid, made by the guy who should be generating the block in the current slot, so sorry, your block doesn't get in. Then of course you might get into a situation where you have a fork. There we use a basic longest-chain rule: the forks are going to get extended and people are going to choose between them, but the rule is simple, you extend the longest fork, so in the end everybody will see the longest one and follow it. That's a guarantee. Now, what's in the block? We have the transactions, the transaction information, the state information, which is basically the hash of the previous block, and the signature by the slot leader. That's how the slot leader proves that he was the one who generated the block, that he was the one who had the right to generate it: he signs all the block information with his signing key, and then everybody can verify the signature. I run Follow the Satoshi with the randomness that is in the sky, in this magic genesis block, I see that this guy E2 should be the slot leader for this slot, and then I check the signature; if the signature is valid, and it's this block where he is the slot leader, then it's a valid block. Of course, you also have to check the state information to see which block he's linking to, because you might have these fork situations where a block gets lost, or where an adversary sends a block twice, so you need to check which block the current block links to. So the protocol progresses
like this: for each slot you run Follow the Satoshi, you find out who's the lucky winner who gets selected to generate the block, and this user generates the block mostly like in Bitcoin, but adding a signature to verify that he was the one who generated it. So here we can see that the more money you have in the system, the more blocks you get to generate, with high probability. With a simple Chernoff bound you can show that the blocks generated will stick to the stake distribution: the more satoshis you have, the more blocks you generate, since we're running Follow the Satoshi with uniform randomness. But still... yeah, you need to know, not for Follow the Satoshi itself, not for selecting the random satoshi, but you need to know who owns each satoshi in advance. We need to fix this for the proof to work; we need to know it for each epoch. That's why we have this epoch genesis block that sets in stone who is in the system and who owns each satoshi. And yeah, you don't actually need to put that block there; the only time you actually write this down in the blockchain is in the first epoch of the blockchain. In the following epochs you can just derive this information from the blockchain. But for each epoch you will consider the...
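Putting these pieces together, here is a hedged sketch of per-slot leader election for one epoch (the seed derivation, the names, and the use of SHA-256 are illustrative assumptions, not the paper's exact construction): hash the epoch randomness together with the slot index into a satoshi index, then return that satoshi's owner under the stake distribution frozen at the start of the epoch.

```python
import hashlib

def elect_slot_leader(epoch_seed, slot, balances):
    """Follow-the-Satoshi sketch: derive a satoshi index from the epoch
    randomness and the slot number, then find who owns that satoshi
    under the stake distribution frozen for the epoch."""
    total = sum(balances.values())
    digest = hashlib.sha256(f"{epoch_seed}|{slot}".encode()).digest()
    i = int.from_bytes(digest, "big") % total  # small modulo bias ignored here
    upper = 0
    for owner, coins in sorted(balances.items()):
        upper += coins
        if i < upper:
            return owner

# Frozen stake (in satoshis) for the epoch; anyone can recompute the
# whole slot-leader schedule once the epoch seed is public:
balances = {"u1": 60, "u2": 30, "u3": 10}
leaders = [elect_slot_leader("seed-epoch-7", s, balances) for s in range(1000)]
# u1 holds 60% of the stake, so it should lead roughly 60% of the slots.
```

Because the schedule is a deterministic function of the public seed and the frozen balances, a stakeholder knows all of their slots at the start of the epoch, which is the "set an alarm on your phone" property mentioned above.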
yeah, for each epoch you will consider the stake distribution at the end of the last epoch. Now, why do we do this thing of dividing the protocol into epochs and fixing, for each epoch, the stake as it was at the end of the last one? Because we need to generate this randomness again for every epoch; ideally, for every slot, we would generate new randomness, so that nobody knows in advance who the slot leaders are. What is the problem with somebody knowing the randomness in advance? That person will know which satoshi will get selected in the future, and then they can go and buy that satoshi. That's an attack: even if you don't own a lot of money, you buy the satoshi that gets selected. That's why, when we run Follow the Satoshi inside one epoch, we consider the distribution in the previous epoch, when this randomness wasn't yet known. So nobody can look at the Follow the Satoshi results and say: hey, I'm going to buy these satoshis here and I'll be selected. If you had those satoshis before the randomness was known, good for you, you get selected; if not, you don't, because the randomness changes for each epoch. Now, as I said, the ideal situation would be to generate new randomness for every slot. We spent the first half of the day discussing how to generate randomness securely with a multi-party protocol with guaranteed output delivery, and we need that here, because, as you can see, if I learn the randomness and you don't, then you cannot run Follow the Satoshi and I can cheat, having a better chance of generating blocks than you. So we need guaranteed output delivery when we run a protocol here to generate randomness, and that is exactly what we're going to do. That's what I show here: in the whole protocol, with multiple epochs, we use this guaranteed-output-delivery coin tossing as a randomness source, running in parallel with the blockchain protocol. So we start; let's say here's the start of time,
this is the blockchain genesis block; before this there was chaos, there was nothing. Then we write down a block with all the user IDs, the money they have, and randomness. That randomness will be in the genesis block; that's something you have to trust. You can say, for example, that this randomness is the hash of the New York Times of the 1st of January, 1971; you can set some public randomness, but it must be in the genesis block. Then we run the protocol I showed you before: Follow the Satoshi for every slot, people generating blocks and signing their blocks, and in parallel we run this fair coin tossing protocol with guaranteed output delivery, which is constructed just the way I showed you earlier: we use a basic coin tossing protocol based on commitments, the Blum protocol, plus verifiable secret sharing to obtain guaranteed output delivery. All the participants of the protocol will be playing; all of them will generate the messages as I showed you before. How does the protocol work? I send you a commitment to a value, you send me your commitment to a value, and everybody here in the room sends a commitment to a value; then, after all commitments are received, we send openings. That's what we discussed before; the parties are fixed. And what do we have here? A blockchain that works as a broadcast channel. So instead of, say we are all running the protocol, going around sending a commitment to each person in the room, I can just shout the commitment here, and let's say shouting is the blockchain: you all hear the commitment because it will be written in the blockchain. Why is it also good to write this on the blockchain? Because once it's set in stone in the blockchain, you can actually call out the cheaters, and you can be sure of all the messages that were sent even if you were not online at a certain point. Let's say you are offline for half of the epoch, then you come online, you send your
commitment, then you go offline again; you come online, you read people's openings, you send your opening. The messages for the protocol, all those commitments I was drawing before with arrows between each pair of parties, would now just be in the blockchain. And as you remember, apart from the commitments, we also write down the shares... sorry, actually the shares we don't even need to waste blockchain space on; we can just send the shares to people directly, or, of course, when the system grows a lot, we can put them in certain blocks. Then, when people misbehave, we can come together with our shares, reconstruct the inputs, compute the XOR of the inputs as in the Blum protocol, and get our randomness; that's our randomness, set in stone, for Follow the Satoshi. So the situation we arrive at, I have some nice arrows here, is this virtual genesis block for the next epoch that is actually determined by the previous epoch. I don't really need to write that block down in the blockchain: the information of which user has which money comes from the transactions written in the previous blocks, and the randomness comes from the fair coin tossing protocol, whose messages are in the blockchain. So I don't even need to be online all the time to run the protocol; at any time, if I join the system, I can read these messages in the blocks, run the protocol in my head, and get the randomness to verify the next epoch. Then we start a new epoch, same thing: we run the epoch protocol, with Follow the Satoshi selecting who generates and signs each block, and at the same time, again, fair coin tossing with guaranteed output delivery. What do we obtain? The same thing: a list of users with their stake distribution, and new randomness. And so the protocol goes. Now, what can we prove about this? The actual proof is very modular. First we prove that inside one epoch, assuming you have
randomness that falls from the sky, things work: we define a functionality that gives you a Follow the Satoshi function; you call the functionality, it gives you a description of the Follow the Satoshi function with proper randomness, and you can do the selection inside one epoch. So we prove that inside one epoch, considering that you have a fixed stake for that epoch, the whole list of users, and a fixed Follow the Satoshi that uses proper randomness, the probability that an adversary controlling less than half the stake breaks the protocol decreases exponentially. Formally, what do we prove? We prove that we can achieve the chain quality and common prefix properties of the GKL paper. Common prefix basically means that, in this case after a whole epoch goes by, with very high probability there will be no forks: everybody will have converged to one single chain. Chain quality means that after a while, let's say after an epoch is done, a considerable fraction of the blocks will have been generated by honest users. Again, intuitively, since we have honest majority and we're running Follow the Satoshi, we can show that with high probability there will be at least one honest block, because we have honest majority and the distribution of blocks follows the distribution of stake. We also show an extra property that we had to define, chain growth: we have to show that the chain grows, that even though there are sometimes offline people, people who don't generate their blocks and so on, it still grows; and we can prove that. Now, in our grand scheme of things, we show this for one epoch alone, assuming perfect randomness for Follow the Satoshi and a fixed distribution; then we show that we can compose this and glue the epochs together, by modifying the protocol for one epoch to run the fair coin tossing with guaranteed output delivery at the same time,
writing the messages in the blockchain, and we show that this composition works given that we have common prefix and chain quality. That's why we need to prove those first for the epoch: we need to prove that by the end of the epoch everybody will agree on the messages that were sent by the fair coin tossing protocol, right? Because we're putting those messages in the blockchain; if the adversary could create a fork that spans the whole epoch, then he could make different people agree on different randomness, which is an attack. But first we show that he can't do that for a whole epoch, which means that when we run the fair coin tossing using the blockchain as a broadcast channel, we arrive at the end of the epoch with fresh uniform randomness to seed Follow the Satoshi. We prove that after the epoch is finished, common prefix holds; that's what we prove. He can generate small forks inside the epoch, but once we finish the epoch, we know that behind this point there will be no forks that span the whole epoch, that cut, say, ten blocks out. The first many blocks in the epoch are agreed upon by everybody; that's what common prefix tells you. And we prove that at least one of those blocks will be honest; not only one, at least a constant fraction. Are we still calculating that? We have the proof in terms of asymptotics: we have an expression for the probability that an adversary succeeds in generating a fork after n blocks, and we can clearly show that this expression decreases exponentially with the number of blocks, but we haven't estimated concrete numbers yet; maybe Aggelos is already looking into that. We will certainly have a minimum size for the epoch; it depends, of course, on the cheating probability you're willing to tolerate, whether you want the adversary to cheat with probability only 1% or only 0.0000001%. But it seems from what we've looked at that this curve decreases very fast; the
probability that the adversary succeeds decreases really fast with the number of blocks. So we're confident that you don't need, let's say, a thousand blocks; you don't need that many blocks to get a very low probability that the adversary succeeds, but we're still estimating. Yeah, so that's one of the next steps that I'm going to show: actually estimating the concrete parameters. It's a good question, because we want to know the concrete parameters. The good thing is, as I showed you, this protocol is not very complicated: you send a message to commit, you send another message to open, you send the shares along in the same round, so it's not a complicated protocol; it can certainly run inside an epoch. So we don't have a problem like, oh, we're going to have to make the epoch huge. I've heard some concern from practitioners; they asked us: but you're running this super complicated crypto protocol, isn't it going to take forever, isn't it very inefficient, isn't it going to make an epoch very large? No. As I showed you before, the protocol takes two or three rounds; it's quick, and it's very efficient. The verifiable secret sharing, as I told you before, is based on error-correcting codes; it's information-theoretic and extremely efficient. You can also base it on computational assumptions, depending on the trade-off between communication and computational power; we can implement VSS efficiently with elliptic curve multiplications. The commitments, as I also told you, are extremely efficient; you can implement them based on most public-key encryption schemes, or even using PRGs. So we're good with that. We're combining two very simple things, commitments and VSS, that can be constructed efficiently. So the result, even though it might sound strong, something that guarantees that you get perfect randomness, is not one of those very complicated asynchronous MPC protocols. No, it's something that
you run in two or three rounds, that doesn't require that much communication, and that can be implemented efficiently, in terms of running fast on a computer or even a cell phone. If you think of DDH-based commitments: you have a group with a base g, you compute g to the message times h to the randomness, where h is another group element that's a parameter; that's a commitment. You do two exponentiations and you've got a commitment right there. So it is something that can be very efficiently implemented; that was one of the concerns I'd heard before. The factor that makes the epoch stretch a little bit is actually being long enough to achieve common prefix, because of the structure of Follow the Satoshi and this random selection of slot leaders, the fact that you have to account for people who are offline, and the fact that the adversary can cheat in some ways. That's what makes the epoch grow a little: we need to achieve common prefix. But the protocol itself runs basically as fast as you achieve common prefix, because you need to send a message and wait for common prefix: you have to send the commitments and wait for common prefix so you're sure that everybody has committed, right? Otherwise you could cheat, as I showed you before: if you know somebody's randomness before you send your commitment, then you can choose your randomness in a way that determines the final result. So we need to make sure that everybody has committed, and to do that we need to wait for common prefix; after that we can just open. Sure, in that way, I see what you mean. Well, do you need to have... ah, I get your doubt. No, not everybody has to be chosen in an epoch. If you don't have much money, let's say you have 0.001% of all the satoshis, you might just not get selected in an epoch. Over the whole blockchain, over many epochs, it should be the case that 0.001% of the slots are yours, but maybe that 0.001%
is so small that you don't get selected inside a single epoch. But that's not a problem; it doesn't affect the analysis. What does affect the analysis, in terms of the number of users, is of course how many users you have, how many you consider corrupted, and how many offline users you also have to account for as corrupted. For the concrete parameters: if we have a million users and we consider that a third are corrupted, then maybe we don't need that many slots to get some honest blocks to show up; but if we have fewer users and more corrupted people, then we need more blocks to make sure honest blocks still show up. That's more or less how it affects the analysis; we basically treat offline people as corrupted and deal with them that way. Now, one interesting thing that's coming up, sure, please. So that's something we're also working on. You say: in this case, let's say I don't have much money, so the probability that I get selected is very small, so I don't want to keep playing the protocol all the time. We're working now on something called delegated proof of stake, where you delegate the power of generating a block, whenever you get selected, to a bigger entity, the equivalent of a mining pool; you delegate your signing power to them. We're working on a solution based on proxy signatures, which seems natural for this problem. Of course it incentivizes centralization a bit, right? But compared to Bitcoin, at least in this case there's one clear advantage: in Bitcoin, as a person who only has coins, you have zero control, and you can't choose which mining pools form or not. In this situation you have the coins, you have the power, and you can choose to which person you're going to delegate your signing power, and if they start doing
bad stuff, you also have an incentive to watch over the people who are centralizing this delegated signing power. You have an incentive to actually check and audit these people to make sure they're not doing bad things, because otherwise you will lose your money, and you can easily re-delegate: "I don't want to delegate to you anymore, I'm going to delegate to another person." That's the idea of this scheme: small stakeholders who don't have a large enough incentive to stay online all the time can delegate to somebody, and then we have enough blocks coming up. That's one of the future works we're checking out right now. I think we have a solution that works, but of course we need to finish formalizing it; I don't want to say for sure before I write down a proof.

But parties don't necessarily need to be online all the time to run the protocol. Sure, they have to come online at some point. If more than half are offline... then it's bad, inside one epoch. Inside one given epoch we need the honest majority to be online. What I meant by saying they don't need to be online all the time is that the honest parties don't all have to be online at the same time. They can come online in one slot, write down their commitments, and go offline again; come online again when they're going to generate their block, generate it, and read the openings. What I mean is that they don't necessarily need interactive communication: they can write things on the blockchain and go offline. If they show up to contribute to the fair coin-tossing protocol, of course they have to show up in time, because there are rounds; they show up in time to contribute, and if they show up to generate their blocks, then it's fine. If they never show up and don't contribute for a whole epoch, yeah, that's a bad epoch; then security doesn't hold. That's for the beginning of the coin tossing, the commitment phase. In the opening
phase, let's say we have an adversary that is bad and aborts. To get guaranteed output delivery in the opening, the parties need some direct interaction to do the reconstruction of the VSS; then they need to interact. But that's one part at the end of the epoch where they need to be there to reconstruct the shares, the secrets. For most of the protocol, when you're just sending in the commitments and generating blocks, it's alright: you can come online, put your commitment in the blockchain, go offline, come back online, generate a block. But of course, if you have a problem with the openings and you want guaranteed output delivery, you need the reconstruction of the VSS, and that will involve interaction. Even then it's a short interaction, and that's the nice thing: you don't need a lot of computation or communication, it can be done in a few rounds. I hope that answers it.

Please, please. We're considering a static adversary that corrupts parties before execution begins. It would be interesting to show that this is adaptively secure, but then it's more complicated. It's probably possible to handle the case where the adversary comes in the middle of an epoch and corrupts a new party, but we haven't studied that case yet; our proof is for the case where the malicious and honest parties are set before the epoch begins. Let me think whether the proof would work between epochs... it might actually work. We didn't write it that way; since this was the first attempt at proving this thing secure, we considered synchronous networks and static corruption from the beginning of the protocol. But I believe we could also prove it when the corruption changes between epochs. Inside the same epoch, though, our proof doesn't work, no way: if corruption changes within an epoch, our techniques require it to be static.
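As an aside, the VSS reconstruction step mentioned a moment ago can be sketched with plain Shamir sharing. Everything here is illustrative: the field size `Q`, the `(t, n)` parameters, and the use of plain Shamir where the actual protocol requires a *verifiable* secret sharing scheme so that bad shares can be detected.

```python
# Toy Shamir sharing standing in for the VSS reconstruction step.
import secrets

Q = 2**61 - 1   # a prime, so every nonzero element is invertible mod Q

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any t reconstruct it, fewer than t
    reveal nothing (random degree t-1 polynomial with f(0) = secret)."""
    coeffs = [secret] + [secrets.randbelow(Q) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, i, Q) for i, c in enumerate(coeffs)) % Q
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0: recovers the committed value even
    if the dealer refuses to open (guaranteed output delivery)."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % Q          # numerator of L_i(0)
                den = den * (xi - xj) % Q      # denominator of L_i(0)
        total = (total + yi * num * pow(den, Q - 2, Q)) % Q
    return total

shares = share(12345, t=3, n=5)
assert reconstruct(shares[:3]) == 12345   # any 3 of the 5 shares suffice
assert reconstruct(shares[2:]) == 12345
```

This is also why the reconstruction phase is cheap: a few rounds of sending shares and one interpolation, with no heavy computation.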
But between epochs, I think it might actually be possible to modify the proof, or even prove it directly. I won't say for sure because I haven't written it down, so I don't want to promise, but I think it would be good to actually show in the paper that between epochs you can have different corruption, while inside one epoch, with our techniques, it has to stay fixed; maybe different techniques could handle that. Thanks for the question; it's something I hadn't actually thought about before, but it makes sense, and I think we can show that it works like this. Any questions? Ah, okay, any more questions?

Ah, yeah, you caught the detail. This is just a screenshot of the previous slide with one epoch that I shrank, but you are totally correct: this would be the virtual genesis block that is actually determined by the previous epoch, and this one means the same thing. You caught the tiny detail I thought nobody would notice. I just wanted to illustrate that you run the epoch protocol and then start from this epoch genesis for the next epoch, but you are correct; I'm happy that you understood that it's the same. Good, any more questions?
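Before moving on, the epoch mechanics discussed above can be recapped in a toy end-to-end sketch: commit to random inputs, wait (conceptually) for common prefix, open, combine into the epoch beacon, then run follow-the-Satoshi. Everything is illustrative: the group parameters, the stake numbers, and XOR as the combiner. In the real protocol the inputs are also secret-shared via VSS so that a party refusing to open can be recovered from.

```python
# Toy sketch of one epoch's randomness generation and leader selection.
import secrets, random, bisect
from functools import reduce

# DDH-style commitment: two exponentiations, c = G^m * H^r mod P.
P = 2**127 - 1        # toy modulus, not deployment-grade
G, H = 3, 5           # toy bases; in practice log_G(H) must be unknown

def commit(m: int) -> tuple[int, int]:
    r = secrets.randbelow(P - 1)
    return (pow(G, m, P) * pow(H, r, P)) % P, r

def open_ok(c: int, m: int, r: int) -> bool:
    return c == (pow(G, m, P) * pow(H, r, P)) % P

# Commit phase: contributions go on chain before anyone sees the others',
# so nobody can pick a value that steers the final result.
inputs = [secrets.randbelow(2**64) for _ in range(3)]
board = [commit(m) for m in inputs]

# Opening phase: everyone checks the openings against the chain.
assert all(open_ok(c, m, r) for (c, r), m in zip(board, inputs))

# Epoch beacon: XOR of the opened contributions (a simple stand-in).
beacon = reduce(lambda a, b: a ^ b, inputs)

# Follow-the-satoshi: the beacon picks one "satoshi" per slot; the coin's
# owner is the slot leader, so selection is proportional to stake.
def follow_the_satoshi(stakes: dict[str, int], seed: int, n_slots: int):
    rng = random.Random(seed)
    owners = list(stakes)
    cum, tot = [], 0
    for o in owners:
        tot += stakes[o]
        cum.append(tot)
    return [owners[bisect.bisect_right(cum, rng.randrange(tot))]
            for _ in range(n_slots)]

stakes = {"alice": 700, "bob": 299, "carol": 1}   # carol holds ~0.1%
leaders = follow_the_satoshi(stakes, beacon, n_slots=20)
assert set(leaders) <= set(stakes)   # a tiny holder may get no slot at all
```

In the real protocol, each phase only completes once the relevant transactions are deep enough in the common prefix, which is exactly why the epoch length is tied to the common-prefix parameter.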
Then I can tell you a little bit about what we are going to do next. First, determine concrete parameters. It makes a lot of sense; we need to do that for the implementation, specifically the epoch length, that's a good one to be determined. We have the proofs and the expressions; now it's more a matter of plotting graphs and seeing how the functions behave to get the concrete parameters. Then, work with the development team at IOHK to come up with a prototype, since this is a project we've been developing inside IOHK for an upcoming cryptocurrency product. We want the prototype to see how it behaves, and it would also be good data for the paper, to show how the implementation works in practice; but there are developers who will do that.

Some other things that came up: Tanaka sensei's question is a good one. Can we prove that this protocol works when the corruption changes, when the adversary can corrupt somebody, so that a user behaves maliciously, then honestly again, then maliciously again? It would certainly be nice to investigate that; nothing like that has been done for any cryptocurrency protocol so far, it's all assuming static corruption. It would also be interesting to study the case where users join in the middle of the protocol. Here we're considering that the users are known at the beginning and then stay the same throughout, but it would certainly be cool to look into the adaptive case.

No, they have to wait for the next epoch, that is correct. Well, you can do transactions; if you jump in in the middle of the epoch, you can buy coins, exactly. So that's also a point: the tradeoff between a longer epoch, with a tiny probability of success for the adversary to do a fork, and a shorter epoch, where changes in stake get reflected quicker.

"I have a question." Please, please. You consider the owner who owned it in the previous epoch; it's fixed in that virtual genesis block. Whoever currently
has it during the epoch doesn't matter; what matters is who had the satoshi in the previous epoch, and that's set in stone in the blockchain for the previous epoch. That's the idea. But this is an interesting problem. The only formal analysis right now that considers people joining the protocol is in the paper by Rafael Pass and abhi shelat and their student, where they show security in asynchronous networks and also consider people joining the protocol, even though the corruption is static. They consider an interesting, funny model where you have a static adversary, but the adversary can spawn its users into the protocol later: he can't corrupt one that's honest, but he can spawn a new corrupted one inside the protocol. The full set of users and corruptions is known, but they are not assumed to be participating in the protocol from the beginning, something like that. But still, it's a complicated case; that's why people are still struggling with defining it. Our idea here was to start from the simple case, a static adversary and synchronous networks, and then progress towards asynchronous networks, an adaptive adversary, and composition.

The good thing is that one of the next steps is also looking into composition. If you look at the protocol, basically what you need to make the protocol tick, to make the protocol run, is to refresh the randomness every epoch. If you control the randomness, you control follow-the-Satoshi: if I can give you an arbitrary randomness, I know exactly who will be generating blocks in the next epoch. So my intuition for proving composition in the universal composability model, if you're familiar with that, it's a model people use to prove composition, is this: the VSS here is information-theoretic, so I can always cheat in the VSS if I'm simulating the protocol in the security proof, and I can combine this with a UC-secure commitment scheme that allows me to select a specific randomness. Then I can
cheat on the randomness in the simulation: I can set the randomness in a way that I control who gets to generate the blocks, and then I can simulate things perfectly, basically, for the proof. That's the intuition, but I haven't worked it out. The hard thing there is the technicality of UC: in UC you don't have slots, you can't just say that you're going to get messages delivered inside the slots, you have to deal with time. But Juan Garay and some other clever people came up with a way of modeling this in UC, where you have a clock functionality that makes time tick. I think the technique itself is conceptually kind of straightforward, but it's a highly technical analysis to prove that this thing is composable, because you have to deal with these tiny details of time and message delivery. It's probably doable, though; that's among the next steps too. So there's a lot of work to be done on these things, and the delegation, of course. Good, if you don't have any more questions, that's all I wanted to say. Thank you.