Okay. Thank you. Ethereum 2.0 randomness using a verifiable delay function. So, I'm going to speak about how to build a randomness beacon using a new primitive called a VDF, and then I'm going to explain how to build the VDF, the cryptography that goes behind it, and the supporting hardware. There should be some time for questions at the end. The mics are here. Okay. So, let's get started with the randomness beacon. So, we use the randomness in two different places. We use it at the consensus layer in the beacon chain, where we're basically doing secure sampling of validators. We have this huge pool of validators, each with 32 ETH, that could be 100,000 or even millions of validators, and we're basically sampling what we call leaders and committees, and that's part of the process of Ethereum 2.0. In addition to using it at the consensus layer, we can also expose it in the shards at the application layer. So, through an opcode in Ethereum 2.0, you should be able to have totally unbiasable randomness as a core primitive in the virtual machine, and that should be useful for lotteries, gambling, gaming, and all sorts of applications. So, what are the goals for the randomness beacon? We want it to be unpredictable, of course. We want it to be unbiasable, and it turns out that's much more difficult to do, and we also want it to be unstoppable, and I'll talk more about that later. So, there are basically two classical families of randomness beacons. There are beacons based on commit-reveal, for example RANDAO, and these randomness beacons have an attack called the last-revealer attack. So, you have an ordered list of participants, and when you're about to use the randomness, the last participant can either reveal or withhold and, therefore, bias the randomness.
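The commit-reveal mixing and the last-revealer attack just described can be sketched in a few lines. This is a toy illustration, not protocol code: the hash-to-int helper, the XOR mixing, and the fixed example secrets are all made up for the sketch.

```python
import hashlib

def h(data: bytes) -> int:
    """Toy hash-to-int; stands in for whatever hash the protocol mixes with."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def mix(reveals):
    """XOR each revealed secret into the pool, skipping absent proposers."""
    acc = 0
    for secret in reveals:
        if secret is not None:   # a proposer may simply not show up
            acc ^= h(secret)
    return acc

# Everyone before the last revealer has already committed and revealed.
earlier = [b"slot-0", None, b"slot-2", b"slot-3"]
pool = mix(earlier)

# The last revealer committed to this secret long ago, but now sees `pool`
# and can choose between exactly two final outputs:
last = b"slot-4"
if_reveal = pool ^ h(last)
if_withhold = pool

# They simply publish (or withhold) to get whichever outcome they prefer,
# e.g. whichever value is numerically smaller under some predicate:
chosen = min(if_reveal, if_withhold)
```

The specific predicate doesn't matter: any criterion the attacker cares about gives them one free bit of bias per withheld reveal.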
And then you have approaches such as the DFINITY construction, which is based on threshold cryptography, and these basically require a certain threshold of online participants to create the next random number, and so if you don't have enough online participants, the randomness beacon stalls, and in the case of DFINITY, that just stops the whole blockchain. So, just to give a bit more context on this last condition, one of the design goals of Ethereum 2.0 is to survive World War III. So, we're assuming that 80% of the nodes could go offline, and we still want the system to run. So, let me briefly explain how RANDAO works, because we're going to be building upon it. So, in RANDAO, you have a RANDAO epoch, about 17 minutes, and that's 128 slots. Each slot is 8 seconds, basically, in the beacon chain, and in each slot, you have a beacon proposer. The beacon proposer is invited to create a block and basically reveal a secret that they've committed to in the past. So, the first beacon proposer reveals a secret, which is like local entropy, and it contributes to this pool of entropy through the RANDAO mix. So, we have the next revealer that's mixed in using XOR. You know, it's okay if some proposers don't show up and don't reveal; we just move on. And then once we want to use the randomness, for example at the very end of the epoch, at slot 128, this is where the problems start coming up, because the last revealer already knows all the actions of the previous proposers, and so they have a choice. They can either stay put and not reveal, or reveal the secret, and effectively, they can choose between two random numbers, and they'll choose whichever is most favorable to them, and that opens the door to various attacks. Okay, so, we have this new cryptographic primitive, which is very recent, just a few months old, and it's a VDF. So, the function part of it is very simple.
It just means that you have an input, and for every input, you have a unique output. And if you want, you can think of it as a hash function. There's a second parameter, which is the difficulty. So, the difficulty specifies the amount of sequential work that needs to be done in order to compute the output. So, we're talking about inherently sequential computation, which takes time, and this is where the delay part of VDF comes in. And then, we have a second output, which is the proof, and this is where the verifiable part comes in. Basically, once you've done the computation and you have the output, you can also produce a proof and give the proof to others and convince them that the output corresponds to the input immediately, without having to do all this sequential work. Okay, so, this is the gist of the construction. We have two parts. We have the RANDAO mixing period, which is one epoch, and that produces a biasable RANDAO mix, and then you feed the RANDAO mix into your VDF. The VDF is going to take time to compute, at least one epoch of guaranteed delay, and then on the other side, the output is going to be your unbiasable randomness. Okay? So, this is briefly the safety argument. Why does this produce unbiasable randomness? So, if you look at one given epoch, you will have at least one honest proposer, and the reason is because we have 128 blocks, and we have an honesty assumption and a liveness assumption, which say that with very high probability, we'll have at least one. And if you look at the last honest proposer, that will be the point after which everything is predictable by the attacker. So, the attacker can try and build various RANDAO mixes given his local entropy, and can start feeding them into the VDF as soon as possible.
But the VDF is going to give you a guaranteed delay of one epoch, which means that all the outputs will be produced after the end of the RANDAO epoch, and it will be too late to try and bias the randomness, because every action that has been made is now binding; the blockchain has moved forward. Okay, so, in order to get this guaranteed delay, we are making a safety assumption which is rooted in the hardware. And specifically, we want to prevent an attacker, even an attacker with a huge budget, from being able to build specialized hardware which is significantly faster than what can be done on the commodity hardware. So, the speed at which you compute the VDF, the function, is going to depend on the hardware you have, and basically we want the good guys to be not too bad relative to the bad guys. And in particular, we have this A_max protocol parameter, the maximum speed advantage that an attacker can have. And for example, we can set it to 10. And the strategy that the Ethereum Foundation is taking is actually to go ahead and build the best ASIC that we can and give it away to the world, so that the baseline for the commodity hardware is actually pretty good, and so that we can set A_max very conservatively while at the same time keeping it reasonably small. Okay, we also have a liveness assumption. So, we need at least one person in the whole world to be running the commodity hardware. And the strategy of the Ethereum Foundation here is basically to build thousands of rigs and give them away to the community for free. Give them away to the community, but also beyond that, for example, to third parties. And if at least one of these pieces of hardware stays online, then we're good. Okay, so we have the commodity hardware and we have this A_max assumption. Now it's very easy to have a guaranteed delay of one epoch.
All you need to do is target an evaluation period of A_max epochs, and an attacker will only be able to shrink that down to nothing less than one epoch. And this is the whole scheme, basically. So we have part one: the RANDAO mixing process produces biasable entropy. This is taken by the VDF evaluators. We need at least one in the world to do that. They will start the number crunching. That's going to take a bunch of time, about three hours. And then after three hours pops out the unbiasable randomness, and then we need a one-epoch inclusion buffer for the randomness to come back on chain. And again, you know, within one epoch we're assuming that there's at least one honest participant, and that participant will make sure that it's on chain. And what we do is we basically have a recursive construction. So we use the strong randomness, the unbiasable randomness, to reseed the next 128 proposers. And another thing that we want is a new random number at every epoch. We want reasonably fast generation of these things, and so we're going to have parallel randomness beacons, and they're going to be staggered in this fashion. Okay, so that's it for the randomness beacon. Now let's have a look at how we actually go about instantiating a VDF. So VDFs tend to be built around a basic building block, and you keep on iterating this block many, many, many times. And the basic building block that we have is the squaring function. So you take a number, you multiply it by itself. That's it. And then you reduce modulo n, where n is an unfactorizable RSA modulus. So no one knows the factorization of n. And basically, you do multiple squarings. You do t squarings, where t is going to be your time parameter, and the output is going to be x to the 2 to the t. Okay, so let's go through the whole VDF scheme in one slide. Super simple. So the output, as I said, is going to be x to the 2 to the t mod n. And then we want to build the proof.
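The whole scheme on that slide, t sequential squarings plus a challenge-response proof made non-interactive, can be sketched with deliberately tiny numbers. This follows Wesolowski's construction as described here, but the prime-derivation helper is a simplified stand-in for the real Fiat-Shamir step, and a real modulus would be ~2048 bits with t in the trillions:

```python
import hashlib

def vdf_eval(x: int, t: int, n: int) -> int:
    """The slow part: t inherently sequential squarings, y = x^(2^t) mod n."""
    y = x % n
    for _ in range(t):
        y = (y * y) % n
    return y

def challenge_prime(x: int, y: int) -> int:
    """Simplified Fiat-Shamir: derive a small challenge prime from (x, y)."""
    seed = int.from_bytes(hashlib.sha256(f"{x}:{y}".encode()).digest(), "big")
    l = seed % 1000 + 2
    while any(l % d == 0 for d in range(2, int(l**0.5) + 1)):
        l += 1  # step to the next prime
    return l

def prove(x: int, t: int, y: int, n: int):
    """pi = x^q mod n, where q is the quotient of 2^t by the challenge prime."""
    l = challenge_prime(x, y)
    q = (2**t) // l
    return pow(x, q, n), l

def verify(x: int, t: int, y: int, n: int, pi: int, l: int) -> bool:
    """Fast check: pi^l * x^(2^t mod l) == y (mod n). Two small
    exponentiations and a multiplication; no sequential work."""
    r = pow(2, t, l)
    return (pow(pi, l, n) * pow(x, r, n)) % n == y

n = 1009 * 1013        # toy modulus of "unknown" factorization
x, t = 5, 20
y = vdf_eval(x, t, n)  # the delay
pi, l = prove(x, t, y, n)
ok = verify(x, t, y, n, pi, l)
```

In the real scheme the verifier re-derives the challenge prime from (x, y) itself, which is what makes the proof non-interactive, and the check stays millisecond-fast no matter how large t is.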
So the proof is going to be based on a challenge-response scheme, where a random challenge will be given to the person wanting to build the proof, and then they will build this proof as shown. And you can make it non-interactive with the Fiat-Shamir heuristic. And then the verification is just checking this equality. So this equality is very fast to check. It takes about one millisecond on a single core. And it's basically two small exponentiations and a multiplication. So this is the scheme by Benjamin Wesolowski from June 2018. Extremely nice. But there is one important detail, which is how do you generate the modulus? We need to have an RSA modulus which no one can factor. And so there are various ways to get such a modulus, but the preferred approach that we're looking into is having an RSA ceremony. So this is similar to what Zcash did with the powers of tau. So you have a bunch of participants, for example, 1,000 participants, and they are going to participate in what's called a multiparty computation. And you need just one of them to be honest in order for the output, which is a 2,000-bit RSA modulus, to be unfactorizable by everyone. And just to speed things up, we could have a trustless coordinator in the middle, as shown. So the team that is working on the multiparty computation is Ligero. They're experts in MPCs, and they're from a couple of universities. And these are the parameters of the ideal multiparty computation that they're building. So we're looking to have 1,000 participants, which is much bigger than Zcash, which only had 88 participants, I believe. We're looking to produce a modulus of size 2,000 bits. It's n-1 maliciously secure, which means that you only need one honest participant to be there. One of the things which might be a bit tricky is that it's a synchronous thing. So everyone needs to be online at the same time to participate. The good news is that it's only a one-time thing, a one-time setup, and it shouldn't last too long.
It should last about 10 minutes. And part of the reason why it's so short is because they got it down to just 20 rounds of communication. Okay, so now the last piece of the puzzle, the VDF hardware. So right now we're working with universities around the world that specialize in hardware implementation of modular multiplication. And they have various candidate circuits, and some of them are extremely fast. And based on the circuits that they've presented, this is what we believe we can achieve. So we can achieve a latency of two nanoseconds per 2,000-bit modular squaring. This is extremely, extremely fast, much faster than what a CPU or an FPGA could do. We're targeting a fairly advanced process node, 16 nanometer from TSMC. And the size of the chip, the area, and the power are very reasonable. 20 square millimeters and 7 watts is pretty good. And so we'll be taking these ASICs and putting them in a rig. Each rig would have on the order of A_max ASICs, so maybe about 10 ASICs, and that will lead to each machine consuming about 100 watts. And the machine, hopefully, should look something like a Mac mini that you just plug into the wall, and it just works. Building hardware is expensive. We're talking tens of millions of dollars, especially since we want to give away the rigs completely for free. But I'm very proud to announce that we're making a partnership with Filecoin. So we've agreed on a 50-50 split on the current ongoing research. And if we do go through with the whole project, then I think that would be the largest cross-blockchain collaboration ever. We're also inviting other blockchain projects to come in. So Chia is working on VDFs. I know that Tezos is looking to upgrade their randomness. They're more than welcome to come in. Cardano could use VDFs. Algorand could use VDFs. The more the merrier. We'll have a better ASIC at the end, and every participant will have to pay less for the ASIC.
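Putting the talk's numbers together, the difficulty target follows from simple arithmetic. The figures below (8-second slots, 128 slots per epoch, A_max = 10, 2 ns per squaring) are the ones quoted in the talk; treat the result as a back-of-the-envelope sketch, not a final protocol value:

```python
# Back-of-the-envelope sizing of the VDF difficulty, using the numbers
# quoted in the talk (all of these are examples, not final parameters).
SLOT_SECONDS = 8
SLOTS_PER_EPOCH = 128
A_MAX = 10            # maximum speed advantage assumed for an attacker
SQUARING_NS = 2       # ASIC latency per 2,000-bit modular squaring

epoch_seconds = SLOT_SECONDS * SLOTS_PER_EPOCH   # 1024 s, about 17 minutes
target_seconds = A_MAX * epoch_seconds           # 10240 s, about 170 minutes

# Number of sequential squarings, i.e. the difficulty parameter t:
t = target_seconds * 10**9 // SQUARING_NS        # ~5.1 trillion squarings

# Even an attacker A_MAX times faster still needs a full epoch:
attacker_seconds = target_seconds / A_MAX        # equals epoch_seconds
```

The ~170-minute target matches the "about three hours" of number crunching mentioned earlier, and the attacker bound is exactly the one-epoch guaranteed delay.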
So it's a win-win, and we really encourage collaboration here. One of the exciting things that we're looking to do to get the fastest possible circuit that we can is to organize an open-source hardware competition. So anyone who knows how to design a hardware circuit will be invited to design latency-optimized modular multiplication circuits, and there will be very large cash prizes for the participants. We're also doing research between now and the competition. So the competition will maybe happen mid-2019. And so right now, we're looking at all the possible ways in which we could squeeze latency to get a really good commodity ASIC. So if you have expertise in any of these things on the right, modular multiplication, reduction trees, compressors, FinFETs, please email us. And today, we just released this website, vdfresearch.org, where you'll find tens of links, maybe 30, 40 links, and you'll be able to dig more into the content. So just to give a little bit of perspective on what we're looking to build here, let's compare VDFs with traditional proof-of-work. So VDFs offer something rather unique, which is unbiasable leader election, but they're also much less costly than proof-of-work. So in terms of energy expenditure, it's about 10,000 times less energy than proof-of-work. And in terms of hardware, proof-of-work right now requires about 10 million GPUs for Ethereum, whereas we need only a few thousand ASICs for the VDF. And also in terms of protocol subsidies, proof-of-work is very expensive. You know, all the hodlers need to pay a billion dollars of inflation per year to support the proof-of-work, whereas for the VDF, the incentive would be fairly small, about 1,000 times less. Okay, so this is my conclusion slide, and then we can take questions. So if we do go through with this project, and I really hope we do, then we'll basically be breaking several world records.
So we'll have the first World-War-III-proof unbiasable randomness; the only construction that we know of that has both the unbiasable aspect and strong liveness uses VDFs. We will be organizing the largest multiparty computation ever. The previous record was Zcash. We would be building the first open-source ASIC. Open-source ASICs haven't really been done before, so this is very exciting for me. And also, as I mentioned, we are looking to have the largest cross-blockchain collaboration to actually build this thing as an industry-wide project. Okay, thank you. So we have about 10 minutes for questions. There's a microphone on both sides. Thank you for the talk. So, a question about the RSA number generation ceremony. Can you talk a bit more about this? What is the input from each participant, and how is the resulting number going to be bound to 2048 bits? Okay, so you want to know about the details of the MPC. So this is still kind of a bit of open research and somewhat beyond my field of expertise, but basically every participant has a random number, and then you take the random numbers from every participant and you add them in such a way that no one knows what the sum is, and then you do biprimality testing on the result. So basically, in a way that no one knows the details of the secrets, you check that you're basically looking for a number that is the product of two primes. And if it's not the product of two primes, you do that again, and again, and again. So you do many, many rounds until you find a number that is suitable. Hello. Thank you for your talk. Am I right in saying that the problem with DFINITY that you defined was that it fails if nodes go offline? Right. So DFINITY has a two-thirds honest-and-online assumption. So I made the calculations: if roughly 10% to 15% of the honest nodes go offline in some sort of catastrophic situation, or even a not-so-catastrophic situation, then the beacon would stall and the whole blockchain would stall.
Okay. So I just want to challenge part of that assumption. I'm not linked to DFINITY or anything like that. You're asking us to trust three aspects here. One, that the signing ceremony won't generate toxic waste. Two, that this centralized hardware will be trustable. And three, that this brand-new set of cryptography from this year is the right thing to use, rather than just trusting that 10% of people won't go offline simultaneously. Right. I mean, you can pick whatever trade-off you want. It's true that a pure software... We don't get to pick it. You're picking it. No, at the end of the day, this is a community decision, and we're just making a suggestion here. There is a trade-off. You know, you can either have strong liveness and hardware, or you can have a pure software solution and no strong liveness. A lot of the infrastructure that we're building for Ethereum 2.0, actually, all the infrastructure, is designed around strong liveness. So that is not something that we want to compromise on. What is totally possible is actually to just stick with RANDAO. RANDAO is a pure software solution with no hardware, but there's actually no loss in adding hardware that will upgrade RANDAO. And the reason is that there are two ways that the hardware, or the cryptography, can fail. Number one, the RSA modulus is factored, for example, by a quantum computer, in which case it would take no time for an attacker to compute the VDF. In that case, we fall back on the safety of RANDAO. In the case where all the hardware suddenly goes offline, or is all hacked at the same time, then instead of having a liveness failure, we also fall back on RANDAO for liveness. So the VDF is a strict upgrade over RANDAO. Yeah. So you mentioned synchronicity for the multiparty computation. Can you expand on why you would need a synchronous ceremony? I didn't hear the whole question, but I think you're asking why we need to have a synchronous MPC? Yeah. Yeah.
Simply because with the state of the art of RSA MPCs, we just don't know how to make them asynchronous. The Zcash powers of tau was asynchronous, and I believe that is more the exception than the rule. So the current state of the art is synchronous, so we're stuck with that. The best we can do is make sure that the duration of the MPC is as small as possible so that we don't waste people's time. What's the fundamental reason, at a high level? Can you expand on that? Yeah. Again, I mean, the MPC is going beyond my domain of expertise. There will be a paper published soon, I believe, by the Ligero team. And actually, that work is based on a paper from Crypto 2018. So if you email me, I'll point you to the paper. Thank you for your presentation. I have a question about the VDF. So you are using a VDF which is easy to speed up using ASICs, right? So your VDF is easily sped up. Have you considered, you know, doing a competition for a VDF which would be ASIC-resistant? If you're spending 20 million dollars on the ASIC, maybe you can take a million dollars and try to look for different VDFs. Right. So there are different VDF constructions, and there are some VDFs which are known as proto-VDFs, where instead of having an exponential gap between the prover and the verifier, you only have a constant gap. And one of the blockchain teams, called Solana, is actually going that way. So they're using repeated SHA-256, I believe, as the VDF, and in order to allow for parallel verification, they have these checkpoints. And then they use GPUs for the massively parallel verification. And the assumption there is that, you know, Intel is very good at designing SHA-256 instructions. What I can tell you is that from the little that I know about the hardware from studying it for the last few months, I'd actually be very surprised if Intel has an optimal implementation.
Like, initially, I was thinking that, you know, 2,000-bit modular multiplication would take us maybe 10 or 20 nanoseconds. Now we've got it down to 2 nanoseconds. And there are these, you know, pretty fancy optimizations, which I don't expect Intel to do necessarily. You know, you have a trade-off between latency, power, and area, and Intel is trying to find something reasonable. We only want to optimize latency. Hey, Justin, just a quick question about the VDF. So one of the inputs is a difficulty setting. So can you talk just a little bit about how that's calculated? And maybe possibly what the implications are? Is there an attack vector there, possibly? Could it be manipulated? Right. So the A_max assumption that I've been talking about, we believe we can have it hold for at least five years. So for at least five years, we won't need any more ASICs, and we won't need a difficulty adjustment scheme. And once we have the length of the RANDAO epoch, which is probably going to be something like 17 minutes, and we know A_max, so for example, A_max equals 10, then we just set the difficulty to take 170 minutes on the commodity hardware. So that's all it is. And if we want to move to a more long-term solution where we have dynamic difficulty adjustment, where, for example, new hardware enters the market and we want the difficulty to go up organically, we would need to have a difficulty adjustment mechanism there. It does introduce some complexity, so there are some trade-offs there. I mean, I wrote an ethresear.ch post on mitigating the main attack, and I'm happy to link it to you. Just one other question. Say the VDFs somehow go offline, all of them. Does that change the assumptions? Do you have a way to account for the fact that now the randomness is coming from RANDAO? Yes, so in this slide, we're basically recursively using the randomness to seed the next RANDAO.
And if the randomness doesn't come on-chain soon enough, which should not happen, but let's say there's some sort of exceptional condition, then we just use the blue die as opposed to the red die. So we fall back on RANDAO. At the application layer, things are actually better. So in the opcode, you will specify the epoch, and it will return either no randomness, like 0, 0, 0, or it will give you the randomness. And so you can design your application in such a way that it will just retry until it eventually gets the randomness. Okay, thank you. Hello. Is there any sort of incentive for the people running the VDF ASICs to stay honest, other than goodwill and the fact that they're probably highly involved in the scene as well? Right. So I guess we'll make sure to widely distribute the ASICs. And one way to do that is to just give them away for free. There will be in-protocol incentives. So the easiest incentive to implement is to provide a reward for the block proposer who includes the randomness and the proof. We could also directly incentivize the evaluator by giving them a reward. And we do have schemes for that, but that basically introduces complexity and more burden on the beacon chain. So I think it's reasonable. If we have thousands of VDF rigs distributed around the world, the Foundation will run rigs, exchanges will run rigs, investors will run rigs, enthusiasts will run rigs. And we just need one of them to be online. I think that's not too bad. And the incentivization for the block proposers actually incentivizes sophisticated block proposers to run a VDF rig themselves, and maybe to overclock it just a little bit so that they'll be slightly ahead of everyone else and they'll get the reward. Thank you. One more question. So you suggested that the VDF rigs would upload, or would have their output inside of, the block that they propose, correct?
That the proposers would have the VDF output inside of what they propose and get rewarded for that, because they would submit it. Would that imply that these VDF ASICs or rigs are running concurrently with the validators? So anyone can be a VDF evaluator. You don't have to be collateralized. You don't have to be a validator. But in the special case where you are a validator and you want to earn a little bit more money and you are sophisticated, you can run the hardware in parallel, and you can try and make it run slightly faster, make it cool a little bit better. One of the things that would be cool, on this question, is if we could have one of the rigs in a satellite around the world, so at least we'd have this one node that's online. Thank you. I'm out of time. I'm so sorry. I'm more than happy to speak about this all day long. So please come to me after the talk. Thank you.