Today I'm going to tell you a little bit about different methods for constructing randomness beacons from different cryptographic primitives. This is heavily focused on my work with co-author Ignacio Cascudo, as well as some more recent work with Carsten Baum, Rafael Dowsley, Akira Takahashi, and Elena Pagnin. So let's start. First I'll give you a quick review of what randomness beacons are and what we expect from them, then I'll survey these different methods for constructing them, with some considerations about the advantages and disadvantages of each.

So what is a randomness beacon, basically? We want a decentralized protocol that outputs uniformly random, unpredictable, and unbiasable values. We also want the outputs to be publicly verifiable, and obviously we want some sort of guaranteed output delivery, so we don't have to worry about the availability of our random outputs. To illustrate this with a Japanese theme: say we have our nice Tokyo Tower right here, and Doraemon, Anpanman, and the baker want to learn some random values. They can go to the tower, and at each point in time they get a new random value, in a way that they all know they are getting the same random value, and by trusting the tower they know it is actually secure.

Why do we care about this? We can use it, for example, to build proof-of-stake blockchains, where it is a very important building block. Say we have a PoS blockchain with Totoro, Porco Rosso, and Chihiro each holding some relative stake (Chihiro is the rich one here), and they want to select who generates the next block. We can easily do that with a randomness beacon that outputs values linked to each of these coins; the owner of the selected coin gets to pick the transactions in the next block and generate that block. Again, we need all of the properties I listed before in order to prove that such a protocol actually achieves consensus, for example in the GKL model.

Now, some important parameters we care about when constructing such randomness beacons. First, the setup: do we need a PKI, a common reference string, a trusted third party, or what? Then, the sort of randomness we get out of the beacon: do we get biased randomness, or something uniformly pseudorandom? How clean is the randomness our beacon protocol can produce? Of course, the complexity: how efficient or inefficient is it in terms of communication and computation? And finally, the security guarantees: can we actually show that these protocols survive composition with other protocols, for example blockchain consensus or whatever decentralized application is consuming the outputs of the beacon?

One of the first approaches for constructing such beacons is so-called publicly verifiable secret sharing (PVSS), which involves a dealer who shares a locally selected random value in such a way that every party can verify that the shares are valid. Later on, everyone reveals the values they have shared, and then we do the standard coin-tossing trick of XORing all of the values together, or extracting randomness from them in a more efficient way, which is actually the approach we take in some of the best constructions right now.
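To make this flow concrete, here is a minimal Python sketch of the share-reveal-XOR pattern, using plain Shamir sharing as a stand-in for the PVSS layer. The public verifiability, the encryption of shares, and all networking are omitted, and every name and parameter here (`share`, `reconstruct`, the modulus `P`) is illustrative rather than taken from any of the schemes discussed.

```python
# Toy share -> reveal -> XOR beacon; Shamir backup shares stand in for PVSS.
import secrets

P = 2**127 - 1  # a Mersenne prime, serving as a toy field modulus

def share(secret: int, n: int, t: int) -> list[tuple[int, int]]:
    """Shamir-share `secret` among n parties with a degree-t polynomial."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(points: list[tuple[int, int]]) -> int:
    """Lagrange-interpolate at 0 to recover a dealt secret from t+1 shares."""
    secret = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

n, t = 5, 2
values = [secrets.randbelow(P) for _ in range(n)]  # each party's local value
dealt = [share(v, n, t) for v in values]           # round 1: everyone deals

# Round 2: dealers open their values; a dealer who aborts is reconstructed
# from any t+1 backup shares, so withholding a value cannot bias the output.
opened = values[:-1] + [reconstruct(dealt[-1][: t + 1])]

output = 0
for v in opened:
    output ^= v  # the classic coin-tossing combiner: XOR everything together
print(hex(output))
```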
Basically, the setup is some PKI, and you get something that is uniformly pseudorandom, because these constructions are computationally secure rather than information-theoretic, which lets us use certain techniques to improve efficiency. Until 2016 or 2017, the state of the art was the scheme by Berry Schoenmakers, which had complexity O(n³), where n is the number of parties. We then figured out how to get that down to O(n²) in SCRAPE, and in ALBATROSS we can actually generate not just one random output but O(n²) random outputs with overall complexity O(n²), which amortizes to constant complexity per random value. Recently we also improved the concrete complexity, the constants of ALBATROSS, in a paper called YOLO YOSO, which is geared towards the You Only Speak Once (YOSO) model of MPC, by constructing a better zero-knowledge proof of share validity. Notice that in such schemes you basically need to publish encrypted versions of your shares and then prove in zero-knowledge that those encrypted shares are valid shares of a certain value; the main improvements in this line of work have always come from better methods for doing exactly these zero-knowledge proofs.

The security guarantees you can get with this sort of scheme are as good as you could want: universal composability in the case of ALBATROSS, and the follow-up works can also be made universally composable, which means you can safely compose these protocols with other cryptographic protocols that consume the randomness. Applications are basically anything that needs randomness, since we get very clean outputs, in the sense that there is no bias when using these constructions. A real-world example is the first era of Cardano, which used the SCRAPE scheme we published in 2017, although the improvements in ALBATROSS, in Mt. Random (which I'll cite later), and in YOLO YOSO now give even better performance. Just as a reference, already with SCRAPE we could run this with tens of thousands of parties; I think that back then, using a Haskell reference implementation by the Cardano team, we could run it with up to 50,000 parties. With the improved constants of ALBATROSS and YOLO YOSO, I believe that could scale even further in terms of the number of parties involved in the protocol.

So PVSS is interesting: it gives you very clean randomness, but it has a clear caveat. You need to communicate and compute quite a lot in order to generate any random value. You need at least two rounds of communication in which all parties (or at least a majority) participate: one where you publish your publicly verifiable shares, and another where you publish the value you shared, or where a majority of parties cooperate to reconstruct the value of a party who failed to reveal it.
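Before moving on, to make the share-validity checking at the heart of SCRAPE concrete: valid share vectors (p(1), ..., p(n)) of a degree-at-most-t polynomial form a Reed-Solomon code, and a random word of the dual code is orthogonal to exactly those vectors. Below is a toy version of that test applied to plaintext shares; the actual protocol runs the check "in the exponent" against the encrypted shares, as part of the zero-knowledge proof machinery described above, which this sketch does not model.

```python
# Toy SCRAPE-style low-degree test: accept iff the shares lie on a
# polynomial of degree <= t (bad shares pass only with probability ~1/P).
import secrets

P = 2**127 - 1  # toy prime field, as before

def poly_eval(coeffs: list[int], x: int) -> int:
    return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

def scrape_check(shares: list[int], t: int) -> bool:
    n = len(shares)
    f = [secrets.randbelow(P) for _ in range(n - t - 1)]  # random f, deg <= n-t-2
    total = 0
    for i in range(1, n + 1):
        v = 1  # dual-code weight v_i = prod_{j != i} (i - j)^{-1}
        for j in range(1, n + 1):
            if j != i:
                v = v * pow(i - j, P - 2, P) % P
        total = (total + v * poly_eval(f, i) * shares[i - 1]) % P
    return total == 0  # inner product with the dual-code word (v_i * f(i))_i

n, t = 8, 3
good = [poly_eval([5, 7, 11, 13], x) for x in range(1, n + 1)]  # degree 3 = t
bad = good[:]
bad[0] = (bad[0] + 1) % P  # corrupt a single share
print(scrape_check(good, t), scrape_check(bad, t))  # True False
```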
So we want to consider other options that give us randomness with better efficiency. One such option is verifiable random functions (VRFs). These are basically pseudorandom functions with an attached proof that you have evaluated the function on a given input under a secret key you hold, and anyone can verify this proof using a public key you published in advance. As far as I know, this was first proposed in the Algorand paper, and we then made some improvements on that approach in the Praos paper, which I'm going to tell you a little bit about.

First of all, the setup we get here is a PKI: the parties need to register their VRF public keys for verification, and you need a random nonce to start the protocol execution. And there's a caveat here: the adversary can always add some bounded bias to this sort of beacon, because the adversary can adaptively decide not to publish a VRF output in order to bias the final random value. The computation and communication complexities are quite nice, because this is basically a non-interactive protocol: everyone publishes a VRF output, and at the end you extract randomness from all of the VRF outputs published by the parties executing the protocol. And the security guarantees you can get from this sort of scheme are, again, universal composability, as shown in the Genesis paper, which did a UC analysis of our Ouroboros Praos construction. Finally, applications that can deal with some bias can use this sort of dirty randomness, and one such application is actually PoS leader election: those protocols can tolerate bias as long as it is bounded and the other parameters of the consensus protocol are adjusted accordingly. Algorand and the current era of Cardano run this, Concordium runs this, and other blockchain protocols run similar schemes.

One interesting thing to notice is that for the best analysis bounding this bias to work, we cannot use a regular VRF. A regular verifiable random function has its output pseudorandomness defined with respect to an honestly generated key pair, but in these scenarios the adversary has generated some of the key pairs, which could induce bias in the VRF outputs. To counter this, we need a VRF with an extra property: even under maliciously generated keys, you still get a pseudorandom output. This lets us prove better bounds on the bias of the randomness we extract from the VRF outputs.

Another interesting idea following the path of VRFs is what drand does, with threshold VRFs. Of course, as setup you need a distributed key generation protocol, but then you get something uniformly pseudorandom out of this beacon with very low communication and computational complexity. The security guarantees, as shown in the GLOW paper, are standalone security (as far as I know; maybe some new result has come out), although I believe this could probably be made UC secure if you use the right distributed key generation and some variations of the threshold VRF. The caveat here is that at some point you're going to run out of entropy, so you should have some way of refreshing the entropy in the inputs given to the threshold VRF. Basically, you're doing a threshold VRF evaluation on a certain input and generating the pseudorandom output together with proofs that it is a valid threshold VRF output, but you should be able to periodically inject new entropy into the system, in order to guarantee that what you get is still pseudorandom and doesn't degrade too much. Most applications that need randomness can use this, although if you need to seed something, you would want cleaner randomness, let's say. The real-world example is of course the drand project, which everyone here must be familiar with, so I don't think I need to say much about that.
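As a structure-only illustration of a threshold VRF round, here is a toy sketch under loudly stated assumptions: parties hold Shamir shares of a key, each publishes a partial evaluation on the round input, and any t+1 partials are Lagrange-combined into a unique output that is hashed into the beacon value. Real schemes such as drand's and GLOW's compute a BLS-style signature on a pairing-friendly curve, which is what makes partials publicly verifiable and the key unrecoverable; the bare-field arithmetic below has neither property and only shows the round structure.

```python
# Toy threshold-evaluation round: NOT secure, structure only.
import hashlib
import secrets

P = 2**127 - 1

def h2f(data: bytes) -> int:
    """Hash the round input into the toy field."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

n, t = 5, 2
k = secrets.randbelow(P)  # group key; produced by a DKG in a real deployment
coeffs = [k] + [secrets.randbelow(P) for _ in range(t)]
key_shares = {x: sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P
              for x in range(1, n + 1)}

round_input = b"round 42 || previous output"
partials = {x: s * h2f(round_input) % P for x, s in key_shares.items()}

def combine(sub: dict[int, int]) -> int:
    """Lagrange-combine any t+1 partial evaluations at x = 0."""
    out = 0
    for xi, yi in sub.items():
        lam = 1
        for xj in sub:
            if xj != xi:
                lam = lam * (-xj) % P * pow(xi - xj, P - 2, P) % P
        out = (out + yi * lam) % P
    return out

subset = {x: partials[x] for x in (1, 3, 5)}        # any t+1 partials work
assert combine(subset) == k * h2f(round_input) % P  # unique => unbiasable
print(hashlib.sha256(combine(subset).to_bytes(16, "big")).hexdigest())
```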
And then we come to the latest rage, which is the whole idea of time-based primitives: cryptographic primitives ensuring that a certain function can only be evaluated, or a certain ciphertext decrypted, after a certain amount of time. To construct this sort of beacon you need as setup a CRS consisting of parameters for a verifiable delay function (VDF) or a time-lock puzzle (TLP). Those parameters are trapdoorable, as we know, so care must be taken in generating this common reference string. But if you manage to generate it, and you understand the parameters of these VDFs and TLPs well, then you can get uniform randomness. It's quite nice: very low complexity, basically non-interactive randomness generation phases, and very interesting corruption thresholds. One can actually show that in a synchronous network you can get randomness with guaranteed output delivery under a dishonest majority, which is a bit surprising, given that this is impossible without assuming such delay functions.

Now, this can be shown to be universally composable. We did this in the CRAFT paper, where we constructed a VDF and a time-lock puzzle that are universally composable, applied the folklore VDF-based beacon construction, and proved that it is actually secure. With the TLP construction we also get somewhat better performance in the optimistic case where parties actually cooperate in revealing what they hid inside their TLPs, rather than just dropping a TLP and waiting for others to solve it; we achieve that via a public verifiability property of our TLP construction.

A big caveat of these constructions is that they are based on the sequential hardness of certain computational problems that we don't yet understand very well, for example iterated squaring. We still don't know how the concrete parameters of these problems relate to their average-case complexity and to the actual physical delay we get from solving an instance. There are a number of efforts towards fixing this, for example the whole VDF Alliance effort of building ASICs that solve iterated squaring very fast and then hoping that no one builds a better ASIC. Still, it would be nice to have better foundations for choosing concrete parameters in this setting. Even so, it is extremely interesting that you get cheap randomness under very harsh adversarial conditions.
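To make the sequential-hardness assumption concrete: the canonical delay problem is iterated squaring in an RSA group, where computing x^(2^T) mod N is conjectured to require T sequential squarings unless you know the factorization of N. Here is a minimal sketch with toy parameters (the tiny primes and the delay T are purely illustrative), without the succinct proofs that Wesolowski- or Pietrzak-style VDFs add so that verifiers need not redo the squarings.

```python
# Iterated squaring, the delay function behind most VDFs/TLPs.
p, q = 10007, 10009                 # stand-ins for large secret primes
N, phi = p * q, (p - 1) * (q - 1)
T = 100_000                         # delay parameter: T sequential squarings
x = 5                               # beacon input, assumed coprime to N

def eval_slow(x: int) -> int:
    """What the public does: T squarings, believed inherently sequential."""
    y = x % N
    for _ in range(T):
        y = y * y % N
    return y

def eval_trapdoor(x: int) -> int:
    """What the holder of the factorization can do: reduce the exponent."""
    return pow(x, pow(2, T, phi), N)

assert eval_slow(x) == eval_trapdoor(x)
print(eval_slow(x))
```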
Finally, I'd like to tell you a little bit about a new approach that we just put on ePrint. It actually started from an idea from some people at Protocol Labs and some people at Cryptosat, who asked: why not use physical delay, instead of sequential computation, to obtain the delay in time-based cryptography? The idea is basically to listen to Einstein and remember that, by special relativity, nothing travels faster than the speed of light, so any kind of communication rests on this very basic foundation of physical delay. If we were to exploit this delay in earthly communication, though, we'd still have the problem that devices can easily be tampered with and keys extracted, which could be used to forge whatever proof of communication we rely on. So what these guys came up with was: let's go to space.

They're actually launching a constellation of satellites for a number of cryptographic applications, leveraging the fact that once a satellite is up there, it's quite hard to tamper with the hardware, extract keys, and so on. Another side effect of putting satellites up there is that communication between base stations on Earth and the crypto-satellites takes a while, due to the special relativity bound. So their suggestion was: why don't we use this to build a time-lock puzzle or a verifiable delay function, deriving the delay lower bounds from actual physical delay, which is very well understood; we know nothing goes faster than the speed of light.

It turns out you can construct, say, a simple verifiable delay function, assuming that the satellites have a PKI: each has a secret key for a digital signature scheme, and everyone verifying the VDF knows the public keys. A toy example of this construction: a base station on Earth beams an input for the VDF up to the first satellite, which signs that input and sends it to the second satellite, which signs the input and the previous signature, and so on to the third satellite, which signs this chain of signatures and the input, all the way to the last satellite, which generates a signature on all previous signatures concatenated with the input and finally beams all of the signatures back down to a base station. If these are, for example, unique signatures, you can extract the final output from these individual signatures and verify that it is indeed an output of this VDF by verifying the signatures themselves. This is obviously naive, because the output of the VDF grows with the number of satellites involved. So a big part of making this work was coming up with a new, improved ordered multisignature scheme that compresses all of these signatures into one constant-size representation, while allowing verifiers to check not only that all of the satellites signed the message, but that they signed it in a certain order. We came up with a new scheme improving on this notion, which was first introduced by Boldyreva and others, and using it in the simple recipe above we get a VDF with constant-size proofs and constant bottleneck communication complexity. So it's a bit out there, but if you want to believe it, the Cryptosat people are working on deploying this right now; they've already launched their first satellites, but you obviously need more of them to make this work with meaningful delays.

One interesting theoretical attack, though: remember our teleportation guys. This picture is stolen from Claude Crépeau's website, showing Jozsa and Bennett here in Odaiba at the teleport station, close to this venue. You can use certain entanglement-based attacks to cheat on communication over very large distances. Obviously you cannot break the special relativity lower bound, but you can send noisy information over long distances in less time than expected, and this has actually been used to break, for example, multi-prover zero-knowledge proofs, in an attack first proposed in 2012 and then improved in various ways. So that will have to be taken care of at some point.
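For concreteness, here is the naive chained-signature construction just described, as a toy sketch using Ed25519 via PyNaCl. This is an illustrative stand-in only: Ed25519 signing is deterministic but not a strictly unique signature scheme, none of the satellite communication is modeled, and the linear growth of the proof is exactly what the ordered multisignature replaces with a constant-size aggregate.

```python
# Naive chained-signature "VDF": proof size grows with the constellation.
# Requires PyNaCl: pip install pynacl
from nacl.exceptions import BadSignatureError
from nacl.signing import SigningKey

satellites = [SigningKey.generate() for _ in range(4)]  # one key per satellite
pki = [sk.verify_key for sk in satellites]              # published verify keys

def evaluate(vdf_input: bytes) -> list[bytes]:
    """One pass through the constellation; each hop adds light-travel delay."""
    sigs, prev = [], b""
    for sk in satellites:
        prev = sk.sign(vdf_input + prev).signature  # sign input || prev sig
        sigs.append(prev)
    return sigs

def verify(vdf_input: bytes, sigs: list[bytes]) -> bool:
    """Replay the chain and check every hop against the PKI."""
    prev = b""
    for vk, sig in zip(pki, sigs):
        try:
            vk.verify(vdf_input + prev, sig)
        except BadSignatureError:
            return False
        prev = sig
    return len(sigs) == len(pki)

proof = evaluate(b"beacon epoch 7")
print(verify(b"beacon epoch 7", proof), len(proof))  # True 4
```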
One final approach I'd like to mention: why not go hybrid and, instead of using one beacon, use a combination of multiple beacons, each with its own trade-offs? In the Mt. Random paper we proposed a three-tiered approach. We start with a base layer based on PVSS, which is expensive; it takes a while to run, because we have that O(n²) complexity, but in the end we get O(n²) fresh random values, which can then be used to periodically reseed a threshold VRF layer that runs much, much faster than the base PVSS layer. That layer can in turn output new randomness to reseed a VRF layer that runs faster still. You obviously have a degradation of randomness quality as you go up the mountain, but in the end you can choose the layer that outputs randomness of the quality you need, at the speed you need, finding the optimal trade-off for each application. The instantiation proposed there is based on the Praos VRF beacon with the VRF resilient to malicious key generation, the drand/GLOW approach for the threshold VRF, and a new version of our ALBATROSS protocol for the PVSS layer that allows you to gradually release the random outputs produced in a huge batch, instead of releasing the whole batch at once. Although by now I believe we could instantiate this better using the YOLO YOSO PVSS, with its better zero-knowledge proofs of share validity. One question I find interesting, both when combining several beacons into one and when using beacons that have some bias, such as VRF-based beacons, is how to optimally extract randomness and what the actual bounds are that you get. We don't really know that yet, so I think that's an interesting open question.

Just to summarize before I finish: we should always keep in mind that each of these beacon constructions has a different efficiency and security trade-off. We can always do PVSS to get large batches of clean randomness, at bad complexity. We can get cheaper randomness, but with some bounded bias, using VRFs. We can do threshold VRFs, which have a somewhat more complicated setup but sit somewhere in the middle; that's why we put them in layer two of the hybrid approach. And if you understand the parameters of VDFs and TLPs well enough, you can get something almost magical: uniform randomness under very bad adversarial conditions, although understanding those parameters remains an important problem. As a future perspective, as I said: why not combine these different approaches and figure out optimal ways to extract randomness from each of these different kinds of beacons, even using their different outputs together, to obtain something optimal for different applications with different trade-offs. So that was it, and thanks for attending.

Thanks for the talk. I haven't read the CaSCaDE paper, but a very specific question about the signatures: how does it relate to multisignatures on different messages? Basically, you now have a way to verify, very efficiently, signatures made by different satellites, so I still relate this to the multisignature setting. How does it relate to the two notions? A multisignature is on the same message, an aggregate signature can be on different messages, and in your case the messages differ, because you include the previous signature.
No, no, in the actual construction (I can go back), in the actual construction using the ordered multisignature, each satellite just signs the input and re-aggregates. So it's basically a multisignature on the same message, but it's an ordered multisignature, because it also allows you to verify the order of signing.

Okay, so that's the special property. Yes, that's the property defined in an old paper by Boldyreva and co-authors, and I don't think there was much work on that notion after it was defined and the first construction came out, because the first construction is actually quite efficient. But we managed to get a version that does not need pairings; the original construction uses pairings, and due to the constrained resources here we wanted something from dlog-style assumptions with no pairings, plus the extra property, this order verification, on top of the multisignature aggregation. We were basically trying to be as standard as we could in terms of assumptions, and as simple as possible because of the constrained resources. I think the original idea from the Cryptosat and Protocol Labs guys was to do basically a SNARK computation in each satellite, and we went: no, let's get back to basics, something very simple that you can actually implement on, you know, a modest little ARM Cortex. Okay, thank you.

So the main question I have: you mentioned having to periodically reseed the entropy, but realistically, on what timeline would you actually need to reseed? That's an excellent question, and one of the things that I think needs more work: figuring out exactly when to reseed, and what the degradation is, in more concrete terms. We know we cannot keep going forever without reseeding the entropy, that's simply impossible, but when does it degrade enough? What are the actual concrete bounds? I don't know; I don't think anyone has done that work yet.

And actually, about the reseeding: for drand we had the issue that if we were to reseed, it would change the public key. It would be very nice to have a stable public key, or to be able to refresh the secret, but that's not something we know how to do. I think you can reseed the input to the threshold VRF, right? Instead of just evaluating the threshold VRF on the previous output, you throw that away at some point, take an output from another beacon that gives you uniform randomness, and restart from that. Okay, but then it doesn't provide you with forward secrecy, right? If more than a threshold of nodes get compromised, the whole network still gets compromised. Not necessarily; it depends on how you realize the underlying beacon. If you make it forward secure in some way, or proactively secure, you can resist that kind of attack. Wonderful, thank you very much. Thank you.