I need to start by saying I'm at NIST as a contractor from Strativia; thanks, Jolin, for inviting me. I also want to say that this is an impromptu presentation. The invitation came, I don't know, yesterday or two days ago, so that's my excuse in case I go a little over time. This presentation will be somewhat different from the previous ones in the sense that the goal is not to go into many technicalities, although I'll mention a few technical details; it's more to present the NIST project on randomness beacons and some of our perspectives. Okay, so let me start with an introduction. What NIST has seen as a randomness beacon since around 2011, as it tried to define what it wanted to promote as public randomness for the public good, is a service that produces timed outputs, well defined in time, of fresh public randomness. There are a lot of keywords here. At a high level, looking at these words in blue, these are some of the properties we want. We want periodicity: the beacon produces what we call pulses, streams of information that contain some randomness; in the case of the NIST beacon implementation, these come once per minute. Each pulse needs to have a certain amount of randomness, of which the NIST beacon, for example, has 512 bits. Each pulse needs to be indexed so that we know exactly which pulse we're talking about, with no ambiguity; it's indexed in terms of timestamps, and it needs to be signed as well. Any past pulse needs to be publicly accessible forever. And the sequence of pulses should form a hash chain, so that we keep a well-defined history and sequence. Now, what is this good for, and maybe not good for? Broadly speaking, we think the main application is to enable public auditability of randomized processes.
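As a rough illustration, the properties just listed can be sketched as a pulse record with hash chaining. All field names below are hypothetical simplifications of mine, not the actual format (that is defined in the NIST reference); this is just a minimal sketch of the idea.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Pulse:
    # Hypothetical, simplified field names; the NIST reference defines the real pulse format.
    index: int          # sequential pulse index, so there is no ambiguity about which pulse is which
    timestamp: str      # release time; one pulse per minute in the NIST implementation
    rand_value: bytes   # the fresh randomness (512 bits in the NIST beacon)
    prev_output: bytes  # output of the previous pulse, forming a hash chain
    signature: bytes    # signature binding all of the above

    def output(self) -> bytes:
        """A hash over the pulse's contents (simplified stand-in for the real output value)."""
        h = hashlib.sha512()
        h.update(f"{self.index}|{self.timestamp}|".encode())
        h.update(self.rand_value + self.prev_output + self.signature)
        return h.digest()

    def chains_from(self, prev: "Pulse") -> bool:
        """Check the hash-chain link: this pulse must commit to the previous pulse's output."""
        return self.prev_output == prev.output()
```

Because each pulse commits to the previous pulse's output, any tampering with a past pulse breaks every link after it, which is what keeps the history well defined.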
So whenever randomization needs to be used but later needs to be verified, it's nice to have a service that allows that to be done. It allows for coordination between many parties, and it also allows you to prove that something happened after a certain point in time, if you trust that a particular pulse did occur at the time it is timestamped for. And we always like to give the little disclaimer that this is not good for selecting secret keys. Even though the randomness is supposed to be of the best possible quality, it should only be used for certain use cases. This slide would typically come near the end of the presentation, but because I'm afraid of not getting there, I thought of putting it here. Also, a note: I think working on applications of public randomness is something that would be very nice for this community to work on overall, because we're not there yet, I believe. But anyway, here are some examples of conceivable applications that have been thought of over time. For example, clinical trials: somebody runs a study, and you need some randomization in it to make sure the study is sound. Then you get very good results, or maybe bad results, and people can ask: did you really use randomness, or did you cook the results in your favor? If you have publicly verifiable randomness, it can be used for this. We've thought of applications in legal metrology where, for example, certain instruments need to be verified, and using public randomness can insert a verifiable timestamp into applications that need randomness. Financial audits, for example: this is an application suggested by the Chilean beacon team. In Chile there's supposedly a process whereby public officials need to be selected to be audited, and it can be sensitive which people you are selecting, right?
So it's good to show that this is being done properly. Imagine, for example, that in a legal system you want to select judges for particular court cases; you could use a public randomness beacon for that. And then there are many applications in quality control, to select which samples you should audit, and things like that. Some historical notes about the project at NIST. By the way, I've been at NIST since 2017, so a lot of this is prior to my arrival there, although I visited in 2012. In 2011 was the start of the beacon project, basically based on the idea that public randomness should be a resource for the public good. In 2012 there was a big internal grant that allowed NIST to have this project for implementing a NIST beacon, and also to do research in physics to develop quantum random number generators. Between 2013 and 2018 there was a beacon service dubbed version 1.0. In 2015 there was a very interesting research result by the Physical Measurement Laboratory: a loophole-free Bell inequality test. It has to do with causality, and it's interesting with respect to randomness. In 2018 we started a new phase of the project. Also around that time the Physical Measurement Laboratory developed an in-house quantum random number generator, which today is used in the NIST beacon. And up to the present we have version 2.0 online. Since 2019 there's been a reference for randomness beacons, which is the main topic of today's talk. This has been in draft for quite a while, but this year we hope to publish the final version, no longer a draft, and also to publish the open-source code of the NIST beacon. I also want to mention that in 2019 we decided to be clearer about the different tracks of the project, so that the project is not confused with just the NIST beacon. The NIST randomness beacon is an implementation of this reference.
That's what we call track B; as track A we want to promote a reference for randomness beacons that other beacons can use. By the way, I actually wanted to mention this on the cover slide: another aspect of this presentation that is different from the others is that we're not presenting a decentralized system. We're presenting a randomness beacon that is centralized. But the vision is that it is used in an ecosystem with many beacons, so it's a kind of ad hoc decentralization. So, as track C, we want to promote deployment of various beacons by multiple independent organizations. Currently there's one in the U.S., at NIST; there's one in Brazil, by Inmetro; and there's one in Chile. We would like to promote applications of beacon-issued randomness; a lot needs to be done here. And we also support complementary initiatives on trusted randomness. Let me give an example that I think is really interesting: you can use quantum supremacy to enable certified randomness, which is a very counterintuitive concept. You can have randomness that you can prove has been freshly sampled, and you can base that on quantum supremacy. If you have a little seed, you can use it in a quantum computation such that you can prove the output was sampled after you had that seed. A really nice concept. Maybe someday we'll have a beacon with this type of publicly certifiable randomness. Okay, I'm already spending too much time here. Okay, the beacon reference. There's a report; you can check it online. It defines a format for the pulses, how a beacon should operate internally, and how you can use the fields of a pulse to make proper use of the randomness, and it has some security considerations. The final version will be published this year. How does this work?
Well, there's what we call a beacon app that has all the internals inside and periodically produces pulses. It posts them to a database, and then users, whenever they want, go there and fetch pulses, the latest one or old ones. How do they do that? There's a REST interface where you basically use a URL; you can use different URLs to get different elements, past pulses or the current pulse or the certificate, and so on. All of this is explained in the reference document. Let me just point out a few items here. Does this have a laser pointer? Oh, cool, I have one here. So, the reference for randomness beacons specifies that the local randomness for each pulse needs to include randomness from at least two RNGs. There's one that is local to the beacon app; imagine this basically as a computer, which also has a clock. We recommend that a second RNG be outside and protected, not within reach of tampering by the beacon app itself if it were to be hacked. The signing mechanism should also be protected. You can use multiple RNGs, and you can use external entropy. This is an interesting aspect: the external entropy is not there to improve the entropy of a pulse, but rather to provide an assurance of freshness. Every time you put in external entropy that can be verified in the outside world, you know that whatever randomness comes out after it could not have been pre-computed, even by a malicious beacon. That's because we have chained randomness. And then, of course, we need to have a time server. When we make security considerations, there are a number of things we want to consider: what happens if this part is malicious but that one is protected, what happens if the time server is malicious, what happens if the signature key is compromised, et cetera. Okay, what is a pulse, exactly?
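To make the REST idea concrete, here is a tiny sketch of URL construction for fetching pulses. The path patterns are my assumptions modeled on the NIST beacon 2.0 interface; the reference document specifies the normative API, so treat these as illustrative.

```python
# Sketch of REST URL construction for a beacon client.
# Paths are assumptions modeled on the NIST beacon 2.0 interface, not normative.
BASE = "https://beacon.nist.gov/beacon/2.0"

def latest_pulse_url() -> str:
    # The most recently released pulse.
    return f"{BASE}/pulse/last"

def pulse_by_index_url(chain: int, index: int) -> str:
    # A specific past pulse, addressed by chain number and pulse index.
    return f"{BASE}/chain/{chain}/pulse/{index}"

def certificate_url(cert_id: str) -> str:
    # The certificate for the beacon's signing key.
    return f"{BASE}/certificate/{cert_id}"
```

A client would then issue an ordinary HTTPS GET against one of these URLs and parse the returned pulse.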
Well, a pulse has many fields, and in the interest of time I can't explain them all. Let me just mention the essential randomness parts. There are actually two fields that we call "random" something. There's what we call the local random value, which is sampled as the hash of the outputs of at least two RNGs. And it's pre-committed one minute in advance of being released: it's pre-committed in one pulse before the actual value is released in the next pulse. The reason for doing this, a feature of version 2 as compared to version 1, is that it is one of the features that allows combining randomness from different beacons. Essentially you can use it to do a typical coin-flipping protocol: you pick several beacons and get their pre-commitments, and once you have all of them, you can safely use the randomness they reveal after the fact, because a malicious beacon could not have waited to learn the randomness of the honest beacons; that's the point of this two-stage process. Then we have the actual random output value, which is a hash of all the other fields, including the signature, timestamps, indices, et cetera. And it's fresh at the time of release; actually it's fresh within seconds, because even though the local random value it shows is from a minute before, it also hashes the pre-commitment of the next local random value, which was just selected. So this is fresh, and this is the actual randomness that should be used by applications. Let's see. Could you tell me how much time I have, just to see what I should skip? Okay, perfect. So how does this work, more or less, with a lot of details hidden in the ellipses? We have a local random value that is the hash of various raw random values, from at least two RNGs but possibly more. Then you pre-commit it; that's the C_i you see here.
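The two-stage combination just described can be sketched as a standard commit-then-reveal protocol. This is my own minimal illustration, with SHA-512 as the hash; the reference's actual field layout and hashing differ.

```python
import hashlib

def commit(local_value: bytes) -> bytes:
    """Pre-commitment published one pulse before the value itself is revealed."""
    return hashlib.sha512(local_value).digest()

def combine_beacons(revealed: list, commitments: list) -> bytes:
    """Combine randomness from several beacons, rejecting any beacon whose
    revealed value does not match the pre-commitment collected earlier.
    Because all commitments were fixed first, no beacon could have chosen
    its value after seeing the others'."""
    for value, c in zip(revealed, commitments):
        if commit(value) != c:
            raise ValueError("revealed value does not match pre-commitment")
    h = hashlib.sha512()
    for value in revealed:
        h.update(value)
    return h.digest()
```

A user collects every beacon's pre-commitment first, waits one pulse for the reveals, and only then combines; a single honest beacon in the set is enough to keep the result unbiased.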
So in pulse i you have a pre-commitment to what you're going to reveal in the next pulse as the local random value. And then the actual random output of pulse i is the hash of everything over here, and you have this relation. Something also relevant from the perspective of hash chaining is that the random output value of a pulse is then also included in the next pulse, in a field for the output of the previous pulse. This is what gives us chaining; again, there's this little relation here as well. So this pretty much gives you some insight into how the random values appear in pulses. Here's a little snippet of a pulse. As I mentioned, each pulse is indexed with an index and a version; there's a URI, and there's the time. Here are the two random values I just spoke about, here's the signature value and the pre-commitment, and here is the output value of the previous pulse. And then we actually have the output value of the previous hour, of the previous month, and of the previous year, to allow what we call a skip list, which is an efficient chaining of pulses. Okay, some features of version 2; in a longer presentation each of these elements would be a slide. The skip list is what I just mentioned. You can check correct chaining between pulses; by correct we mean that there's a single possible history between two pulses. You can use an efficient skip list even if the pulses are millions of pulses apart, because you can go to the first of the hour, the first of the month, the first of the year, then go one year at a time, and then connect back down to the other pulse. Combining beacons: as I mentioned, you can do a kind of coin-flipping protocol using the pre-commitments of various pulses. The reference specifies various timing requirements, some of which we call promises.
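The skip-list traversal can be sketched as repeatedly taking the biggest anchor jump (first pulse of the year, month, day, hour) that does not overshoot the target. This is my simplification: I also include a first-of-day anchor, which the snippet above did not list, to keep hop counts small; the actual anchor semantics are defined precisely in the reference.

```python
from datetime import datetime, timedelta

# Sketch of skip-list traversal between two pulses, at one pulse per minute.
MINUTE = timedelta(minutes=1)

def anchors(t: datetime):
    # First pulse of the current year / month / day / hour, biggest jump first.
    yield t.replace(month=1, day=1, hour=0, minute=0)
    yield t.replace(day=1, hour=0, minute=0)
    yield t.replace(hour=0, minute=0)
    yield t.replace(minute=0)

def skip_path(late: datetime, early: datetime) -> list:
    """Pulses visited walking from `late` back to `early`, always taking the
    biggest anchor jump that stays at or after `early`."""
    path = [late]
    t = late
    while t > early:
        for a in anchors(t):
            if early <= a < t:
                t = a
                break
        else:
            t -= MINUTE  # adjacent-pulse hop when no anchor helps
        path.append(t)
    return path
```

Even for pulses more than a million minutes apart, the path stays around a couple of hundred hops instead of a million, which is the efficiency gain the skip list is after.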
The beacon operator promises that a pulse will not be released before its timestamp, that it will not be generated less than a minute before its timestamp, and things like that. There's a beacon interface in the reference that specifies how you can get all the information from the beacon; this is actually one of the things that has led the reference to take its time to be published, and we're just finalizing it. External values: this is something interesting, maybe from the perspective of combining various beacons. Version 2 of the reference has a field to incorporate external values. As I mentioned in a previous figure, this allows us to ensure freshness. We could conceivably insert into this field, say every hour or every day (it could also be every minute), a random value from drand, for example. And that would show that even if this beacon were completely broken, it could not have pre-computed things more than a day in advance, right? We can imagine a cross-relation between many beacons that would ensure this for all of them. And then we recognize that future guidance is needed for some aspects: how to use a beacon, how to combine external values, et cetera. Okay, I'm going to skip the example, but it basically shows that instead of walking through a million consecutive pulses, you can go step by step, and in a hundred or so steps you can connect two pulses that are that far apart. As a conclusion, just some final remarks. The beacon project at NIST really arose out of the belief that there's great potential for public randomness to serve the public good, such as enabling public auditability of randomized processes. The reference that NIST developed introduces new features for better interoperability, security, and efficiency.
And the vision is that there is an international ecosystem of randomness beacons. So even though it's a centralized proposal, it's meant for use within an ecosystem. Further guidance and promotion are needed, and that's something to be done over time. External values are not yet used in the NIST randomness beacon, but as I mentioned, it is conceivable to integrate them with other randomness beacons; I think that's an interesting community conversation to have. And stay tuned for the soon-to-come reference. Thank you very much for your time.

Sorry, one thing I did not get: is it publicly verifiable, or...?

We need to ask what "publicly verifiable" means, right? The randomness comes from RNGs, so it's not as in drand, where you know the output comes directly from a pseudorandom transformation. What you can publicly verify is all the hash chaining between pulses, and you can always check that the random output value proposed to be used by an application is indeed the hash of everything that came before and has been signed by the private key.

Right, so the first question is how to trust it.

Come again?

To start the hash chain, there is some initial, I don't know, genesis value, and that can be arbitrarily generated.

As can all pulses. This is a substantial difference from drand, right? drand is effectively deterministic, though unpredictable. Here the goal is that it's actually random every time; for example, we use a quantum random number generator, et cetera. So there is a degree of trust that you need to have in the beacon.

So you are saying there is no verifiability of the fact that these RNGs are being used? That's what I'm asking.

Yes, yes. There is no verifiability of that; it's a kind of trust. As in other systems as well, right?
Because there's no verifiability of a secret key either.

Sure, sure, sure.

For example, regarding the secret key, in each pulse you effectively implement a PRF, in a sense, informally speaking, because you are signing something with a key that is supposed to be secret. The signature itself you can verify, sure.

Okay, thank you.

You're welcome, thank you.

So, you mentioned you have a draft for a spec and everything. Is it really specifying every step of the generation of the beacons? Or could we say, oh, we can transform drand into a compatible beacon by using drand as a PRNG for the beacon and then just complying with the spec?

That's a very good question, and thank you for asking, because interoperability, I think, is a very interesting concept in this domain. It depends at which level you're talking about interoperability. The reference does specify a number of operations; effectively the output random value is a hash of a well-specified sequence of fields, and things like that, and I don't think drand has those fields in those forms. But maybe that's not an issue, because what matters more is the actual output value. Is it interoperable in the sense that we could use drand's output value in our external-value field, to somehow get that interoperability? A little more concretely, something that I think could be interesting to discuss, in terms of community interoperability, is what guidance we provide for a user that doesn't want to trust drand, doesn't want to trust any of these beacons, but is willing to trust the combination of them. What kind of guidance do we give on how they should combine things? A first thought, which could be insecure, would be: okay, just collect the randomness from everybody and hash it, right?
But then, okay, what if a malicious party knows that's what you're doing? They're going to wait for all the others to reveal their randomness and then produce their own, right? So they could bias the randomness. We have the pre-commitment in this reference exactly to address that. I don't think drand has that, but we could think about how to do it, yeah.

A final one, in the interest of time. Very quickly: it's all very nice, but why are you doing this? What's the interest of NIST in having this randomness beacon?

Yeah, so this started in 2011. Actually, it would be nice if I could open the website; we have a little historical note there, and I don't want to say the wrong name, but it was suggested to NIST in 2011 that public randomness would be a good resource for cryptography. And NIST, as the standardization institute, at least in the US, would be an appropriate agency to suggest and promote that. So I would say that, honestly, the most direct answer is that there is a vision that public randomness is a public utility and can be used for the public good. This project started as a way of promoting the development of randomness beacons, and it has done so at this stage by promoting a reference. It's not necessarily the approach that drand follows, for example, and I think that's good overall; it's good that there's a diversity of approaches. That's it; does that answer the question? It's a vision of public good, of being useful for the public good.

Wonderful, thank you very much.

Thank you very much for your attention.