OK, thank you. So this is joint work with Dan Shumow from Microsoft Research. Randomness is essential for cryptography, and as such, a secure PRG underpins the majority of cryptographic applications. At the same time, there's a growing list of real-world PRG failures, which bear out the fact that when an underlying PRG is broken, the security of the reliant application often falls apart with it. This makes a weakness in the underlying PRG a highly attractive target for attackers, and so it's of absolute importance that standardized PRGs, which are going to be very widely used, are designed to be as secure as possible.

The NIST SP 800-90A standard gives three mechanisms for building PRGs, each based on a different primitive: the CTR-DRBG, which is based on a block cipher; the HMAC-DRBG, which is based on HMAC; and the Hash-DRBG, which is based on a cryptographic hash function. In terms of the security properties of these generators, we of course need the output to be pseudo-random. But in practice we really want some stronger properties, which guarantee that in the event an attacker can compromise the state, we preserve as much security as possible. This is captured by the strongest security notions: forward security (or backtracking resistance, in NIST parlance) and prediction resistance. The standard claims that all three constructions possess these stronger properties. Now, earlier revisions of this standard contained the now-infamous Dual EC DRBG, and perhaps because of the attention that was paid to Dual EC, these other algorithms, despite their widespread deployment, have received surprisingly patchy formal analysis to date.
So what we mean by this is that while there have been proofs of the pseudo-randomness of the CTR-DRBG and the HMAC-DRBG, as far as we are aware, the stronger security properties claimed in the standard are still unproven. At the same time, there's a lot of flexibility in the standard in terms of optional inputs and implementation choices, and these flexibilities are often abstracted away in existing analyses. So in this work, we aim to address these gaps on two fronts. As an initial positive result, we prove the forward security of each of the NIST PRGs. This seems great, but then, taking a closer look at the flexibility in the standard, we find that the standard allows the PRGs to be used in ways which introduce attack vectors not covered by forward security. And what we find is that when these attack vectors are taken into account, security can break down in unexpected ways. I'm going to give two examples of this in this talk. So the key take-home message is that, while not catastrophically broken, the NIST standard allows these PRGs to be used in ways which may admit vulnerabilities.

Okay, so a PRG as usually defined in the literature has some state, which is initially constructed from a high-entropy seed. Every time we need pseudo-random bits, we call an algorithm generate, which takes as input the state and returns an updated state together with a fixed-length pseudo-random output R. Now, the NIST PRGs are specified differently, in that the standard allows one to request outputs of variable length in each call to generate, and these outputs can be large: the standard allows up to 2^19 bits to be requested in each call to generate.
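The contrast between the two interfaces can be sketched in a few lines of Python. This is a toy illustration using SHA-256, not the NIST algorithms; for simplicity the toy ratchets the state on every block, which is not how the NIST designs actually work internally. All names here are illustrative.

```python
import hashlib

def generate(state: bytes) -> tuple[bytes, bytes]:
    """Textbook PRG interface: (state) -> (updated state, fixed-length output R)."""
    r = hashlib.sha256(b"out" + state).digest()             # one 32-byte output block
    state_next = hashlib.sha256(b"next" + state).digest()   # state update
    return state_next, r

def generate_nist_style(state: bytes, nbytes: int) -> tuple[bytes, bytes]:
    """NIST-style interface: the caller requests a variable-length output,
    capped by the standard at 2**19 bits per generate call."""
    assert 8 * nbytes <= 2**19, "standard caps each request at 2**19 bits"
    out = b""
    while len(out) < nbytes:
        state, block = generate(state)
        out += block
    return state, out[:nbytes]
```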
So this is an example of the flexibility we're talking about, in that, depending on the limits an implementer sets on how much output can be requested in each call, we could have two instantiations of the same generator which look very different. At the same time, this highlights a gap between how PRGs look in theory and the specification of the NIST PRGs.

Forward security, which is stated as a security goal by the standard, says that if at some point in time an attacker can compromise the state of the generator, then all output produced prior to the point of compromise remains pseudo-random, even conditioned on knowledge of that state. In this work, we prove the forward security of each of the NIST PRGs, under the assumption that the initial state is constructed correctly. This is certainly a positive result. But given that, as we've just seen, the NIST PRGs don't look like the ones in the literature which this definition was designed to capture, it raises the question: is forward security covering all attack vectors against the NIST PRGs, and giving us sufficient assurance that they will remain as secure as possible in the event that the state is compromised?

To answer this question, we need to take a closer look at how output generation works in the NIST PRGs. What we find is that, under the hood of the generate algorithm, there's effectively an internal PRG, defined in the usual sense of returning a fixed-length output on each invocation. To respond to these requests for variable-length outputs, the generate algorithm iteratively calls its underlying PRG multiple times until sufficiently many blocks have been produced. Once this is done, it performs a proper state update step, which is distinct from the state updates performed by the underlying PRG, and it's this proper state update which gives forward security.
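In a simplified model, this two-layer structure looks something like the sketch below. The hash-based inner PRG is a stand-in, not the exact NIST construction; the point is that the same per-call secret is reused for every block, and the one-way "proper" update happens only once, at the end of the call.

```python
import hashlib

def inner_prg(secret: bytes, counter: int) -> bytes:
    """Stand-in for the internal fixed-output-length PRG
    (e.g. one block-cipher invocation under a fixed per-call key)."""
    return hashlib.sha256(secret + counter.to_bytes(8, "big")).digest()

def generate(state: bytes, nbytes: int) -> tuple[bytes, bytes]:
    out = b""
    i = 0
    while len(out) < nbytes:
        out += inner_prg(state, i)   # same secret reused for every block
        i += 1
    # proper state update: this one-way step is what gives forward security
    state_next = hashlib.sha256(b"update" + state).digest()
    return state_next, out[:nbytes]
```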
So as an example of this, for the CTR-DRBG this iterative process corresponds to simply running AES in counter mode with a fixed key. You'll recall that up to 2^19 bits of output can be generated in each call, so this corresponds to up to 2^12 AES computations with a fixed key in each generate call. There's a lot of AES computation going on under the hood, and this raises the question: what if there's something like a side channel, and the key used in this iterative process is compromised? Now, at the same time, there's an efficiency consideration at play here, in that these proper state updates slow things down. This is a similar observation to that made by Dan Bernstein in a blog post about the CTR-DRBG, which appeared concurrently to our work. In light of this, what emerges as an appealing choice in terms of efficiency is to generate all the output required for your application up front, in a single generate call. Then some output from this call will be used for secret values such as keys, while other output from the same call will be used for public values like nonces. So we found ourselves wondering: how secure is this approach if part of the state could be compromised by an attacker during the output generation process? And you can see that forward security tells us nothing about this, because forward security only considers the effect of state compromise after the state has been properly updated at the conclusion of a generate call, whereas for the NIST PRGs the picture really looks more like this, with potentially large amounts of computation going on over each state during the generation process. And yet this effect of state compromise during output generation has been overlooked up until now. So to address this, we came up with a new informal security model, which was inspired by the way it was shown that Dual EC could be exploited in TLS handshakes.
So we imagine using our generator to produce multiple blocks of output in a single generate call. Then we suppose the attacker manages to learn partial state information, for example via a side channel, at some arbitrary point in the call, in conjunction with a single block of output which may have been used for a public value such as a nonce. We then challenge our attacker to compute unseen output: both output produced before and after the compromised block within that generate call, and also all future output. In an ideal world, we'd really like it to be that to compute any unseen future output, an attacker should have to compromise the whole state of the generator, and that under no circumstances should they ever be able to compute past output.

So do the NIST PRGs achieve these goals? We analyzed each of the NIST PRGs within this framework, and none of them achieves all of these goals, with the CTR-DRBG faring especially badly. The high-level intuition for this is that, as you'll recall, these variable-length outputs are generated by iterating an underlying PRG, and it turns out that these underlying PRGs are certainly not forward secure, and are actually really vulnerable if part of the state is compromised, to the extent of allowing an attacker to compute unseen output. So this approach of generating all the output up front in a single generate call is rendered really quite insecure by these attacks, and of course the more output generated, the greater the damage is going to be.

So I'm going to very briefly give another example of one of these flexibilities which introduces less desirable security properties. There's an option in the standard to feed strings of additional input to the NIST PRGs, as a way of introducing a bit more entropy into the state during output generation.
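This intuition can be demonstrated with a toy version of the fixed-key iteration. The hash-based inner PRG and all names here are illustrative stand-ins, not the real CTR-DRBG: the point is only that one per-call secret drives every block, so leaking that secret lets an attacker recompute every block in the call, including ones produced before the leak.

```python
import hashlib

def inner_prg(secret: bytes, counter: int) -> bytes:
    """Stand-in for one inner-PRG invocation under a fixed per-call secret."""
    return hashlib.sha256(secret + counter.to_bytes(8, "big")).digest()

# One generate call producing 8 blocks from a single per-call secret.
percall_secret = b"hypothetical per-call secret"
blocks = [inner_prg(percall_secret, i) for i in range(8)]

# The application splits the call's output: an early block becomes a key,
# a later block becomes a public nonce.
key_block, nonce_block = blocks[0], blocks[5]

# Attacker: suppose a side channel leaks percall_secret mid-call. The
# attacker can now recompute the key block produced *before* the leak.
recovered = inner_prg(percall_secret, 0)
assert recovered == key_block
```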
And crucially, the standard allows this additional input to contain secrets. Now, there are two variants of the CTR-DRBG in the standard, one of which uses a derivation function. If the derivation function is used, additional input is first processed before it's incorporated into the state; but if it's not used, the additional input is XORed in directly. Skipping the derivation function is an appealing choice in terms of efficiency, but what we show is that this can actually be really insecure in certain circumstances: an attacker who learns the state can then easily recover these strings of additional input from subsequent public output, which is especially troubling given that the additional input may contain secrets.

So, just to wrap up: while we're not saying by any means that these generators are totally broken, what we are saying is that the overly flexible standard allows them to be used in ways which may admit vulnerabilities. Our recommendations are, firstly, that standards should not be overly flexible, and that any security claims should be proved before algorithms are standardized. At the same time, all these use cases we've flagged as less secure are desirable in terms of efficiency, and so designing PRGs which achieve an optimal balance between security and efficiency is a really important direction for future work. Finally, on the theoretical side, we've seen there's this gap between how PRGs look in theory and how they're specified in practice, so making sure that our theoretical models adequately capture real-world PRGs, rather than abstracting away all these messy details, is a really important step towards a clearer picture of what is possible here.

So that's all from me. We'll be putting a paper on ePrint next week with the details, and thank you very much for listening. Any very quick questions? Okay, let's thank the speaker again, well done. Cool.