So, this is a talk about the design that went into the new NetBSD entropy subsystem. That is the part of the system that operates behind the scenes when you try to read from /dev/urandom or similar. And it's all basically in this one diagram. So that's the whole thing. All right. Are we done? No, I'm kidding. This is just a very rough summary. I'm going to go into the background of what entropy is and why it's significant for computing, then some of the finicky practical considerations that went into why this diagram is shaped the way it is, and then a little bit about the cryptographic choices I made in the particular primitives that I used for NetBSD's new entropy subsystem. The subject of this talk is not yet in a NetBSD release; it will be in NetBSD 10 when that is released, but it is currently under development in NetBSD HEAD. So the background motivation is that computers need unpredictable secrets. Whenever you open a browser window to navigate to your bank, your computer needs to be able to generate some secrets to talk to the bank's website in a way that nobody else on the internet can eavesdrop on, or forge messages to the bank to transfer your funds somewhere else, or things like that. So basic internet protocols like HTTPS and SSH need secrets for various purposes. And operating systems also need ephemeral secrets for writing swap securely, so that someone can't just take your hard disk and later pull, say, short-term bank secrets out of what got swapped out temporarily. So what does unpredictable mean? Well, to the parties involved in a session, like an HTTPS session between your web browser and the bank's server, the secrets are very predictable, because we know what they are. What's interesting, though, is the perspective of an adversary who is on the path between these two machines and does not know what the secret keys for the cryptography in HTTPS are, but knows something about the system.
They know what software you're running. They know what machine you're running on, probably. They know some software versions. They know some information. But we hope that they don't know certain secret keys. I'm not going to go into the details of how SSH and HTTPS work; just take it as a premise that there are some secrets involved here that the parties in the protocol know and that adversaries, we hope, do not know and cannot figure out. So a lot of this is going to be about ensuring that the adversary has a limited state of knowledge, with incomplete information. In order to quantify unpredictability, we use the language of probability theory. I'm going to breeze through these slides, but you can go back and read them at your own pace; this is basic background probability theory. A probability distribution is a quantitative representation of a state of knowledge. It assigns a probability to each of various possible outcomes, like heads or tails for a fair coin toss. Of course, you can have a biased coin toss with different probabilities, but this is the most basic example. You could take a die roll, or perhaps two die rolls, and sum the results of the two rolls to get a number from two through twelve. Each of these outcomes will have some probability, as you would expect. These are just examples of probability distributions: a mapping from possible outcomes to the probability of each outcome. Now, when an adversary is involved, the adversary wins a prize if they can guess what our secret in HTTPS or SSH is. Then they can maybe transfer funds out of your account to another account somewhere in the Cayman Islands. And it's hard to get to the Cayman Islands, because it's across the ocean and you have to swim really far to get there.
So we might approach this by setting bounds on the probability that the adversary can do something bad, that is, win their prize, by considering the probability of success of their optimal strategy. We're not going to get into game theory here; it's a very simple analysis: we just look at the probabilities of the outcomes and pick out the most probable one. With a fair coin toss, the adversary's probability of guessing your coin, if they didn't actually watch you flip it, is one half. No matter what they pick, there's a one-half chance that their guess is correct. With the sum of two die rolls, the adversary gets a much better chance if they always guess seven. That's the strategy with the best probability of success at guessing the sum of two unknown die rolls; all the other outcomes have lower probability. So entropy is a numeric summary of a probability distribution, or of a process whose outcomes follow that distribution. There are various different kinds of entropy, but the one of interest in cryptography is min-entropy, which in some sense is just another way to write down the adversary's best chance of success. When you consider the experiment of a fair coin toss, the physical process, or a state of knowledge that does not contain the outcome, just the fact that a coin was tossed, there's one bit of min-entropy. For a single die roll, there are about two and a half bits of min-entropy in the distribution on possible outcomes. Curiously, if you take the min-entropy of the sum of two die rolls, it's also about two and a half bits, because even though there are many more possible outcomes, the most probable outcome still has the same probability as any single die-roll outcome. Now, computers are usually very predictable. Of course, many of us have experience trying to track down bugs that seem to defy this premise, but largely computers behave very predictably.
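To make the numbers concrete, here's a small Python sketch (illustrative only, not part of NetBSD) computing the min-entropy of the three distributions just mentioned. Min-entropy is simply minus the log, base 2, of the probability of the most probable outcome, i.e. of the adversary's best single guess:

```python
from fractions import Fraction
from math import log2

def min_entropy(dist):
    """Min-entropy in bits: -log2 of the most probable outcome's
    probability, i.e. the adversary's best-guess success chance."""
    return -log2(max(dist.values()))

# Fair coin: two outcomes, each with probability 1/2.
coin = {"H": Fraction(1, 2), "T": Fraction(1, 2)}

# One fair die: six outcomes, each 1/6.
die = {k: Fraction(1, 6) for k in range(1, 7)}

# Sum of two fair dice: 7 is the most probable sum, at 6/36 = 1/6.
two_dice = {}
for a in range(1, 7):
    for b in range(1, 7):
        two_dice[a + b] = two_dice.get(a + b, Fraction(0)) + Fraction(1, 36)

print(min_entropy(coin))                # 1.0 bit
print(round(min_entropy(die), 2))       # 2.58 bits
print(round(min_entropy(two_dice), 2))  # also 2.58 bits
```

The two-dice distribution has more outcomes and more Shannon entropy than a single die, but exactly the same min-entropy, and min-entropy is what matters against a guessing adversary.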
For the secrets in these protocols, we need to make sure the adversary cannot predict them. So we need to maximize unpredictability somehow, which is difficult when you have a machine that operates on bits and always produces the same output bits when given the same input bits. So we need some connection to device drivers, that is, to devices outside the CPU logic that is doing the reliably deterministic computation. For example, we might have a device with a Geiger-Müller tube pointed at an alpha emitter, a Geiger counter, and it counts ionizing events. Now, these are very unpredictable, in the sense that we have a good physical model from nuclear physics of how ionizing events happen, of the patterns in how they happen: the time between two events roughly follows an exponential distribution, and so the number of events within a particular time period is Poisson distributed. And there are some finicky details, like a dead time for the Geiger-Müller tube after it registers an event, during which it won't register another one. So there's some physics and engineering that goes into this stuff; it's not just CPU logic. Or we could have a device driver for a bored human who's flipping coins and typing in the outcomes: H, T, H, H, H, and so on. Of course, it's kind of inconvenient to carry around an alpha emitter in your computer. You generally don't want to do that, because it's likely to flip some bits in RAM and corrupt your data. So in practical computer systems, the most common example that we see is jitter between independent clocks. These days a lot of systems on chips, and a lot of modern x86 CPUs, have some kind of design based on ring oscillators, where you have two circuits on a die clocked independently.
One of them just has a bunch of NOT gates to provide some propagation delay, but it circles back on itself to keep flipping a bit back and forth. And another circuit independently samples what the bit is at some point in time. As long as the clocks are independent, there's going to be some jitter between them; there's going to be some thermal noise in the circuit. So, ideally, you basically can't predict what the bits are going to be without some profound knowledge of the quantum physics inside the device. Then we also have interrupt timings. You can have an interrupt handler that samples the CPU cycle counter. In a philosophical sense, this is like a ring oscillator; it's a structurally similar idea. But it's very difficult to confidently assess the entropy of the distribution of the samples. Without a very good understanding of how the device is structured, it's hard to know how much jitter there is. There's literature out there on designing ring oscillator circuits and estimating the entropy of the process, but when you have a big complex system, you can sort of hope that it's unpredictable, but it's hard to be confident in that. And in the worst case, you might wind up with hardware peripherals that are actually driven by the same clock as the CPU cycle counter, so there's no entropy whatsoever. You might look at the samples, and to an untrained eye they might look kind of random, but it turns out that under the hood it's actually a deterministic number of cycles between interrupts. That number might vary depending on what software is running, but there's nothing actually going into it from the physical world outside, and so if you know what software is running, you can predict exactly what the timings are going to be. Another thing about physical systems versus cryptography is that physical systems tend to have very non-uniform distributions.
The possible outcomes have different probabilities. As I mentioned with Geiger counters, the counts are Poisson distributed, or the durations between events are exponentially distributed; this follows from the model of radioactive decay. And so very small durations between counts are much more probable than very large durations between counts. Similarly, even ring oscillators, which are ubiquitous on a lot of systems on chips, have to be handled with care, because they don't give independent uniform random coin tosses. Consecutive samples of a ring oscillator tend to be tightly coupled, because it takes some time for the jitter between the two independent clocks to take any effect. Likewise, if you have many ring oscillators in parallel and you start them off at the same time, from the same reset signal or whatever, then they're going to be fairly closely coupled, especially if there's any sort of resonance in the silicon substrate itself, which might cause you some trouble. So even those aren't completely independent; it's not like you get a fair coin toss every time. In fact, even honest coin tosses, as some research from Stanford a few years ago in Persi Diaconis's lab turned up, even coin tosses made with your fingers when you are not deliberately trying to fix the outcome, have some small biases. In contrast, cryptography tends to want uniform distributions. It wants perfect fair coin tosses, or at least something that is not feasibly distinguishable from that. In some cases, even a very small bias, like you might have from flipping a coin with your fingers, is enough to destroy a cryptosystem, as with DSA or ECDSA signature secrets: there are lattice attacks on those which can exploit very small biases to recover the secret signing key, which is very bad.
Fortunately, in modern cryptography, we have an abundance of ways to turn a short uniform random 256-bit seed into essentially arbitrarily long streams of output that are, as far as anyone can tell, just as uniform. That is, adversaries have no hope of telling them apart from uniform. So 256 bits is enough: as long as you have 256 bits that the adversary can guess no better than fair coin tosses, that's good enough for all the cryptography you will ever need to do. There's no modern cryptographic justification for the antique idea of entropy depletion. In principle, in information theory, the number of bits of entropy of any function of a random variable can't exceed the number of bits of entropy in that random variable in the first place. But in practical cryptographic terms, you only ever need 256 bits of entropy for all the cryptography you need. So, roughly, what an operating system does is hash enough samples from physical systems together into a short, uniformly distributed seed for cryptography, which you get by reading from /dev/urandom. It takes samples from ring oscillators, samples from interrupt timings, samples from other fancy devices (if you have a USB Geiger counter, perhaps, you can wire that up), and the operating system takes all these not-very-uniformly-distributed things, hashes them together, stirs them up in a big pot, and spits out a short secret that, ideally, an adversary does not know. But it's a little more complicated than that. Suppose you take a sequence of samples, S1, S2, S3: say each is a sample of a group of 30 ring oscillators that you take every now and then, or maybe a count from a Geiger-Müller tube or something. Now, each sample in this hypothetical is from a process with fairly low min-entropy, like 32 bits.
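As an aside on why 256 bits suffice: once you have a short uniform seed, a standard extendable-output function can stretch it as far as you like. A hedged Python sketch using SHAKE128 from the SHA-3 family (this is an illustration of the principle, not the construction NetBSD actually uses for /dev/urandom):

```python
import hashlib

def expand(seed: bytes, n: int) -> bytes:
    # Stretch a 256-bit seed into an n-byte stream with SHAKE128, an
    # extendable-output function: the output is indistinguishable from
    # uniform to anyone who can't break SHAKE128 itself.
    assert len(seed) == 32
    return hashlib.shake_128(seed).digest(n)

seed = bytes(range(32))          # stand-in for 32 secret bytes
stream = expand(seed, 1 << 20)   # a megabyte of output from 256 bits
```

The same seed always yields the same stream, so all the unpredictability lives in the seed; distinguishing the stream from uniform without the seed would amount to breaking the SHA-3 family.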
So, you know, maybe you're getting a count of events over the last second from your Geiger counter. Now, if you try to use those 32 bits on their own, immediately, as the entire input to a seed for, say, an HTTPS session, that is small enough that an adversary who sees the HTTPS session can plausibly do a brute-force search for the original seed input. Even if you have a collection of physical samples from your Geiger counter, and the total entropy of all of those samples together is large enough that the adversary has no hope of predicting them, if you expose one sample at a time, the adversary can keep up. You're not exposing the sample itself, but you are exposing some cryptographic function of that sample: some hash of it that goes into computing, say, a Diffie-Hellman key whose public part gets published in a TLS session, and so on. The point is that the adversary knows what this procedure is; the adversary just doesn't know the one initial sample, S1. The adversary can then do a brute-force search, with cost around 2^32 computations of the same Diffie-Hellman key generation process, to figure out what S1 was. Using knowledge of the system and of the output on the wire of this HTTPS session, they can confirm any guess they have, and they only have to go through about 2^31 guesses on average. Then, when you do another query, you've mixed in a new sample, S2, on top of S1, and the adversary can do a brute-force search again to recover S2, and repeat, and so on. The point is that if you keep trickling out samples with insufficient entropy to prevent a brute-force search, the adversary can just keep up with you, and at the end of the day you actually have no secrets at all from the adversary, even if you thought you had a good source of entropy; it was just being paid out slowly enough for the adversary to keep up. So an operating system needs to avoid exposing samples piecemeal.
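Here's a toy Python demonstration of that iterative-guessing attack (the names and the bare-hash construction are made up for illustration; the real setting would involve Diffie-Hellman keys rather than hashes). The victim mixes one 8-bit sample into its state per query; the adversary, knowing the procedure and the starting state, recovers every sample with a cheap search per observed output:

```python
import hashlib

def output(state: bytes) -> bytes:
    # The public value derived from the secret state (a stand-in for a
    # published DH public key, nonce, etc.).  The adversary knows this
    # function; they just don't know the state.
    return hashlib.sha256(b"out" + state).digest()

def absorb(state: bytes, sample: bytes) -> bytes:
    # Mix one entropy sample into the state.
    return hashlib.sha256(state + sample).digest()

# Victim: mixes in one low-entropy (8-bit) sample per query.
state = hashlib.sha256(b"sample0").digest()  # already brute-forced earlier
samples = [b"\x2a", b"\x07", b"\xc3"]
outputs = []
for s in samples:
    state = absorb(state, s)
    outputs.append(output(state))

# Adversary: knows the initial state, recovers each sample from each
# output with a 2^8 search, and keeps up with the victim indefinitely.
guess_state = hashlib.sha256(b"sample0").digest()
recovered = []
for out in outputs:
    for g in range(256):
        cand = absorb(guess_state, bytes([g]))
        if output(cand) == out:
            guess_state = cand
            recovered.append(bytes([g]))
            break

print(recovered == samples)   # True: piecemeal entropy is no entropy
```

Scale the per-sample search from 2^8 up to 2^32 and it's still entirely feasible, which is exactly the scenario described above.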
It needs to group them into batches with enough aggregate entropy, from all the sources, that the adversary has no hope of achieving anything with a brute-force search. Now, to make this happen, you want to gather as many samples from physical processes as you can. So maybe you want to take a sample of the CPU cycle counter, or some other time counter, to get a sort of simulacrum of a ring oscillator, in all of your interrupt handlers: every time something new happens, look at your watch and enter that into the pool of samples to be hashed together. But hashing the samples together costs some computation. There's cryptography involved, so there's some latency, and in NetBSD we do a couple of things to mitigate that so the costs don't become prohibitive. One thing is that we gather samples into per-CPU pools. That way, when you want to take a sample, you don't have to take a lock, and you don't have to do any atomic operations that trigger interprocessor communication. You can work entirely in CPU-private memory that is never touched by another CPU's accesses, so there's a good chance it's cached, and even if it's not, you don't have to fight with another CPU over the cache line. We do go to some extra effort early at boot to make sure that samples get distributed quickly, but aside from that, samples are entered into per-CPU pools with no locking overhead whatsoever. And we put them into small per-CPU buffers before doing any cryptographic computations on them. It will turn out later that the computation is the Keccak permutation, just like inside SHA-3. The point is that the cryptographic computation costs on the order of 3,000 cycles per step on average.
So we store samples into a per-CPU buffer, and during interrupts we never do the cryptographic computation; we just drop any additional samples if the buffer is full. That way we avoid introducing latency into interrupt handling. There is a minor caveat right now: some interrupts do block for longer than they should, but that's a technical detail that we're working out, with some fiddly engineering concerns deep inside NetBSD that aren't that interesting, so I'm going to skip over it. Now, another thing is that while you want to gather as many samples as you can, if you have a lot of applications running that are generating keys, you don't want those applications to have to contend over a single global resource to generate keys. It's really enough to use 256 bits of entropy: once you have a seed, you can efficiently turn it into as many different, arbitrarily long streams as you need for applications' keys. So we draw /dev/urandom output from per-CPU pseudorandom number generator state, for scalability. And to seed it, we have a global entropy epoch counter that increments every time the entropy subsystem decides it's time for everything to be seeded or reseeded, for whatever reason. That doesn't happen very often; it's rate-limited to once a minute, if I recall correctly (you'd need to double-check that). This enables us to lazily reseed the PRNGs that are actually in use: if a PRNG is not currently being used, there's no need to do any computation to feed a seed into it. And when we're not reseeding, which is most of the time, on systems with hardware random number generator devices this is totally parallelized and there's no contention over any shared resources. So there's a problem, of course, which is: if you don't have enough entropy, you need to decide what to do.
A lot of modern big machines, x86 machines from the past decade, have RDRAND/RDSEED, but not all of them: some low-end Intel CPUs don't have it, and a lot of AMD CPUs that are more than a couple of years old don't either, so it's a mixed bag. Big servers are pretty much guaranteed to have it these days. In the ARM world, the instruction set gained something similar to RDRAND/RDSEED in ARMv8.5, but I haven't actually witnessed any ARMv8.5 hardware in the real world, so I developed the NetBSD support for the RNDRRS instruction in QEMU. A lot of newer SoCs have hardware RNGs based on ring oscillators. For these machines this is not a concern: you always have enough entropy, because you have a hardware device that can provide it immediately at boot, and then you're done and you're good. But not every machine has these, and sometimes there are virtual machines that don't, where the virtual machine host doesn't have an RNG device exposed to the guest. So in some cases you don't have a hardware entropy source at hand. Well, in NetBSD you can store a seed on disk, like in many other systems, and NetBSD will automatically update it on boot, on shutdown, and every day in the daily security script. Updating it on boot guarantees that, unless the adversary can read old versions of the file (which is plausible in some cases), the adversary can't go back and find old states that /dev/random was in on prior boots in order to regenerate keys, which would let them eavesdrop on old TLS sessions or something. On shutdown, and daily, NetBSD gathers all the entropy that has been collected into the per-CPU pools and makes sure it is stored in a batch on disk for the next boot. So if you started a machine out with a seed, NetBSD will maintain that across boots, and this is standard practice in many OSes.
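The key property of the on-disk seed update can be sketched in a couple of lines of Python (a hypothetical helper of my own, not the actual rndctl logic): the new seed is a one-way hash of the old seed plus freshly gathered pool output, so reading the current file tells you nothing about the seeds of prior boots, and the old seed's entropy is carried forward even when no new entropy is available.

```python
import hashlib

def update_seed_file(old_seed: bytes, fresh_pool_output: bytes) -> bytes:
    # New seed = one-way hash of (old seed || fresh entropy).  The update
    # can't be run backwards to recover prior seeds, and even with no
    # fresh entropy at all the old seed's entropy is preserved.
    return hashlib.sha256(b"seedfile-update" + old_seed + fresh_pool_output).digest()

boot1 = update_seed_file(bytes(32), b"entropy gathered before shutdown")
boot2 = update_seed_file(boot1, b"")   # no new entropy: seed still moves forward
```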
There are some finicky details in how the seed actually gets updated; I'm not going to go into them, but it's the rndctl command in NetBSD that makes sure it gets updated safely. But what if you don't have a hardware RNG and you don't have a seed on disk? Well, the traditional answer is: whenever an application wants to generate a key, you just make it wait. I'm sure many of you have seen the message from GPG telling you to bang on the keyboard like a monkey in order to get enough entropy. Now, this message is annoying partly because GPG doesn't use a very good algorithm for generating the RSA key: it repeatedly asks the operating system for more bits when it could just use a PRNG in userland. But fine. This is also pretty annoying on servers where you're trying to generate a key in a script, headless, and there's no operator to intervene and bang on the keyboard like a monkey. And it doesn't even really make that much sense, because historically the premise was (Linux still does this today, NetBSD up to NetBSD 9 did it, and many other operating systems did as well) that we just examine the samples that go into the entropy pools and make up an idea of what the entropy of the underlying process is by examining consecutive differences. We literally just compute the difference between two 32-bit samples as integers, and maybe the difference of the differences, and if, after taking enough layers of consecutive differences, it came back nonzero, then we'd say: okay, great, this process must have had some entropy in it. Now, this algorithm is designed without any reference to the actual physical device that produced the samples. It has nothing to do with the physics behind a ring oscillator, nothing to do with the physics behind a Geiger counter. In Linux, and in NetBSD 9, it was independent of what the device was: the same algorithm was applied to all samples irrespective of where
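A caricature of that consecutive-differences heuristic in Python (my own simplification, not the exact Linux or NetBSD 9 code) shows why it's so easy to fool: any sequence that isn't a low-degree polynomial in its index gets credited with entropy, no physics required.

```python
def naive_entropy_estimate(samples, max_order=3):
    # Legacy-style heuristic: take successive differences of 32-bit
    # samples up to max_order times; if anything nonzero survives,
    # credit the source with entropy.  This knows nothing about the
    # device that produced the samples.
    seq = list(samples)
    for _ in range(max_order):
        seq = [(b - a) & 0xFFFFFFFF for a, b in zip(seq, seq[1:])]
    return 1 if any(seq) else 0

print(naive_entropy_estimate([0, 1, 2, 3, 4, 5]))  # 0: a bare counter
print(naive_entropy_estimate([3, 1, 4, 1, 5, 9]))  # 1: "entropy"!
```

The second sequence is the decimal digits of pi, perfectly deterministic, yet the heuristic happily credits it, which is exactly the problem with estimating entropy from the samples alone.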
they came from. But in NetBSD 10, and also in FIPS these days (there are new standardization changes in FIPS certification), the rule is that any estimate of entropy that we count up in the system, to determine whether to block, has to be based on how the device actually works. In NetBSD, the rule these days is that the driver author should either have some reference to literature on how the device works, or at the very least have a promise from the vendor that the device is designed to have a certain amount of entropy in each sample. So there are some drivers in NetBSD that reference data sheets saying, for instance, that there are 120 ring oscillators sampled in parallel, so we can credit a certain amount of entropy per sample. But at the very least, drivers for devices that were not designed to be unpredictable do not contribute to the system's account of whether you have enough entropy. Of course, the trouble with this in practical terms is that you can't guarantee nonzero entropy, and thus stop blocking, if you don't have a hardware RNG, that is, a device that was designed so that you can confidently claim it has nonzero entropy. For example, on certain NetBSD systems (I was talking with one of the developers about this), it turns out that the timer interrupt and the CPU cycle counter are driven by exactly the same clock, so there's no jitter between them. If you try to use a periodic timer interrupt to gather entropy, as various embedded systems have tried to do, you wouldn't actually win anything against an adversary who knows that's how your system works, because the adversary knows what the software is, and they can run the same software, and it will have the same relation between the CPU cycle counter and the timer interrupts. Now, maybe there's some other entropy, in something like the latency of RAM accesses, but then that's not the timer's entropy;
that's from the RAM instead. So it's hard to give a confident assessment that you have nonzero entropy in a system like this, where you have no hardware random number generator, no seed on disk, and just, you know, a periodic timer interrupt, which historically was often used as a makeshift hardware random number generator but in some designs might actually be doing nothing whatsoever for unpredictability. And then, of course, if you can't guarantee entropy, and you require that key generation block until you have enough, then a network appliance that you just plug into your private network (not the internet, just your private network with your laptop) might seem like a brick, because key generation blocks until it has entropy, which it never gets, because there's no hardware random number generator. Well, in the case of the Raspberry Pi, most of them do have onboard RNGs, but for a device that doesn't, this presents a very serious usability issue. So one approach is to just say: well, that's a bad usability issue, so we should fabricate estimates of what the entropy is based on interrupt timings, and yeah, it might be good enough. It's probably good enough, right?
Even if we don't actually have any grounding for confidently claiming anything is unpredictable. That is actually what Linux does these days, and what NetBSD did for a long time, and many other systems do as well. But it's also kind of lying to you, because it's saying: well, we don't know that this has any way to actually be unpredictable in the face of an adversary, but we're just going to pretend it is, because it's more convenient that way, more usable that way. And there is a good usability argument for this, but it's also kind of dishonest. So last year in NetBSD we experimented with introducing the getrandom() system call from Linux, which is slightly different from reading from /dev/random. I'm not really going to go into the details, but it blocks sometimes and doesn't block at other times, and the experience has been actually fairly negative. What happens is that when an application needs to generate a key, it's deep in the belly of a bunch of logic that isn't directly connected to a human using the system. It's in the middle of a build process, in the middle of a Python process that is just trying to import the multiprocessing module, and the whole build process hangs, and that's it. That's the feedback you get: the build is stuck. Which is not actually relevant; it's not even doing anything on the internet. The reason the multiprocessing module hangs is that it generates a key that it might use if you were to use it over the internet, but it doesn't necessarily get used over the internet. Sometimes there are workarounds: when you import the random module, Python specifically asks for a never-blocking path. But that never-blocking path isn't available except internally to Python, and for the multiprocessing module Python has no option for asking it not to
block in this case. It's not very helpful, for a big stack of different things in a mostly automated build process, with nested components that the operator is not really paying attention to, for things to just hang. So instead, in NetBSD, we still use the estimates of the entropy of the processes, via the device drivers, that goes into the entropy pool, but we try to use them to notify an operator of a potential security problem in other ways. First, we offer the option (it's currently under development and being changed at the moment) for the operator to furnish a seed when installing, if there's no hardware random number generator. And if there isn't enough entropy, then the daily security report, which goes out to the operator by email (and again, a network appliance may not have email set up, but this is a mechanism we have for alerting operators to security problems), will alert them. We might put a one-liner in the MOTD with a reference to the entropy man page. But we also need to be careful to avoid warning fatigue. With certificate click-throughs for self-signed certificates, which are not a security problem worse than plain HTTP, people got trained to ignore warnings, because it was such a pain to click through them. So we need to be careful to avoid warning fatigue. This is still under discussion, but we might remove what I consider to be the failed getrandom experiment, which is currently in HEAD, and instead switch to the much simpler, never-blocking getentropy() from OpenBSD, which it looks like POSIX is likely to adopt soon. Discussion is ongoing; this is not a promise, just a thing we're working on behind the scenes. Finally, I just want to go over briefly some of the cryptographic choices I made. For the entropy pools, which I guess I didn't
put in the diagram at the end. Anyway, for the entropy pools, which gather samples from physical systems, we use an algorithm based on Keccak, related to SHA-3, which lets you feed in a sample and fetch a string that has roughly as much entropy as the original sample did, but looks uniformly distributed. This algorithm is convenient because it doesn't have any entropy loss, in the sense that if you had all of the inputs and outputs except one, you could always recover that one. So we're never losing any information, in a certain sense. We do lose information when we discard the original samples and discard the outputs, but the crypto doesn't do that itself. The device drivers obviously will try to discard their samples as soon as they've been entered into the pool, but in principle we don't have to worry about the entropy of the outputs being substantially less than the entropy of the inputs, and the security is close to the security of SHA-3; there are references in the footnote here. For generating the output of /dev/urandom, we use the NIST SP 800-90A Hash_DRBG with SHA-256. DRBG is NIST's funny term for a pseudorandom number generator; it stands for deterministic random bit generator, because NIST had to be special in their terminology. I picked this a couple of years ago because, of the SP 800-90A constructions, it was the simplest, it admitted the simplest security story, and it would not invite timing side-channel attacks the way that, say, AES CTR_DRBG does. We used to use CTR_DRBG with AES, until some timing attacks got published that were exhibited on NetBSD in particular. That said, I also rewrote the AES code in the NetBSD kernel last year to eliminate the timing attacks on all platforms, at some cost in performance on some of them, so we could go back to CTR_DRBG. But it's not really a big deal, because scalability matters more than raw throughput for queries: mostly, applications just need to generate 32
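For a feel of the Hash_DRBG's shape, here is a heavily simplified Python sketch after SP 800-90A with SHA-256 (it omits the reseed counter, additional input, and personalization string, so it is not a conforming implementation and should not be used for real keys):

```python
import hashlib

OUTLEN = 32  # SHA-256 output, in bytes

def hash_df(data: bytes, nbytes: bytes) -> bytes:
    # Simplified hash-based derivation function in the style of
    # SP 800-90A: hash a counter, the requested bit length, and the data.
    out = b""
    for ctr in range(1, -(-nbytes // OUTLEN) + 1):
        out += hashlib.sha256(
            bytes([ctr]) + (8 * nbytes).to_bytes(4, "big") + data).digest()
    return out[:nbytes]

class HashDRBG:
    # Stripped-down SHA-256 Hash_DRBG: state is V and a constant C,
    # both seedlen = 440 bits for SHA-256.
    SEEDLEN = 55  # bytes

    def __init__(self, entropy: bytes):
        self.V = hash_df(entropy, self.SEEDLEN)
        self.C = hash_df(b"\x00" + self.V, self.SEEDLEN)

    def generate(self, nbytes: int) -> bytes:
        # hashgen: hash successive increments of V to produce output.
        out, data = b"", int.from_bytes(self.V, "big")
        while len(out) < nbytes:
            out += hashlib.sha256(data.to_bytes(self.SEEDLEN, "big")).digest()
            data = (data + 1) % (1 << (8 * self.SEEDLEN))
        # Update V so the output can't be recomputed from a later state.
        h = hashlib.sha256(b"\x03" + self.V).digest()
        v = (int.from_bytes(self.V, "big") + int.from_bytes(h, "big")
             + int.from_bytes(self.C, "big")) % (1 << (8 * self.SEEDLEN))
        self.V = v.to_bytes(self.SEEDLEN, "big")
        return out[:nbytes]
```

The appeal, as mentioned, is that the whole construction is just hashing: there are no secret-dependent table lookups of the kind that invite cache-timing attacks on table-based AES.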
bytes at a time from DEVU random and then they can use that to see to open files PRNG or whatnot now why do we use both the catch up duplex and the hash DRBG the idea was that this would make it easier to approach FIPS these certification-y stuff which I haven't actually done but in general nobody ever got fired for choosing US federal government crypto at least not in the western world maybe in Russia people get fired for doing that but and FIPS at least used to be although the standards are evolving right now less picky about condition components like the entry pool then about the DRBG that actually generates the output that applications see so if we use a kind of unusual but convenient bespoke algorithm for the entry pool and then a very standard NISTy FIPSy thing for the actual output of DEVU random and it varies from the system to use the same DRBG or similar to the SSB800 DRBG for DEVU random output and get random output and that's that's about it. If anyone has any questions I'm going to go back to the diagram the beautiful diagram of how the system is organized I hope I'm still here am I still here? 
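To make the "32 bytes at a time" shape of the interface concrete, here is a simplified Python sketch of the SP 800-90A Hash_DRBG generate path with SHA-256. This is not NetBSD's kernel code: it omits reseeding, additional input, and personalization strings, and the class and helper names are my own, purely for illustration.

```python
import hashlib

SEEDLEN = 55  # 440 bits, the SP 800-90A seed length for SHA-256


def _hash_df(data: bytes, nbytes: bytes = None, n: int = 55) -> bytes:
    """Hash-based derivation function (simplified Hash_df)."""
    out = b""
    counter = 1
    nbits = (n * 8).to_bytes(4, "big")
    while len(out) < n:
        out += hashlib.sha256(bytes([counter]) + nbits + data).digest()
        counter += 1
    return out[:n]


class HashDRBG:
    def __init__(self, seed: bytes):
        # Derive the working state V and constant C from the seed.
        self.V = _hash_df(seed)
        self.C = _hash_df(b"\x00" + self.V)
        self.reseed_counter = 1

    def generate(self, nbytes: int) -> bytes:
        # hashgen: hash successive increments of V to produce output.
        out = b""
        data = int.from_bytes(self.V, "big")
        while len(out) < nbytes:
            out += hashlib.sha256(data.to_bytes(SEEDLEN, "big")).digest()
            data = (data + 1) % (1 << (8 * SEEDLEN))
        # Update V so earlier outputs can't be reconstructed from later state.
        h = hashlib.sha256(b"\x03" + self.V).digest()
        v = (int.from_bytes(self.V, "big") + int.from_bytes(h, "big")
             + int.from_bytes(self.C, "big") + self.reseed_counter)
        self.V = (v % (1 << (8 * SEEDLEN))).to_bytes(SEEDLEN, "big")
        self.reseed_counter += 1
        return out[:nbytes]


drbg = HashDRBG(b"32 bytes of real entropy would go here")
key = drbg.generate(32)  # the typical one-key-at-a-time request
```

The point of the sketch is the division of labor the talk describes: the conditioned pool supplies the seed, and the DRBG stretches it deterministically, 32 bytes or so per request, with a state update between requests.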
Okay, good. Okay, so there's one question from chat, from Nia. Two questions, actually. The first question: where can I get a USB Geiger counter with documentation, so I can write a device driver for it? Unfortunately, I don't actually know where to get a USB Geiger counter, but if you find one, please let me know, because I'm curious. I must also admit I have never used a Geiger counter myself as a hardware random generator; maybe someday. But yes, they're kind of finicky: you need to worry about the dead time in order to conservatively underestimate the entropy of the samples, and then you have this alpha emitter that you have to handle carefully. Like, you could use a sample of polonium-210, but you maybe don't want to have a sample of polonium-210 around, and it's also kind of hard to procure. Anyway, the second question: are there reasons other than FIPS that drove me to pick a Keccak-based construction instead of, say, Fortuna, which is another construction for random number generators, used in FreeBSD and Windows? So, Fortuna is another system for trying to avoid iterative-guessing attacks, in a certain sense, by using an array of different entropy pools that are combined into the main output at different times. I studied the design for a while, and I had trouble ascertaining what security it actually provides over simpler systems that don't involve this funny schedule of different pools; I'd have to review the design again, it's been a while since I looked at it. Also, Fortuna does not, if I recall correctly, specify a particular crypto primitive, so it is more comparable to the set of per-CPU pools than to the specific choice of Keccak-based primitive. And Fortuna doesn't avoid contention over the pools. I specifically wanted to make entering and extracting samples scalable: we've had problems in the past with taking samples in network drivers, where in order to get high throughput we had to just turn off the sampling, because it was way too much overhead. And that was when we had to take a global lock in order to enter the sample, which is real bad if you're trying to do multi-queue network traffic spread across multiple CPUs: all of it gets serialized by taking these entropy samples. With Fortuna, the schedule isn't determined by which CPU the sample came in on, but by something else. I think that most Fortuna implementations, in FreeBSD and macOS, just use a vanilla hash function (I'm not sure about that), but that doesn't have the nice property of guaranteeing entropy preservation the way the Keccak construction does. Next, a comment: a device with an avalanche diode is easy to build. Most likely! It is another fairly common design that I didn't mention, along with ring oscillators. I feel like on SoCs I've almost always seen ring oscillators, but yeah, you could use an avalanche diode: they're fairly well understood, there's literature on them, and they're easy to work with. And you don't have to procure a sample of polonium-210, which is a bonus. Next question: are there any special considerations in practice when setting up NetBSD VMs?
Yes: set up the host so that it provides a virtio-rng device. If you do that, and if the host has its own good entropy pool, then the guest will be just as good as the host is. If you don't have virtio-rng, then the next best thing would be to draw a seed from either the host or something else and store it on the guest file system. Of course, it has to be independent for every guest you create: you don't want to create one image and then replicate it across many different machines, because then they will all have each other's secrets. Unfortunately, not all VMs provide an entropy source that I know of. I seem to recall Amazon EC2 does not make it easy to get at an entropy source in their Arm guests; if you know how to do that, let me know, and I would be happy to put it in the entropy man page in NetBSD, which, by the way, you can read at the link I'm putting in the chat. That's the current entropy man page, focused mostly on users rather than on the design; the design is summarized briefly in the rnd(4) man page. Next question: NetBSD 10 is overdue; is entropy still blocking its release?
Well, yes, unfortunately: the getrandom experiment is still in head, and things are blocked because of it. But entropy is not actually the only thing that's blocking the NetBSD 10 release; we're also working on other things, like updating the DRM graphics drivers, and lots of other things as well. All right, so do we have any more questions anyone would like to ask? I don't even know who's here; I guess there's a list of people logged into this thing. Interesting. Wow, that's a lot of people. Okay, I'm talking into a void here. All right, so I think my time is almost up; the time for my talk is just about up.

I think, actually, the room restarts at something like 18:25 for recording reasons, so we're almost done. I would like to thank you, on behalf of the program committee and the organizing committee, for the interesting talk, and I think we'll probably be moving on to the closing session in just a few minutes. So thank you very much, Taylor, for a very interesting talk.

Okay, well, thank you, and feel free to send me email if you have any further questions; I'd be happy to talk about entropy in other media. Stay healthy, folks.