OK, I guess I'll talk about how to generate randomness in a more certifiable manner. This is joint work with my former student Thomas Vidick, who is now at MIT.

So what's the main question? We are given some source of randomness, and we want to certify that the output is truly random, meaning independent and identically distributed. If these are physical dice, then it's clear what you do: you collect samples and check that they are uniform. But the problem I really want to talk about is where the dice are figurative, some sort of physical random number generator out of which you want to generate, say, 10,000 random bits. And this device has memory; it's not like dice, which don't remember what you tossed before. So now how do you verify that the output is random?

Here's a model for it. You're given a black box. You don't know what's inside it; all you get to see is its 10,000-bit output. And because 2^10,000 is so large, let's abstract things away and say that you only get to use this device once. You get as output one 10,000-bit string, and you want to certify either that it's random or that it's not. That's your task.

Let me spell out the rules of the game a little more carefully. I said it's a black box, but let's say that you're even allowed to say what should be in the black box. You can specify exactly what goes into it, except that the person who made the box may not listen to you: what's inside is either what you specified or something completely different, and you don't know which. You use the box only once, and then you test the output. The two conditions you need to satisfy are these. First, if the box was manufactured according to your specifications, then whatever testing you do, the output should pass that test with very high probability. Second, if the output passes your test, then no matter what's in the box, the output should be statistically close to random. Those are the two conditions we'll ask for.

What I'm raising is a somewhat philosophical question, but random number generators go back a long way. For the very first commercially available general-purpose computer, back in 1951, Alan Turing was involved in trying to build a physical random number generator, and ultimately gave up. So that first computer did not have a random number generator, and I don't know if any of them since then have had one. But just in the last few months, Intel announced a new processor, known as Ivy Bridge, which will actually generate true random numbers.

OK, so why do we want to do all this? You all know that random bit sequences are used all over the place, but the requirements depend on the application. In cryptography you care about not just randomness but also privacy, and the same goes for game-theoretic protocols. For physical simulations, you don't care if the random numbers are secret; you just care that they are random enough, and so on. For many of these applications, pseudorandom number generators are sufficient. But there are security issues, and if you use a cryptographic pseudorandom generator, you can of course prove that it has strong security properties.
But then your conditions on the seed become even more stringent, because now it's not clear how much correlations in the seed are going to hurt you once you expand out the randomness. So again, true random number generators would be ideal, but they suffer from various problems. There are problems just finding good sources of randomness that you can exploit not just once but repeatedly over time, and the random number generator has to work over many, many conditions. And underlying it all is the philosophical question I outlined: even if you built the perfect random number generator, how do you test whether it's good? So this talk is focused on that philosophical question, can we or can we not test randomness, not on the practical versions.

OK, so there was a computer science approach to this problem, taken starting in the mid-80s. This approach conceded that we cannot really test for randomness. Instead, we assume that we are given a device which is not perfect: somebody built a device, and they guarantee us that there's some randomness in the output, whatever that means. And now the question is how best you can use this through algorithmic means. The way this was modeled was by saying that the output is not independent, unbiased bits, but highly correlated, low-quality randomness. Moreover, it's controlled by an adversary: the adversary is told the minimal conditions that we require from the box, meaning the amount of randomness that needs to be output, and subject to that, the adversary does the worst it can to thwart your algorithm.

Now there are two ways you can use these low-quality sources of randomness. One way is to assume that you have multiple, say two or more, such devices, and that you're able to physically isolate them from each other so they don't talk to each other. The question then is, given the outputs of two such devices, can you find a way of combining those outputs to efficiently create a string which is very close to uniformly random? The other way is what are called extractors. You take the output of this box, say n bits, you add in log n truly random bits, whose source we are agnostic about, and then you massage these two together to get an output which you can prove is extremely close to random in total variation distance.

So those were the two approaches. And even though I don't know if this argument was ever made formally, extractors clearly seemed like the end of the road. What else can you do? You've got to assume some randomness. You can imagine that everybody who worked in this field went through the same mental process: you have to assume some randomness in the output of the source; extractors make only minimal assumptions about the min-entropy of the source; they assume log n truly random bits, and this amount seems essential.
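To make the first of those two approaches concrete, here is a minimal sketch of a standard two-source extractor, the inner product modulo 2. This is textbook material (usually credited to Chor and Goldreich), not something from this talk: if the two sources are independent and each has min-entropy somewhat above n/2, the output bit is close to unbiased.

```python
import random

def inner_product_extractor(x_bits, y_bits):
    """Inner product mod 2 of two equal-length bit strings. A classic
    two-source extractor: given two independent weak sources, each with
    min-entropy somewhat above n/2, the output bit is nearly uniform."""
    return sum(x * y for x, y in zip(x_bits, y_bits)) % 2

# Toy usage with two 16-bit "devices" (here simulated honestly at random).
rng = random.Random(42)
x = [rng.randint(0, 1) for _ in range(16)]
y = [rng.randint(0, 1) for _ in range(16)]
print(inner_product_extractor(x, y))
```

One run extracts a single bit; extracting many nearly uniform bits with short seeds is exactly what the more sophisticated constructions mentioned above achieve.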
So maybe these are the best results you can hope for. Well, it turns out that you can get past these seeming barriers using quantum mechanics. Now, one would imagine that randomness and quantum mechanics are connected in a very fundamental way, so if you want to generate randomness, you should use quantum mechanics. But that's not the issue here. The issue is not how you generate randomness; the question is how you certify that what you've generated is truly random, without assuming what went into the box. And so it's more than a bit of a surprise that you can actually get past these barriers in a quantum world.

OK, so let me give you a little background about the kind of quantum phenomena we'll need to think about for this purpose. The phenomenon we'll exploit is called quantum entanglement. The first people to think about it were Einstein, Podolsky, and Rosen, in an attempt to show that quantum mechanics was an incomplete theory. I'm sure you all know that Einstein didn't really like quantum mechanics; it didn't sit well with him. There's his famous quote about how God doesn't play dice with the universe. So the EPR paradox was invented to try to show that there are holes in quantum mechanics, that it's incomplete. Quantum entanglement is what Einstein called spooky action at a distance; he said it in German, where it sounds much more pithy, but that's what it meant. And it was actually Schrödinger who distilled the notion of quantum entanglement from that paper. He then made a really interesting, very forward-looking statement for that time: he said, I would not call entanglement one, but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought. That's quite remarkable for 1935.

Both of these papers came out in 1935, and it wasn't until about 30 years later that John Bell realized that there was in fact an observable consequence to entanglement. Instead of doing thought experiments like EPR did, you could use it as the basis of a real experiment: entanglement gives rise to non-classical correlations, which distinguish it from classical correlations, and therefore quantum mechanics is incompatible with the local realism that Einstein was trying to promote. So there's actually an experiment you can do which tells you one way or the other. This experiment has been done many times with increasing accuracy, and every time it's been done, quantum mechanics has come out on top.

OK, so what I'll describe to you is this Bell experiment, but in a more modern form, which is much cleaner. It's called the CHSH game, after its inventors. The game is this: Alice and Bob are given input bits x and y.
The bits x and y are chosen uniformly at random, and Alice and Bob are asked to output bits a and b. What they're trying to do is maximize the probability that x·y = a + b mod 2. Now, if you think about it for a minute or two, you can easily convince yourself that if Alice and Bob are classical, the best thing they can do is just output zeros. Three quarters of the time x·y is zero, and they'll agree with that; there's just no helping the remaining quarter. No matter what you do, you can only make things worse by trying harder. On the other hand, if Alice and Bob share entangled qubits, then they can achieve a success probability of cos^2(π/8), which is about 0.85. This is essentially the Bell inequality: classically, and in fact in any local theory of a certain kind, you can't do better than 0.75, while quantum mechanics predicts something much larger.

Since this will play a bit of a role, let me actually describe how this goes. By saying that Alice and Bob share entanglement, here's what I mean. The canonical entangled state of two qubits is the maximally entangled Bell state: a superposition of 00 and 11 with equal amplitudes. This state has the property that if you measure the first qubit, you see 0 or 1 with equal probability, and if you measure the second qubit, you see 0 or 1 with equal probability. And none of this has anything to do with the proximity of the qubits: they could be very far apart and still be entangled.

So now, what happens if these qubits are very far apart, you measure the first one, and then, before light has time to reach the second one, you measure the second one? Well, first of all, what happens if they are next to each other? If you measure the first one and it's a 0, it turns out the second one will also be a 0, and the same if it was a 1. The point is that this happens no matter how far apart the qubits are. This is something that bothered Einstein greatly, because it seemed to involve signaling faster than light.

Actually, here's a phenomenon that's even more disturbing about these qubits. So far you could say: who's to say this description was correct, that the state was 00 plus 11? Maybe with probability half it was 00 and with probability half it was 11; there's nothing strange about that. But now, 0 and 1 are orthogonal states in a two-dimensional complex vector space. That's what quantum states are: linear superpositions, which can be written as vectors. And so you could choose a different orthogonal basis, a rotated basis u, v.
And that's a perfectly good basis, so you could try to write out the state ψ in this u, v basis. It turns out that no matter which orthogonal basis you pick, the state looks like an equal superposition of u,u and v,v. Meaning that if Alice were to measure her qubit in the u, v basis and Bob were to measure his in the u, v basis, they'd always get the same outcome.

Now, here's another fact; let's back up a little bit and talk about this a little more. Take the 0,1 basis and a u,v basis rotated by some angle θ, especially θ = 45 degrees. For a qubit, these two bases correspond to something like position and momentum: there's an uncertainty principle between them. If you're in a state where the bit is certain, you're either in the 0 state or the 1 state, then in the 45-degree basis you are maximally uncertain, and vice versa. So if we call the one observable the bit and the other the sign, then there's an uncertainty relation between bit and sign which says that you cannot know both of them with certainty; in fact, there's a certain minimum uncertainty between the two.

So now you could ask: isn't there a way to get around this uncertainty principle by using entanglement? I'll take my particle; I want to know both the bit and the sign. So I'll entangle it with another particle, move them very far apart, and then measure the bit here and the sign there. And since they both have the same bit and sign, they have the same state, so I'll figure out both these quantities and bypass the uncertainty relation. The point is, this is not how things work. If you work it out, it's actually very different. Alice measures her qubit in the 0,1 basis and gets some outcome, say a 0. So now what does Bob see? Well, as soon as she sees a 0, Bob's state is a 0, because that's what we said: if she gets a 0, he gets a 0 with probability one, if he were to measure. So his state is 0. And now, if you measure the state 0 in this u, v basis, you see u with probability cos^2 θ and v with probability sin^2 θ. It has nothing to do with the original sign of the state; the state got changed, and that's how it works out.

The lesson from all this is: if Alice and Bob share this Bell state and measure in different bases, then the chance that they see the same outcome is cos^2 θ, where θ is the angle between their bases.

OK, so how do you play the CHSH game? Let's go back to the CHSH game and see how Alice and Bob, using entanglement, would actually achieve cos^2(π/8). Remember the game: Alice and Bob share this entangled pair of qubits, Alice gets a random input bit x, Bob gets a random input bit y, and they want to produce a and b which maximize this probability. Here's what Alice and Bob do.
If x is 0, Alice measures in the basis rotated by +π/16; if x is 1, she measures in the basis rotated by −3π/16. Bob, if y is 0, measures in the basis rotated by −π/16; if y is 1, in the basis rotated by +3π/16. Why on earth do they do this? Well, there are four pairs of choices for Alice and Bob, and you pick these numbers so that the angle between Alice's and Bob's bases is exactly π/8 in three of the four cases, the ones corresponding to x·y = 0, and 3π/8 in the last case, where x·y = 1. In the first case, the outputs agree with probability cos^2(π/8), which is what we wanted. In the second case, the outputs differ with probability cos^2(π/8), which is again what we wanted, because 3π/8 is π/2 minus π/8. So in every case you get the right answer with probability cos^2(π/8). It smells of an SDP, doesn't it? Indeed, you can show that this is optimal: you cannot achieve anything better than this in quantum mechanics.

OK, so now let's unroll back to where we started and see what this might have to do with random number generation. There was a beautiful observation made by Colbeck in his PhD thesis a couple of years ago. He said: suppose you play this game, and suppose somehow you were able to test that the probability that x·y = a + b mod 2 is greater than 0.75, meaning that you're in the quantum regime and can't any longer be in the classical regime. Well, then you know that the output couldn't possibly be deterministic. Why? Because if a and b were deterministic, then in particular they'd be classical, and the 0.75 bound would hold. So if the bound is exceeded, you know there's some randomness in there; you couldn't possibly do this without randomness.

So that might suggest you could generate random numbers and certify them. You run this game over and over again, and you test whether the outputs violate this 0.75 condition, the Bell inequality. If they do, then you know the output must contain some randomness, and then you can always use an extractor to get that randomness out in a nice form. There's only one problem: generating the inputs for all those games seems to require much more randomness than this whole procedure produces in the first place. That seems to be the rub.

So then last year, a group of people, a very long list of authors, because it also included some experimentalists, Pironio et al., published a paper in Nature where they showed there's a protocol that uses only order √n truly random bits to generate these inputs x and y. So imagine you have Alice and Bob; they share entanglement; you play this repeated CHSH game; you input x_i, y_i and get outputs a_i, b_i.
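Before going on with the Pironio et al. protocol, here is a minimal numerical check of the CHSH numbers above; a simulation sketch of my own, not from the talk. It uses the fact derived earlier, that measuring the Bell state at angles t_A, t_B gives equal outcomes with probability cos^2(t_A − t_B), and it brute-forces all 16 deterministic classical strategies to confirm the 0.75 bound.

```python
import itertools
import math
import random

# Measurement angles (radians) for the strategy described above.
ALICE = {0: math.pi / 16, 1: -3 * math.pi / 16}
BOB   = {0: -math.pi / 16, 1: 3 * math.pi / 16}

def quantum_round(x, y, rng):
    """Sample (a, b) for one round: on the Bell state, measurements at
    angles tA, tB agree with probability cos^2(tA - tB), and each
    marginal outcome is uniform."""
    delta = ALICE[x] - BOB[y]
    a = rng.randint(0, 1)
    same = rng.random() < math.cos(delta) ** 2
    return a, (a if same else 1 - a)

rng = random.Random(0)
n, wins = 200_000, 0
for _ in range(n):
    x, y = rng.randint(0, 1), rng.randint(0, 1)
    a, b = quantum_round(x, y, rng)
    wins += ((a + b) % 2 == x * y)
print("quantum success ~", wins / n)      # ~ cos^2(pi/8) ~ 0.854

# Classical bound: shared randomness is a convex mix of deterministic
# strategies, so it suffices to brute-force all 16 of them.
best = max(
    sum(((fa[x] + fb[y]) % 2 == x * y) for x in (0, 1) for y in (0, 1))
    for fa in itertools.product((0, 1), repeat=2)
    for fb in itertools.product((0, 1), repeat=2)
) / 4
print("best classical =", best)           # 0.75
```

The first line prints roughly 0.854 and the second exactly 0.75, which is the gap the whole certification idea rests on.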
Now, you generate these inputs x_i, y_i not truly at random, but in some pseudo-random way, and then you test whether the outputs satisfy the CHSH condition. If they pass that test, then there are Θ(n) bits of min-entropy in the output. This means that you can use an extractor to convert this output into Θ(n) bits which are pretty close to truly random.

So now you can ask: can you do much better? What we can show is that there's a protocol which uses O(log n · log(1/ε)) truly random bits to generate the inputs, such that if a simple CHSH-based test, some variant on the CHSH test, is passed, then the output must have Θ(n) bits of smooth min-entropy. Therefore an extractor can be used to get Θ(n) bits which are ε-close to truly random. So if you want ε to be 1/poly(n), you need about log^2 n truly random bits.

OK, so now you can ask: how convincing is the claim that the output is random? What's the model in which one certifies this? Here's the model. The output is certifiably random provided, first, that the outputs pass that simple statistical test, this CHSH-like test with some counting and so on. The other thing you need is that no-signaling is satisfied: the two boxes really cannot talk to each other. For example, you just shield them from each other; or, if you want to be really certain, if you want to base it on physics in principle, you make sure that you do these measurements so close together in time that light didn't have a chance to go from one box to the other. Now, you don't have to impose this no-signaling condition on the entire sequence; it's only needed for local pieces. While you're doing these local measurements, the boxes cannot signal for a little while, and then again for a little while; between these segments, the boxes can talk to each other if they want to.

The important thing is that to believe the output is random, you don't have to believe in quantum mechanics at all. Quantum mechanics is required only to produce these correlations. What we are saying is: classically, you won't ever see these conditions satisfied; but if you do see them satisfied, you don't need quantum mechanics to conclude that there was randomness. The conditions themselves show you that the output was random. Basically, you have to believe relativity, and you have to believe the simple statistical test. You can be a quantum skeptic; Einstein would have believed this.

So now you can go on and ask: that's all very well, you get randomness, but is it also secure against quantum adversaries? Remember, if it's a cryptographic protocol you want the randomness for, then it's not enough for it to be random; it also has to be secure, so nobody else should be able to guess that randomness. So now, through cryptographic eyes, you look at everything adversarially, and you say: whoever made these two boxes could share entanglement with both of them. And so it could be that even though that person cannot prevent the boxes from generating randomness, that person might be able to guess what the output of the boxes is, or guess enough of it to break the security of this random number generator.
OK, so now there's a slightly more involved protocol, which uses slightly more random bits to generate the inputs, together with an enhanced CHSH-based test, such that if the output passes the test, then with high confidence there are Θ(n) bits of smooth min-entropy in the output even conditioned on Eve's knowledge. What that means is that if you use a quantum-proof extractor to take these bits and boil them down to their essence, you get Θ(n) bits that look 1/poly(n)-close to uniform even to Eve. There are a couple of related papers which show something very similar: they show that the original Pironio et al. protocol also achieves security against quantum adversaries, but of course it only expands the input by a quadratic factor. And once you have this kind of protocol, you can compose it: you can take two pairs of boxes which don't communicate with each other, and use them to achieve an exponential expansion.

So let me sketch how the protocol works, starting with the inputs. The question is: how do you design the inputs to these boxes using very little randomness? Here's how we'll do it. We break up the input into blocks of some small size k, on the order of log^2 n, and the bits within each block are all constant: if the block starts with a 0, it's all 0s, and if it starts with a 1, it's all 1s. Moreover, almost all the blocks are all 0s; you almost always give input 0s, except once in a while, where you give random inputs. So there are very few random pairs of blocks, which we call the Bell blocks. For these, you pick a random input bit on each side: say the bit here randomly turned out to be 0, so you make this block all 0s, and the bit there turned out to be 1, so you make that block all 1s. Overall, only about log(1/ε) of the blocks are Bell blocks, out of roughly n/log^2 n blocks in total.

And now what you do is test that the CHSH condition holds in every block. The blocks are long enough that if the boxes are acting as advertised, they'll pass these tests with very high probability. By the way, the idea here is that the boxes are prevented from cheating by the fact that we've interspersed these very few random Bell blocks; those are the check blocks. But since we are checking the output of every block for the CHSH condition, the boxes don't know where we are going to test them, and so they are forced to behave well on every block. Since each block has length about log^2 n, the chance that a block deviates and still passes the test is exponentially small in log^2 n, which is 1/poly(n); and there are fewer than n blocks, so overall the chance that cheating survives the test is 1/poly(n).

So why does this work? Here's how you argue it. Suppose the boxes' output doesn't contain much min-entropy; suppose it's not really that random after all. Well then, there must be some block such that, if you condition on all the outputs that came before it, this particular block is almost completely predictable, at least on input 0. Since most of the blocks have input 0, there must be some block such that, if the input there is 0, the output is almost certainly some determined value, a function of what was output in the previous blocks. More or less.
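Backing up for a moment, here is a minimal sketch of the input-generation step and the per-block test just described. This is my own illustration, not the exact protocol, and the parameter choices (block length, test threshold) are stand-ins.

```python
import math
import random

def generate_inputs(n, eps, rng):
    """Sketch: n input bits per box, in constant blocks of length k ~ log^2 n.
    All blocks are 0-blocks except ~log(1/eps) randomly placed Bell blocks,
    whose constant input bits are fresh random bits. Seed cost: ~log(1/eps)
    block positions at ~log n bits each, plus two bits per Bell block,
    i.e. O(log n * log(1/eps)) truly random bits in total."""
    k = max(1, int(math.log2(n)) ** 2)            # block length (illustrative)
    m = max(1, n // k)                            # number of blocks
    t = min(m, max(1, int(math.log2(1 / eps))))   # number of Bell blocks
    bell = set(rng.sample(range(m), t))
    xs, ys = [], []
    for j in range(m):
        bx = rng.randint(0, 1) if j in bell else 0
        by = rng.randint(0, 1) if j in bell else 0
        xs += [bx] * k
        ys += [by] * k
    return xs, ys, k

def block_passes(xs, ys, as_, bs, threshold=0.80):
    """Check the CHSH condition on one block: the fraction of rounds with
    a XOR b == x*y must clear a threshold strictly between 0.75 and 0.85."""
    wins = sum(((a + b) % 2) == (x * y)
               for x, y, a, b in zip(xs, ys, as_, bs))
    return wins / len(xs) >= threshold
```

The seed accounting is the point of the sketch: the positions of the Bell blocks plus their input bits cost O(log n · log(1/ε)) random bits, while the certified output is Θ(n) bits.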
Now, this predictability claim is one of those statements that's easy to formalize. It's not something you want to do here, because you have to work out some numbers, but it works out. OK, so given this condition, what we'll show is that the existence of such a block violates no-signaling. How do you do that? We'll play a guessing game. Alice and Bob play the guessing game like this. It's the same setup as before: they can't communicate with each other, they share entanglement, Alice gets a random input bit x, Bob gets a random input bit y, and the task is for Alice to guess y. Since they're not allowed to communicate, the maximum chance she has of doing this is a half; so if you can prove that she succeeds with probability bigger than a half, you have a contradiction. And that's what we'll show: there's a contradiction.

So how do they do this? Well, they simulate the protocol until they get to the chosen block. Just before the chosen block, they stop the protocol, and then they exchange all their information. Alice says, I output such and such; Bob tells Alice everything he output. And then they say: OK, now let's start our guessing game. They put up the barriers; they don't talk to each other again. Alice gets x, Bob gets y. Now Alice knows that if Bob got a 0 for his next block, then his output is almost surely b_1 through b_k, some deterministic function of his previous outputs. We're also assuming that the CHSH condition is satisfied in this block, which you can assume with high probability.

So now, since Alice got input x, she just uses x, k times; that's her new block, and she obtains an output a_1 through a_k. Remember why we're doing all this: if they were really playing the CHSH game, this would make no sense. But we're saying, no matter what the boxes are doing, Alice and Bob can simulate them, and we're going to show that there's a contradiction. So Alice inputs x, k times, to obtain her output. And now what she does is this: if the Hamming distance between her output and Bob's predicted output b_1 through b_k is at most 0.2k, she guesses y = 0; otherwise she guesses y = 1. Here 0.2 is just something safely above 0.15, which is 1 minus 0.85.

Now, the claim is that Alice succeeds in the guessing game with probability very close to 3/4. How do we know that? Let's look at the two cases; remember, Alice doesn't know what bit Bob got. First, assume y = 0. In this case, Bob almost surely outputs b_1 through b_k. And because the CHSH condition is satisfied, and x·y = 0 whichever bit Alice got, each of Alice's bits equals the corresponding bit of Bob's with probability about 0.85. So the Hamming distance is going to be about 0.15k, and in both cases, whether Alice got a 0 or a 1, the string she outputs lies within the ball of radius 0.2k around b_1 through b_k; she guesses y = 0, correctly.

Now the other case, y = 1. In this case, Alice has no idea what Bob output; let's just call it some string b'. If Alice's input is 0, then x·y = 0, so her output is still going to be very close to b', matching each bit with probability about 0.85, so within a distance of about 0.15k. But if her input is 1, her output matches each bit of b' with probability close to 0.15, so it's at distance about 0.85k.
So by the triangle inequality, her two possible output strings have to be at distance at least about 0.7k from each other. Now I claim that there's no way that both of them can fit inside the ball of radius 0.2k around b_1 through b_k; they're just too far apart. So at most one of them falls in there, which means that for at least one of the two values of x, Alice's output lands outside the ball and she correctly guesses y = 1. So in the y = 1 case she's correct with probability at least a half, and in the y = 0 case she's correct almost surely; overall she's correct with probability at least about 3/4. And now there's a lot of room to play, because to get the contradiction we just wanted to beat a half. I was very sloppy in this argument: I assumed that Alice could exactly predict Bob's string, and that you were exactly at 0.85. But you can weaken all these parameters, and everything works out.

So now, how do you move on to reasoning about security against quantum adversaries? If you look back at this argument, the crux was: if the boxes' output does not contain much min-entropy, if it's not very random, then there must be a block which is almost deterministic, and then you can exploit it to get an advantage in the guessing game. But in the quantum-adversary situation, we are conceding that the output is random; all we want is to prevent Eve from guessing that randomness. So for a contradiction, we'd have to assume that the output may be completely random, but Eve can predict it, or at least predict too much of it. So it's clear that we can't use the previous approach, because we can no longer say there must be some deterministic block.

OK, so here's the approach instead. We reason based on a particular extractor. We take the boxes' output and run it through a particular extractor that is secure against quantum storage: Trevisan's extractor, instantiated with a particular code, the XOR code. And then we show that if Eve can distinguish the extractor's output from uniform, then Alice can provide a little bit of information to Eve, information available to her from her own output alone, which allows Eve to reconstruct the entire output of the extractor. This is the so-called reconstruction paradigm that Trevisan's extractor follows. Except that in the quantum setting, the reconstruction has to do a measurement to get each bit of the extractor's output, and a measurement ruins Eve's state, so it's not clear that you can get the next bit. These particular extractors turn out to have the right properties so that you can actually carry all this out. But once you do, we are back in business, and we can use the guessing game to derive a contradiction.

Now, the interesting fact is that this particular extractor was used only for the purposes of the proof. Because once you prove that the output of this extractor is indistinguishable from random, you've actually proved that the boxes' output, even conditioned on Eve's knowledge, has a lot of min-entropy.
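As a sanity check on the guessing-game analysis, here is a toy simulation of my own; the per-bit disagreement rates are stand-ins encoding the adversary's most confusing configuration (when y = 1, the predicted string is aligned with Alice's x = 0 output), so all it confirms is the 3/4 arithmetic above.

```python
import random

def hamming_distance_sample(k, p, rng):
    """Sample a Hamming distance distributed as Binomial(k, p)."""
    return sum(rng.random() < p for _ in range(k))

def one_game(k, rng):
    """One round of the guessing game under a toy worst-case model.
    Per-bit disagreement between Alice's block and the predicted string:
      y = 0 (either x): ~0.15, since CHSH agreement is ~0.85;
      y = 1, x = 0:     ~0.15, the confusing case where Alice's output
                        still lands inside the 0.2k ball;
      y = 1, x = 1:     ~0.70, since by the triangle inequality the two
                        possible outputs are ~0.7k apart, so this one
                        must land outside the ball."""
    x, y = rng.randint(0, 1), rng.randint(0, 1)
    q = 0.15 if (y == 0 or x == 0) else 0.70
    dist = hamming_distance_sample(k, q, rng)
    guess = 0 if dist <= 0.2 * k else 1
    return guess == y

rng = random.Random(1)
k, trials = 400, 10_000
wins = sum(one_game(k, rng) for _ in range(trials))
print(wins / trials)   # ~0.75, safely above the no-signaling bound of 1/2
```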
And once you know that there's a lot of min-entropy, you can use any extractor you want to get that randomness out.

I guess one could pose a lot of open questions. In terms of the usual kind, maybe the only one I'd mention is this: as I said, at least in the simple setting, the number of random bits you need to make all this work is log n times log(1/ε). Now, if you think about it for a little while, it's clear that log n random bits are necessary. If you're going to do any kind of testing and you have no randomness at all, then the manufacturer of the boxes knows exactly what testing you're going to do, and the boxes can tell you exactly what you want to hear. So clearly you need some randomness, and if you work through it, it's clear that you need log n bits of randomness. Whether you need the log(1/ε) factor is not clear; so that's one of the questions.

But more generally, getting away from the parameters and these kinds of open questions, I think there's something more interesting at play here, and I don't know quite what it is, but there's some interesting interplay between entanglement, randomness, and no-signaling. Just as the CHSH game provided new insights into quantum mechanics and locality, I think there's some kind of insight to be derived by looking at this further, not in exactly this way, but in some imaginative way. Thank you.