a colloquium series named in honor of our late colleague, Rajeev Motwani. So, the Motwani Distinguished Lectures. They're designed to be theoretically oriented, so they're aimed at a theory audience, but broadly so. For example, I tell the speakers to imagine they're giving a keynote talk at a conference like STOC or SODA. And so we get fantastic speakers, and this quarter is no exception. So I'm very happy to introduce Umesh Vazirani, who's the Roger A. Strauch Professor at UC Berkeley. He's also the director of the center for quantum computation there at Berkeley. He's really one of the intellectual founders of quantum computation, and that's part of what we'll be talking about today. He's also done seminal work in the foundations of randomness, which I guess we'll also hear about today. And I should say one thing: I am sort of in awe of Umesh's advising record with PhD students, it's just unreal. Just to pick three names at random: Sanjeev Arora, Madhu Sudan, and Scott Aaronson are all PhD students of his. So today he'll talk about certifiable quantum dice. Umesh. Oh, thanks. Thank you. Thank you. It's a real honor to be here, and especially with this Motwani Memorial lecture; I guess you remember him fondly. So, OK. So, you know, when I prepared this, I was thinking about how to give this talk, and I realized that maybe I should try an experiment and show you the beginning and the end of the talk, and then we'll get to the middle. OK, so, you know, often Dilbert is a good place to start, because you can get right to the core of what you're going to speak about. So that's really the topic of the talk. And to get to the end of the talk, basically, that's where we are trying to get to. OK, so let's start. OK, so this is the question we are trying to get at.
So let's say you have a set of dice, and you want to know whether they are really good dice, whether the outcome is really random. How do you certify it? So if you know that the dice are memory-less, so, you know, they're six-sided, it's not a big deal, you just toss them many, many, many times and then collect samples and check that you see one through six, you know, equally likely. But now, what happens if you have a random number generator? So it's a box which outputs for you 10,000 bits, which are supposed to be random, OK? So, and you don't know what's in this box, right? So it could have memory, so you can't collect statistics because, for example, the output could be 01010101 and has the right statistics, so what would you do? So let me try to say this question a little more precisely, right? So let's say that, OK, here are the rules of the game. You can provide the specifications for what goes inside the box and then you hand them over to somebody who's going to implement the box for you. This person can completely ignore your specifications and give you back anything. Then you get to use the box only once and you can perform any test that you want on the output and your test should satisfy these two guarantees. So if the box was manufactured according to specifications, then your test should accept the output with very high probability, almost always. On the other hand, no matter what, if it passes your test, if the output passes your test, then the output must indeed be statistically close to random, even no matter who manufactured your box, even if an adversary did. OK, so this is sort of what we would understand to be certifying that the output is random. OK, so random number generators, they go back to the very first commercially available computer. There was a project to build a physical random number generator associated with it. Actually, Alan Turing was involved in that project. The project was eventually unsuccessful. 
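The memoryless-dice check described above, and why it breaks down for a box with memory, can be sketched in a few lines (an editor-added toy illustration; the sample sizes, tolerance, and function name are arbitrary choices, not from the talk):

```python
import random
from collections import Counter

def looks_uniform(samples, sides=6, tolerance=0.02):
    """Frequency test: every face appears with frequency near 1/sides.
    Only meaningful for memoryless sources; a box with memory can emit
    a fixed pattern with perfect statistics and still be predictable."""
    counts = Counter(samples)
    n = len(samples)
    return all(abs(counts[face] / n - 1 / sides) < tolerance
               for face in range(1, sides + 1))

rolls = [random.randint(1, 6) for _ in range(200_000)]
print(looks_uniform(rolls))      # a fair die passes

patterned = [1, 2, 3, 4, 5, 6] * 30_000
print(looks_uniform(patterned))  # fully predictable, yet also passes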
So it's not such an easy problem to design true random number generators. Actually, very recently, a few months ago, Intel announced that they have an on-chip random number generator. OK, so let me just make a few observations before we go on. This whole notion of what is randomness, this is a very big topic. But, to begin with, random bit sequences are a basic resource in a number of areas: cryptography, game-theoretic protocols, physical simulation algorithms. And actually, I believe that before the computing world changed (most cycles today probably go towards mobile phones and so on), at some point the majority of computer cycles went towards physical simulation. So the point is that the quality of randomness you need in these different applications is quite different, because in cryptography you don't just want the output to be random, you also want it to be secure against adversaries. So that's a very different notion. Then there's this whole question about what randomness is in the first place. If you're working within classical physics, this is a very difficult, even philosophical problem, because after all, classical physics is deterministic. So when you say something is random, you mean either a lack of knowledge, like in dice, where if you could figure out the initial configuration exactly and the forces exactly, you'd be able to predict the outcome. Or a lack of computational power. So there are many discussions of this in terms of coin flipping or computational randomness, algorithmic randomness like Kolmogorov complexity, etc. Or pseudorandom generators, where it's a lack of knowledge or computing power. Okay, so there was an approach to this entire subject that was taken in CS in the mid 80s, which said something like this. Assume that you have a physical source of randomness; assume that randomness is whatever it is, we are not going to get into these philosophical questions about what randomness is.
We'll just assume that since we are trying to build random number generators, we can at least build generators which output some randomness. And it's low-quality randomness; there are going to be correlations and so on. Maybe there's an adversary who is in charge of doling out the randomness a little bit at a time. And now, can we use it to generate real randomness? And so there were two kinds of results that were proved. One is that if you have multiple non-communicating sources, like two non-communicating sources, how do you take the outputs, even though each one by itself has only this correlated small amount of randomness leaking out, and combine this randomness to get close to uniformly random bit strings? And the other kind of sequence of results said something like this. If you had such a source of randomness which leaked a small amount of randomness in terms of min-entropy, how do you convert its output into a close-to-uniform distribution by feeding in just a small number of truly random bits? So say you wanted to output n random bits; you were allowed to sample big-O of n, or some larger number, of random bits from the source. And then you feed in log n truly random bits and massage these low-quality random bits into close-to-truly-random bits. Okay, so classically this seemed like the end of the road, right? Because clearly you have to assume something about the randomness output by the source. And these results seem to make minimal assumptions about the source of randomness. So it seemed intuitively clear that there's nowhere else to go. And the surprising thing is that quantum mechanics allows you to get past all this. So let me just give you a sense of what our goal is, what we are trying to get at. So here's what we want to achieve, something like this. We have some quantum boxes, which are going to generate some sort of outputs. We'll assume that these two boxes cannot talk to each other.
They cannot communicate with each other, maybe because of speed-of-light considerations or insulation, whatever. We are going to feed in some small number of truly random bits as a seed. And we'll use these to massage the output into n bits, which are epsilon-close to the uniform distribution in total variation distance. But moreover, we'll be able to perform tests on the output. And we'll be able to certify that this particular output was actually a sample from some distribution which is epsilon-close to the uniform distribution, right? Without making any assumptions about what these black boxes look like. So another way you can think about this is that what this allows you to do is give you an implementation of a weak random source in a certifiable way. So you don't have to trust the person who built the box. You don't have to trust anybody. You do your own testing, and you do your own testing only on that particular output that's being generated. This should strike you as being patently impossible. But this is what quantum mechanics allows. It's a very strange aspect. Okay, so let me just... Would the test itself be probabilistic? Yes, yes, it's a probabilistic test, that's fine. Well, yeah, so in fact, one can show that some number of random bits is necessary. Right, so if you were going to perform a deterministic test, then... But don't you need randomness beyond the output to test the string? Yeah, yeah, so these random bits are going to be used both to massage the output as well as to test. And you cannot possibly perform a deterministic test, because if you announce your test in advance, then of course it's easy for somebody to manufacture the boxes in such a way that they pass the test with a fixed string. So that's why at least log 1 over epsilon random bits are going to be necessary. So I hope that this talk is also an excuse for you to learn, if you don't know already, just a tiny amount about quantum computing.
Okay, so where the tiny amount might be just a little more than this slide. Okay, but you have to start somewhere. So, okay, so first what's a qubit? You know, a qubit is, for instance, the state of this electron in a hydrogen atom, where it can be either in the ground state or the excited state. So if it was classical, you'd represent, say, 0 by the ground state and 1 by excited state. Now what quantum mechanics says is that in general, the state of the electron is going to be a superposition of ground and excited. And the way we write it is as alpha 0 plus beta 1, where alpha and beta are complex numbers. Okay, there's also the normalization condition that alpha squared plus beta squared is 1, which means that the state is a unit vector in this two-dimensional complex vector space. When you measure, you see 0 with probability alpha squared, 1 with probability beta squared, so hence the normalization. And moreover, when you measure, and if the output is 0, then actually the state of the system changes, and it's no longer psi, but it actually gets projected onto 0. So the new state is going to be whatever your measurement outcome is. So now, given all this, there's an easy recipe for generating randomness. And that's one of the points about quantum mechanics that it provides you with an intrinsic source of randomness through this Born rule or this rule of measurement of quantum states. So clearly, if you wanted to generate a random bit, what you would do is create this superposition, a superposition of 0 and 1, which is easy to do, at least in principle. And you'd measure, and there it is, you have a random bit, okay? The question here though is, this would be not a certifiable way of generating the randomness because you'd have to trust the experimentalist who actually implemented this for you. And you'd also have to trust that even if you trusted the experimentalist, you'd also have to trust that you don't have to keep recalibrating your apparatus and so on. 
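The Born-rule recipe just described (prepare the equal superposition, measure, read off a bit) can be simulated classically; this editor-added sketch only illustrates the probability rule, and of course a simulation is precisely the kind of thing you would have to trust rather than certify:

```python
import random
from math import sqrt

def measure(alpha, beta):
    """Measure the qubit alpha|0> + beta|1> in the standard basis.
    Born rule: outcome 0 with probability |alpha|^2, else 1."""
    assert abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1) < 1e-9  # unit vector
    return 0 if random.random() < abs(alpha) ** 2 else 1

# Equal superposition (|0> + |1>)/sqrt(2): a fair coin flip, in principle.
bits = [measure(1 / sqrt(2), 1 / sqrt(2)) for _ in range(100_000)]
print(sum(bits) / len(bits))  # should be close to 0.5
```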
Okay, so now, how do we actually start moving towards certifiable randomness? The key to this is something called quantum entanglement. And entanglement is a really fundamental aspect of quantum mechanics. It goes back to the very famous EPR paradox, which we'll talk about in a bit. But there's this very interesting quote I found from Erwin Schrödinger, from a paper that he wrote as a follow-up to the EPR paper, which sounds extremely modern, you know? This is the way we think of entanglement today. But it's really surprising that anybody thought of it this way in 1935. And then it seems people forgot about all this for over 50 years. So he says that entanglement is not one but rather the characteristic trait of quantum mechanics, the most quantum of its features. Now it turns out that, about 30 years later... so this thought experiment, the EPR paradox, was Einstein's way of challenging quantum mechanics, which he never liked. He didn't like the fact that it was probabilistic, and he was looking for a deterministic analog of quantum mechanics, a so-called local hidden variable theory. And he spent his last days trying to find such a deterministic analog. But then about 30 years later, John Bell showed that you could take this EPR thought experiment, and if you analyzed it correctly and came up with the right variant on it, you could actually come up with an experiment that distinguished between quantum mechanics and any local hidden variable theory, any of these deterministic theories that Einstein was trying to formulate. So in some sense he was saying: the kind of theory that Einstein wanted sits in, for our purposes, the following computational model, whereas quantum mechanics does not lie in that class. And so let me describe to you what this experiment is, or actually a variant of this experiment that's much easier to understand.
And which has actually been carried out in practice many times. And it's always given results which are compatible with quantum mechanics and incompatible with these local hidden variable theories. So this variant is called the CHSH game. It's some sort of, if you want, it's like a communication complexity game. Except that it's communication complexity with two players, inputs, outputs, and you want to say that the communication complexity is zero. So there's no communication between Alice and Bob in this game. Okay, so the game is played like this. Both Alice and Bob, each of them gets a random bit, X and Y, uniform 50, 50, zero, one. And they're supposed to output a bit each, A and B. What they're trying to do is, they're trying to satisfy this condition. X times Y equal to A plus B mod 2. So it's clear, right? So in three of the four cases, X times Y is zero. And they can achieve this by each outputting zero. So that's the easy part. And now you can convince yourself quite easily that classically it's impossible to do any better than 0.75. So the crux of the matter is that if Alice and Bob are allowed to share entangled qubits, then they can achieve success probability of cosine squared pi by 8, which is 0.85. So what are these entangled qubits? So basically, here's an experiment which says classically and basically any hidden variable theory, any local hidden variable theory, any theory in a certain class. You can show that the maximum success probability is 0.75. Quantum mechanics gives you a much higher success probability. And this is something that you can actually, an experiment you can actually perform. In fact, you can perform it, at this point, there's a student in my class who just performed it and got a standard deviation over the classical bound. So okay, so what's the, what does it mean Alice and Bob share entangled qubits? So by that, what we mean is, let's say that there are two qubits, right? 
One that Alice has, one that Bob has, and now the joint state is this, is an equal superposition of 0, 0, and 1, 1. So either both qubits are in the state 0, or both are in state 1, and with equal amplitude. So, so if you were to measure the first qubit, then you see 0 or 1 with equal probability. But now the rules of measurement tell you that if you saw the first qubit as a 0, the second qubit, if you measure it, you'll see the same outcome, 0, right? If you see the first one as 1, you'll see the second one as 1, okay? And this doesn't, doesn't depend upon how close the two qubits are. If they are entangled, then you separate them way to being very distant from each other, the same result holds. So it doesn't matter which one you measure first, you know, in what order, etc. You'll always get the same outcome. Okay, so so far this is not distinguishable from just sharing randomness. So here's what makes this really different, okay? So instead of measuring the qubit in this 0, 1 basis, you can also measure it in any basis. You can rotate the basis in which you're measuring the qubit, okay? And so now what you have, your qubit is a state, right? And if you're measuring it in some rotated basis, the probability you get the outcome u or u perp is just cosine square theta, sine square theta, where theta is that angle, okay? So now what this says is that it doesn't matter, you know, it doesn't matter what basis you're measuring the left qubit and the right qubit, as long as it's the same one, you'll, you know, the left and the right bases are the same, same. You'll always get the same outcome on both sides, right? Not just in the standard basis, but in any rotated basis as well. Also, if you measure in the u, u perp basis on the left and the v, v perp basis on the right, and there's an angle theta between u and v, then the chance that you get matching outcomes. So, you know, on the two sides, that you get u on this side and v on that side is cosine square theta, right? 
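The classical 0.75 bound mentioned a moment ago is easy to verify by brute force: a shared-randomness strategy is just a mixture of deterministic strategies, so it suffices to enumerate all 16 pairs of deterministic answer functions a(x), b(y). This is an editor-added check, not part of the talk:

```python
from itertools import product

def chsh_success(a_strategy, b_strategy):
    """Average success of deterministic strategies a(x), b(y) over the
    four equally likely input pairs: win iff a(x) + b(y) = x*y (mod 2)."""
    wins = sum((a_strategy[x] + b_strategy[y]) % 2 == x * y
               for x, y in product((0, 1), repeat=2))
    return wins / 4

# All 16 pairs of deterministic strategies; shared randomness is a
# mixture of these, so it cannot do better either.
best = max(chsh_success(a, b)
           for a in product((0, 1), repeat=2)
           for b in product((0, 1), repeat=2))
print(best)  # 0.75
```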
Okay, so this is where quantum mechanics deviates; there's just no classical analog to this. And this was the basis of Bell's inequality. Imagine that. So how does this become a difference from the classical? Because, you know, you could use the same example: left shoe, right shoe, chosen randomly, and then do an imperfect measurement. Like, you know, so let's say I take my left shoe in one box, right shoe in another box, choose one of them at random, and then we take them far away. Yeah. And then both of us do some imperfect measurement of which one it is. Well, it's, yeah, it's the nature of the correlations. So let me tell you in just a moment. Okay. So, you know, if it was in one basis, then you could just say: you flipped the coin in advance when the two particles were together, and then each of them remembers, you know, zero, okay. So they are far apart, and when you make a measurement, they just report that value. But now what we're saying is they not only have to know this outcome for this basis, they have to know the outcome for every basis, right? And moreover, when you do the measurement, if these particles are far apart, this particle doesn't know in which basis you're measuring that one. Okay, so they have to coordinate without knowing which question you asked the other one. And the claim is that quantumly they can achieve this kind of correlation. And what we'll see is that for the CHSH game, this gives you an outcome which you just cannot explain classically. Right? The correlation is very different. You need to get special shoes for this. I'm sorry? You need to get good shoes for this. Yeah, you need these exponentially large shoes, right?
Yeah, your Hilbert space has to be very different or something, you know. Okay, so how do you play the CHSH game quantumly? Well, here's how you do it. Alice does this: if her input bit is zero, she measures in the standard basis, the zero-one basis. If it's one, she measures in the pi-by-four rotated basis. If Bob's input is zero, he measures in the pi-by-eight rotated basis, and if one, the minus-pi-by-eight rotated basis. So why does this work? Well, you can check that in three of the four cases, where the answer is supposed to be zero, the angle between Alice's choice and Bob's choice is exactly pi by eight, okay? There's only one case, when x equals one and y equals one, where the angle is actually pi by two minus pi by eight. But there they're also supposed to announce opposite answers, and the probability of that is one minus cosine squared of pi by two minus pi by eight, which is exactly cosine squared pi by eight. So you get success probability equal to cosine squared pi by eight in each of the four cases. So if you think about it, what quantum mechanics is doing is implementing the optimal SDP solution to this problem. It's a vector solution. This is what quantum mechanics does, okay? And actually, there's this beautiful proof by Tsirelson which shows that this SDP solution is optimal, that there's no better thing you can do. So quantumly, you achieve the SDP optimum. Yeah, it keeps showing up in many, many places. And in fact, there's really this feeling in the air that it could well be that if you work with SDPs, at some point you're going to have to learn about quantum mechanics. Okay, so now there was this... actually, maybe I'll go forward there.
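The quantum strategy just described can be checked numerically using only the cos²θ matching rule for the EPR pair (an editor-added sketch that computes the probabilities directly rather than simulating measurements; the variable names are illustrative):

```python
from math import cos, pi

alice_angle = {0: 0.0, 1: pi / 4}    # Alice's basis angle for input x
bob_angle = {0: pi / 8, 1: -pi / 8}  # Bob's basis angle for input y

def win_probability(x, y):
    """P[a + b = x*y (mod 2)] on the EPR pair: outcomes match with
    probability cos^2(theta), theta the angle between the two bases."""
    p_match = cos(alice_angle[x] - bob_angle[y]) ** 2
    return p_match if x * y == 0 else 1 - p_match  # (1,1) wants a != b

avg = sum(win_probability(x, y) for x in (0, 1) for y in (0, 1)) / 4
print(avg)  # cos^2(pi/8), about 0.8536
```

Each of the four cases individually succeeds with probability cos²(π/8), matching the case analysis in the talk.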
So there's this area of work called device-independent quantum cryptography, which works on quantum key distribution, where you try to share a random key between two distant players. But you want to create protocols where you don't really trust the devices that are doing the cryptography. And people had been working on this for a long time. But then a few years ago, Roger Colbeck made an observation. It was a very simple observation, but I think a beautiful one. He said: well, suppose that you do the CHSH test. And suppose that you're actually sure that the probability that the CHSH condition is met is strictly bigger than 0.75. Then the output of each player cannot be deterministic. Because remember, in any local deterministic theory, we said, you can achieve at most 0.75. So if you exceed 0.75, you must have some randomness in the output. So this would seem to indicate that you should be able to test for randomness by testing for quantumness, right? But there's a problem, which is that in order to do this test for randomness, you only generate one random bit. Maybe two, if you think of A and B as independent, but we have no reason to believe that. But on the other hand, we have to input two random bits to make all this happen, so it seems like a bit of a losing proposition. Okay, so there was a paper by Pironio et al. It actually appeared in Nature in 2010, and it gave a scheme by which you could take square root of n random bits and use them to come up with n inputs to the two boxes. So these inputs are not uniformly random, but they are such that the output of these boxes must have order n bits of min-entropy.
So there are order n bits of randomness hiding in the output. And so now you could use an extractor to convert the output into order n bits which are very close to being truly random bits. So you could get roughly a constant times n bits of randomness by inputting just square root of n bits, okay? So what we do is, well, there are two issues. One was: can you actually expand the randomness by an exponential factor? So that's the first thing that we do, where you start with roughly log n times log 1 over epsilon random bits, and you end up with order n bits which are close to random. And there you can make sure that the output is epsilon-close to truly random bits. So depending upon epsilon, you can choose the number of input random bits appropriately. And to do this, you have to use a variant on the CHSH test, which I'll describe to you. It's a small variant. Okay. So let me say in what sense we certify it. Okay, so this is actually an interesting question. The point is, you get these random bits, and moreover they are certified to be random. So let's see what certified means. So the output is certifiably random provided three conditions are met. The first is that the small random seed you started from is truly random; there's no getting away from that. The second is that the outputs pass a simple statistical test, which is built out of the CHSH test. And the third is that there can be no signaling between the two boxes, okay. Though, as I'll show you when I show you the protocol, this no-signaling condition doesn't have to be maintained throughout the protocol.
You can actually break the protocol down into short phases. And within each phase, there must be no signaling. But between phases, you can allow the boxes to communicate. So only for short bursts of time must you make sure that there's no signaling between the two boxes, okay. But then at the end of this process, you end up with a proof that the output must be close to uniformly random, where the proof does not in any way depend upon the correctness of quantum mechanics. So it doesn't matter how you violated this 0.75 bound, right; it doesn't have to be through quantum mechanics. Quantum mechanics gives a way of achieving bigger than 0.75 success probability in this CHSH game. But if reality is different, and it gives you bigger than 0.75 some other way, then the output is guaranteed to be random nonetheless. Is quantum mechanics the only way you know how to beat 0.75? Absolutely. Absolutely. Okay, but then you can ask a different question. You can ask: well, the output is random, but what if you were using it for cryptography? If you use the randomness output by this protocol to do cryptography, would your protocol be secure? So there the question is really: can somebody else guess that randomness? Would they have any better chance than one over two to the n? So in particular, here's the worry. The manufacturer of the box, Eve, actually entangles herself with both of these boxes. So she manufactures these boxes, she builds in the entanglement, but then she entangles herself with these two boxes. So now, everything we said so far is correct. The output is still random, and it's provably random. But it could be that Eve might be able to reconstruct that randomness exactly.
Right, so it's no use for cryptographic purposes. Okay, so what we can show is that there's a souped-up protocol, which now requires more random bits as input, log cubed n, and you have to change the test even more. And what you get as a result is that if you apply a quantum extractor, you can get order n bits which look one-over-poly-n close to uniform, even to Eve. So they are really random in the cryptographic sense. But is Eve restricted to classical computation? Eve can, you know... let's say that this is all information-theoretic, right? So Eve can have infinite computing power, but she is also quantum, right? She can store quantum information, yeah? But Eve is not the algorithm? No, no, Eve is an adversary. Sorry, so maybe I should have put the box up here, right? So let's say the box was up here, and Eve is outside of the box and... So Eve is not the algorithm that applies this, is it? No, no, no, Eve is... sorry, okay. So let's not look at that diagram too closely. Eve is the evil manufacturer of the box. You gave the specifications to her. She made the boxes for you. Okay, and now you are trustingly using those boxes to generate randomness, and... It's an entity that is arbitrarily entangled with A and B. Right, right, and Eve is sitting in her own lab sort of figuring out what your randomness must be, right, so. Okay, so there are some related results. So it turns out that the Pironio et al. paper from 2010 actually had quite a subtle bug in it. And so there were two papers that both fixed that bug recently and gave a rigorous proof. And they also showed in the process that this protocol is composable.
So what this allowed them to show is that you can get this exponential expansion using only log n, or maybe log squared n, bits as seed to generate n bits of randomness. But now you have to use four non-signaling boxes, four independent boxes. So early on, when they first came up with this, they actually thought that they could also deal with the cryptographic case, you know, the entangled Eve. But it turns out that, again, there are these very subtle issues that creep into their proof, and it doesn't seem to work. So, okay, I'll give you a very small sketch of what we do. We actually have to jump through some hoops to make it all work. And this sort of gives us a little more confidence that maybe those hoops are necessary, and it's not just because we picked the first thing that came to mind. Okay, so here's what I'm going to do for the rest. I'll describe to you the simpler protocol, and then I'll just give you a one-slide sketch of the main ideas behind the more complex protocol. Yeah. Just another question. So you said that you have two or four, whatever, different boxes, and in order to succeed, these boxes need to share entanglement. Yeah. So where's this entanglement coming from? Right. Does it need to be refreshed? So that's, you know, whatever is done. It depends; if you can store entanglement, maybe you have it set up already. If it has to be communicated, there are ways of communicating entanglement and then distilling it. So as I'll show you in this proof, you can allow the two boxes to communicate with each other between rounds.
And as long as within each round there's no communication, everything goes through. Yeah, so if they run out of entanglement, they could communicate and build up more, and then you could do this again, right? Okay. Okay, so here's what we've got to figure out to make all this work: what do these inputs, x1 through xn and y1 through yn, look like? So here's what they look like. They're going to be divided up into blocks of length k, where k is order log n plus log 1 over epsilon, okay? And the bits within each block are equal. So each block consists of either all zeros or all ones. And moreover, almost all the blocks are zero blocks, except once in a while, when we have a pair of corresponding blocks which are called Bell blocks. And then whether they are zero blocks or one blocks is chosen uniformly at random, okay? So there are very few Bell blocks, only order log 1 over epsilon, where the total number of blocks is order n times log 1 over epsilon. And what we're going to do is feed these in. So we'll use our small number of random bits to generate this string. And from then on, the inputs are not adaptive, so we just keep feeding them in and collecting the outputs. And we're going to test that for each block, the outputs satisfy the CHSH condition to whatever degree we want to test it. So if our goal is to make sure that the correlation is at least 0.8, right, we'll make sure that that's true within each block. And we've chosen the parameters so that if you're really playing quantumly, then you satisfy these conditions in each block with very high probability. Okay, so now how do we show that the output of this protocol is really random, right?
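The block structure of the inputs described above can be sketched as follows (an editor-added illustration; the function name and the concrete parameters are made up for the example, and the real protocol's parameter choices are as stated in the talk, not these):

```python
import random

def make_inputs(num_blocks, k, num_bell_blocks, rng=random):
    """Sketch of the input structure: num_blocks blocks of k equal bits
    per player, almost all zero blocks, with a few randomly placed Bell
    blocks whose value (all-0s or all-1s) is uniform and independent on
    the two sides."""
    bell_positions = set(rng.sample(range(num_blocks), num_bell_blocks))
    x_blocks, y_blocks = [], []
    for i in range(num_blocks):
        if i in bell_positions:
            x_blocks.append([rng.randint(0, 1)] * k)  # Alice's random value
            y_blocks.append([rng.randint(0, 1)] * k)  # independent for Bob
        else:
            x_blocks.append([0] * k)  # ordinary all-zero block
            y_blocks.append([0] * k)
    return x_blocks, y_blocks, bell_positions

x_blocks, y_blocks, bells = make_inputs(20, 4, 3, random.Random(0))
```

Note that only the Bell block positions and values consume seed randomness, which is where the small seed length comes from.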
So we want to look at Bob's output, B, and we want to say: this must be a really random output. So remember what we are doing. Most of the time, as even the two players know, most blocks are going to consist of all zeros. And so if they knew a block was all zeros, they could just output zeros — no randomness. But every so often, very infrequently, we check that they're doing the right thing by using a Bell block, and they don't know where the Bell blocks occur. So what we're going to show is that because they don't know, they are forced to be honest on all the blocks. Within a Bell block the bits are all equal to each other, but which bit it is, is chosen at random — 50% zero, 50% one — independently in X and Y. So if Alice sees a block of ones, she knows that this is a Bell block. But she doesn't know whether Bob knows that it's a Bell block. And so if she tries to switch strategy because it's a Bell block, she doesn't know whether Bob will react to it or not. So that's the coordination problem, and now we want to show that it really is a coordination problem — that there's no way they can coordinate. Okay, so here's how we'll show that. We'll start by saying: suppose B's output does not contain much min-entropy. There's not much randomness in B's output, so that string doesn't have order n bits of randomness. Then what we can show is that there must be some block where, conditioned on what came before, that block of bits is almost completely deterministic, right?
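As a rough sketch of that last step — using Shannon entropy as a stand-in for the smoothed min-entropy bookkeeping in the actual proof, where the chain rule is more delicate — with B split into m blocks:

```latex
\[
  H(B) \;=\; \sum_{i=1}^{m} H\!\left(B_i \,\middle|\, B_{<i}\right),
  \qquad\text{so}\qquad
  H(B) \le \delta n \;\Longrightarrow\; \exists\, i:\;
  H\!\left(B_i \,\middle|\, B_{<i}\right) \le \frac{\delta n}{m}.
\]
```

A block with tiny conditional entropy is, for a typical setting of the past blocks, almost completely determined by them — and that is the block the guessing game will be played on.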
So if there's not much randomness in this output, then there must be some block which is almost completely determined by what came in previous blocks. Now we're going to use this block to show that there must be signaling between the two boxes. And to do this, we define a guessing game. The guessing game goes like this — it's sort of a trivial game, a very silly game if you weren't using it in a proof. Alice and Bob share entanglement. Alice and Bob get random inputs: Alice gets x, which is 50-50 zero or one, and Bob gets y, which is 50-50 zero or one, independently. And then Alice is asked to guess Bob's input. I mean, it's a silly game, but that's what we need to use. And it's clear, right, that Alice cannot guess with probability different from 50%, higher or lower — in the absence of signaling, there's nothing you can do. So now what we're going to do is show that if there wasn't much randomness in the output, so there was this block with low conditional entropy — almost deterministic given the past outputs — then they can use that block to actually signal each other. So they violate no-signaling. Here's how they will proceed. To set it up, let's assume that Bob's output on input zero in this particular block is almost deterministic, and let's call it b0. So we look at all the previous outputs, and given those previous outputs, we know that if in this chosen block Bob gets input zeros, then his output must be b0, which is this block, b1 through bk.
We'll assume that the CHSH condition is satisfied everywhere for the blocks which we were checking. Okay, so here's what we did. We simulated the protocol until we got to the chosen block — both of them simulated it. Just before they got to the chosen block, they got together and talked to each other, and Bob said, I output such and such so far; he tells Alice, and then they separate again and are not allowed to communicate. So now Alice knows that Bob's output on input zero for the next block is supposed to be b0; she knows b1 through bk. She inputs x, x, ..., x — whatever her input in the guessing game is — into the protocol, and she sees what her output is; let's say it's a1 through ak. So she just checks whether the relative Hamming distance between a and b0 is less than 0.2 or greater than 0.2. If it's less, she guesses that y must be zero, and otherwise one. And I claim that this strategy allows Alice to guess Bob's input with probability greater than 50%. In fact, let me show you that if we assume they have saturated the CHSH bound at 0.85, then Alice succeeds with probability three quarters. So here's how we prove it. There are two cases: Bob's input is either zero or one. Let's say his input is zero. In this case, for the CHSH condition to hold, we want that for each bit, ai plus bi mod 2 equals xi times yi, which is zero here, meaning ai equal to bi, right? So we expect the a string to be exactly equal to the b string. Well, not exactly — it should be equal 85% of the time.
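Alice's decision rule is simple enough to write down directly. This is an illustrative sketch — the function names are mine, not from the paper:

```python
def rel_hamming(u, v):
    """Relative Hamming distance between two equal-length bit strings."""
    return sum(a != b for a, b in zip(u, v)) / len(u)

def alice_guess(a_block, b0_block, threshold=0.2):
    """Alice's rule in the guessing game: if her output block lies within
    relative Hamming distance `threshold` of Bob's predicted input-0
    output b0, guess y = 0; otherwise guess y = 1."""
    return 0 if rel_hamming(a_block, b0_block) < threshold else 1
```

The point of the 0.2 radius, as the next step of the argument shows, is that two candidate strings at relative distance at least 0.7 cannot both sit inside the radius-0.2 ball around b0 (by the triangle inequality, 0.2 + 0.2 < 0.7).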
So the relative Hamming distance between the a string and the b string should be 0.15. Therefore, if you draw a Hamming ball around b0 of radius 0.2, then regardless of whether Alice got 0 or 1 as input, her output string must be within this 0.2 Hamming ball, and she guesses zero. So half the time — whenever y equals 0 — Alice always guesses correctly. Now let's see what happens in the case y equals 1. If we can show that on at least one of Alice's inputs, x equals 0 or x equals 1, she guesses correctly, then we'd have shown that there's a three-quarters chance altogether of guessing correctly. Here's how we show that. In this case, Bob's input is 1, which means Alice has no idea what his output looks like. She only knew that if his input was 0, then his output is almost certainly b0. But now his output is some string that she doesn't know; let's say b1. Now, in order to satisfy the CHSH condition, if x equals 0, then Alice's and Bob's outputs must match, and if x equals 1, they must not match — they must be opposite. Meaning that for each bit, Alice's output is supposed to match Bob's bit with probability 0.85 in the case that her input is 0, and with probability 0.15 in the case that her input is 1. So let's say that a0-prime and a1-prime are her two output strings in the two cases. Then on each bit, a0-prime and a1-prime on average match each other with probability at most 0.3, meaning the distance between a0-prime and a1-prime is at least 0.7. But, okay, so now, we don't know where a0-prime and a1-prime are located relative to b0.
But the question we are asking, when Alice decides whether to guess 0 or 1, is whether her output is within 0.2 of this guess string b0. And the point is, inside this Hamming ball of radius 0.2 — diameter 0.4 — you can have at most one of these two strings, not both. So at least one of them is outside, which means on at least one of her two inputs she must guess one. And that gives you this extra one-quarter probability, okay? Now, I did this analysis assuming we saturated CHSH at 0.85. But if you assume anything slightly above 0.75 — 0.75 plus epsilon — then you get one half plus some function of epsilon. So you get a contradiction all the way down to anything slightly better than classical, okay? Okay, so that's the proof of the simpler case. Now, to deal with quantum adversaries, it gets a lot trickier. Here's the problem. The crux of the argument I just showed you is that if B's output does not contain much min-entropy, then there's some block of his output which is almost completely determined given the past blocks, and now Alice and Bob can play a guessing game on this block to derive a contradiction. In the quantum adversary setting, it could be that Bob's output is perfectly random, but what we want to show is that Eve cannot guess it. So we really cannot use that approach the way it is up there. What do we do instead? What we show is that we can use a particular extractor — the Trevisan extractor, or what's locally here called Luca's extractor — with a particular implementation using XOR codes. This particular extractor was proved to be secure also against quantum storage by Anindya De and Thomas Vidick, I guess one of each of our students. So the point is: let's use this extractor to take Bob's output and try to get random output from it.
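As a toy illustration of that extraction step — not the actual Trevisan construction, which stretches a short seed into many output bits and whose quantum security is what the De-Vidick analysis establishes — here is the classic one-bit inner-product extractor as a stand-in. Function names and the seed handling are mine:

```python
import random

def inner_product_bit(source, seed):
    """One-bit extractor: inner product mod 2 of the weak source with a
    uniform seed. If the source has enough min-entropy, this bit is
    close to uniform."""
    return sum(s & r for s, r in zip(source, seed)) % 2

def extract(source, num_out, rng=random):
    """Produce num_out bits by applying the one-bit extractor with fresh
    uniform seeds (illustrative only; real constructions are far more
    seed-efficient)."""
    n = len(source)
    return [inner_product_bit(source, [rng.randint(0, 1) for _ in range(n)])
            for _ in range(num_out)]
```

In the protocol's setting, `source` would be Bob's raw output B, and the security claim to be proved is that the extracted bits look uniform even to an entangled Eve.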
Now, the proof of this extractor goes through what's called the reconstruction paradigm. Using that proof, together with something else from quantum computing — some associated work of Terhal et al. on pretty good measurements — one can show that if the output of this extractor is not random, then Alice can provide Eve with some small amount of side information which allows Eve to reconstruct the output of the extractor. And once Eve can reconstruct the output of the extractor, we are back in the situation above: we put Alice and Eve together, and the combination of Alice and Eve, without communicating with Bob, can solve the guessing game. Okay, so that's the big picture of this part. There's a very interesting last step to it. What this showed, of course, is that if you use this particular extractor — the Trevisan extractor using XOR codes — then the output must be very close to uniformly random. But if the output is close to uniform, there must be a lot of min-entropy that you started from. And therefore it didn't matter whether you used this extractor: you could use any extractor which is quantum-secure, and that would work. Okay, so this is the construction of these certifiable quantum random number generators. I don't think we really understand this thing too well yet. I at least don't feel that we really understand what the certification means and where it's going, so it's really something we are trying to work through and understand — both what it means to be able to certify something as random, and how one can use it in other ways. And there seem to be some approaches. So, thank you. Suppose someone is not just a quantum skeptic, but a randomness skeptic, as the Einstein quote seems to suggest.
What would Einstein think about that? I mean, the setup of the problem and what we're asking to accomplish involves believing some model of the world in which randomness exists and we can reason about it. Right, yeah, you're right. Just to get off the ground, one has to believe that there's a small random seed that we start from. If you don't assume that, then the argument doesn't get off the ground. But once you assume that, the rest of it is completely independent — that seed is what's being used to actually test. Somebody could say, well, maybe the entire world is against me, or perhaps the entire universe is conspiring in my favor but none of the accepted models is true, and really there's a conspiracy to feed me these inputs and I'm inside a movie or virtual reality with its own laws. Then there's nothing you can do about it, unless you assume that there's some private randomness which allows you to test that virtual reality environment — randomness which that environment is not aware of. So you need something to get off the ground. The model is that there's some kind of private randomness on the one hand, and that no-signaling has to do with not communicating the values of that private randomness between the two boxes. Right. Another question: you're using log n times log 1 over epsilon bits of randomness, because every block needs to be of length log n plus log 1 over epsilon, so that in each of the n blocks you will simultaneously see a violation, and you want there to be log 1 over epsilon Bell blocks so that whatever test you do is safe in the end. And maybe those don't need to be independent — maybe you could do log 1 over epsilon plus log n. Actually, that's a very good point. Yeah, that's a great point.
Related to that: what we know is that log 1 over epsilon is necessary, but the only thing we know about log n being necessary is that you need it at least for the extractor. Other than that, we don't really know whether you could get arbitrarily large amounts of min-entropy — whether you could use only log 1 over epsilon and then get any amount of min-entropy from it. That we don't know yet. Yeah. Here's a physics question. I have seen lately many proposals for quantum random number generators in the literature — maybe ten different proposals — and they all claim some advantage over the others. But I don't think there's a clear criterion which can say that one method of quantum random number generation is better than another. Do you think one can formulate such a criterion, maybe in terms of this certifiability or something else? Well, yeah. There are certainly many, many proposals for quantum random number generators. But I think the question those are trying to solve is just to say: you have an intrinsic source of randomness in quantum mechanics, and now how do you get at it practically, or in some interesting way? I should mention that the Pironio et al. paper — I think I forgot to say this — the one that appeared in Nature in 2010, its main point was actually that they implemented this. They set up their apparatus and let it run for a month, they collected something like 3,000 entanglement events, and eventually, using this scheme, they managed to derive 42 certifiably random bits. And they sort of said, well, this is the first time in history that you have certifiably random bits from an apparatus about which you assume nothing. And then, we don't get to see them — who knows?
So, just to be on the safe side, they didn't publish that string. More questions? Let's thank him again.