Well, thanks so much. It's an honor to be here for my first Crypto. We won't count another one that I snuck into when I was in Santa Barbara for another reason. This will not be a general talk about quantum computing and crypto, although that might have been good too: I'm not going to talk about Shor's algorithm, or about quantum-resistant crypto, or about quantum key distribution. I'm going to talk instead about some brand new connections between quantum computing and cryptography, ones that you probably haven't heard about. And I'm not going to give a basic introduction to quantum computing; if you don't know the subject, you can just nod along, because mostly I'll be using facts about quantum computing, or assumptions about what is hard for quantum computers and what is hard for classical computers, as black boxes. So we won't really need to know quantum mechanics for this talk, although if you want to learn quantum mechanics and you haven't, anyone here could do it in a couple of days. It is really not scary once you take the physics out of it and just think about it as unit vectors of complex numbers.

The main part of this talk is going to be about some new work, not yet published, on certified randomness from quantum supremacy. The term "quantum supremacy" was coined by the physicist John Preskill in 2012. This was before the word "supremacy," let's say, reacquired a bunch of really nasty connotations, and there's now a lot of discussion about changing this term that we've been saddled with. One suggestion has been "quantum inimitability," meaning the inability to imitate something with a classical computer; if you can pronounce that, you're welcome to use it. Another suggestion has been "quantum ascendancy," but that sounds too much like we're ascending to some quantum UFO in the sky. So I'm going to stick with this term. Then, in a second part of the talk, if there's time, and by request, I'm going to say something about some joint work I did with my introducer, Guy Rothblum, this past year while I was on sabbatical in Tel Aviv, about a new connection that we found between gentle measurement of quantum states and differential privacy. And here I have the great advantage that Cynthia already introduced differential privacy, so I don't have to.

As you may have heard, right now happens to be a particularly exciting time for quantum computing, maybe the most exciting time since a lot of the basic concepts were first discovered in the mid-1990s. The reason it's so exciting right now is that the experimentalists finally have qubits that work pretty well in isolation, so it now finally makes sense to try to scale up to significant numbers of them and try to do something interesting. A bunch of companies are now in a race: Google, IBM, Intel, startups like Rigetti, IonQ with trapped ions; Microsoft is pursuing a longer-term approach. But all of them are invested at the $100 million level in actually building systems. And in particular, many of these players are right now trying to achieve the first clear demonstration of quantum supremacy, by which they simply mean doing something that we're pretty sure is hard for a classical computer to simulate. Notice I didn't say useful. I said hard.
Proving quantum supremacy targets what for me has always been the number one application of quantum computing, more than breaking RSA, more even than simulating quantum physics for all the industrial applications that could have. For me, the number one application is: refute Oded Goldreich, refute Gil Kalai, refute Leonid Levin. I would just like them to admit that they were wrong and that quantum mechanics is right. So basically, just experimentally prove the point that you can do something quantumly, let's say in less than a second, that would take at the very least many days or weeks to simulate with a Google-sized classical computing cluster. And what is exciting is that we may be on the verge of doing that. In both superconducting qubits and trapped ions, there are now systems with about 20 qubits that work quite well. And Google, down the road from us at its lab in Santa Barbara, which is maybe the world leader right now, is trying to scale up to a 72-qubit chip that they call Bristlecone. I think it's fair to say that it will be working in O(1) years. Whenever experimentalists tell me that for certain something will be ready in a year, I've learned to trust them that there will probably be something in three or four years.

So it's an exciting time. Now suppose they build the 72-qubit chip and suppose it works: what should they actually do with it to demonstrate quantum supremacy? That is where we as theoretical computer scientists come into the picture. Quite literally, we talk to the system builders and they ask us, what should we do with this? The most basic proposal for demonstrating quantum supremacy is almost tautological. You take some sequence of gates, some unitary transformation on your qubits, that leads to sampling from some probability distribution over n-bit strings, if you have an n-qubit quantum computer. Each time you run the device, you get a new independent sample from that distribution. At its core, quantum mechanics is a way to calculate the probabilities of the various outcomes you could see when you make a measurement, so it lets you calculate what this distribution is; but of course, each time you run the device, you may get a different sample. And then we conjecture that sampling, either exactly or approximately, from that same distribution would be hard with a classical computer.

Now of course, being complexity theorists, we hope to do a little better than saying "I don't know, seems hard to us." We wish to give reductions, and to some extent we have been able to do that. That's a long story that I mostly won't go into. But very briefly, it turns out to be very easy to specify probability distributions that a quantum computer can easily sample from, such that if a classical computer could sample exactly from that same distribution in polynomial time, then the polynomial hierarchy collapses to the second level. For me, that's actually more convincing than "a classical computer could factor." A fast classical factoring algorithm might collapse much of the world's economy, but I think our administration in the US may achieve that goal in much more prosaic ways; it's just not very impressive to me. What's more impressive to me is a collapsed PH.
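To make concrete what "sampling from the output distribution of a quantum circuit" means, and why checking or simulating it classically costs about 2^n, here is a minimal brute-force sketch in Python. The gate set and circuit shape are toy choices of my own (random single-qubit rotations plus layers of CZ gates), not the specific ensemble Google or anyone else actually uses.

```python
import numpy as np

def apply_1q(psi, gate, q, n):
    """Apply a 2x2 gate to qubit q of an n-qubit state vector."""
    psi = psi.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [q]))
    psi = np.moveaxis(psi, 0, q)
    return psi.reshape(-1)

def apply_cz(psi, q1, q2, n):
    """Apply a controlled-Z gate between qubits q1 and q2."""
    idx = np.arange(2 ** n)
    both = ((idx >> (n - 1 - q1)) & 1) & ((idx >> (n - 1 - q2)) & 1)
    out = psi.copy()
    out[both == 1] *= -1
    return out

def sample_random_circuit(n, depth, k, seed=0):
    """Brute-force simulate a toy random circuit and draw k samples.

    Memory and time scale as 2^n, which is exactly the exponential cost
    that the supremacy argument (and, later, the verification step) relies on.
    """
    rng = np.random.default_rng(seed)
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0                                  # start in |00...0>
    for _ in range(depth):
        for q in range(n):                        # random single-qubit rotations
            theta = rng.uniform(0, 2 * np.pi)
            U = np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]], dtype=complex)
            psi = apply_1q(psi, U, q, n)
        for q in range(0, n - 1, 2):              # a layer of CZ gates
            psi = apply_cz(psi, q, q + 1, n)
    probs = np.abs(psi) ** 2
    return rng.choice(2 ** n, size=k, p=probs), probs

samples, probs = sample_random_circuit(n=10, depth=8, k=5)
print(samples)   # five "random-looking" 10-bit outcomes, as integers
```

The samples look like gobbledygook, but the non-uniform speckle in `probs` is what the later tests will latch onto.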
For approximate sampling, the situation is that you currently have to make stronger conjectures than just that the polynomial hierarchy is infinite, but conjectures that seem very plausible and which would imply that even approximate sampling is classically hard. Furthermore, they can even give you ways to verify the samples, so that you can say, under some plausible complexity assumption, that doing anything that passes the verification test is hard for a classical computer. Those are the types of tests that Google, in particular, is going to try to do within the next couple of years. The theoretical foundations of this sort of experiment go back to a paper of Terhal and DiVincenzo from over a decade ago; the connection to things like the infinitude of the polynomial hierarchy came later, independently from me and Alex Arkhipov in what we called boson sampling, which was one particular proposal for doing this, and also from Bremner, Jozsa, and Shepherd. Since then, we've been studying setups that are closer to what people like Google and IBM are actually going to be able to do in the near future.

My line for a while, whenever I gave talks about this, was: this is exciting because Google may be able to do it soon, and because it will refute Gil Kalai and force the quantum computing deniers or skeptics to admit that they were wrong. Now, obviously, sampling from these distributions is completely useless in and of itself; it has zero applications other than proving the skeptics wrong. Who would actually need a sample from the output distribution of some randomly generated quantum circuit? If the experiment is working right, all you're going to see is a sample, not exactly from the uniform distribution (that would be easy to sample from), but from some non-uniform distribution that is really hard to tell apart from the uniform distribution. You just see a bunch of samples that look like completely random gobbledygook, if you don't know what random bits are supposed to look like.

The genesis for the new work I'll talk about is simply the observation that random bits can actually be useful. There are people who want random bits, particularly if you can, in some sense, certify that they are random, or publicly prove to a skeptic that they really were generated randomly and not backdoored or biased. If you have private randomness, or randomness shared only by you and your friend, well, you had better have that if you're going to be doing any sort of cryptography, whether public-key or private-key. Now, to get private randomness with the scheme I'll talk about, you would need to own the quantum computer yourself. That might be good for quantum computer salespeople, but it's not going to work for cloud applications, where in the near future we envision a bunch of people accessing a very small number of quantum computers over the internet. For that second scenario, you had better not be downloading your private key over the internet, as I shouldn't have to tell people here, but you could use this as a source of public randomness. And that, too, has many applications, as many of you know better than me: it could be used for lotteries, for choosing which precincts to audit in an election, for setting parameters for cryptosystems, for various zero-knowledge protocols.
Also, if you want something trendier, proof-of-stake cryptocurrencies require a large source of public random bits in order to run a lottery, effectively, where you decide who gets to add a new block onto the blockchain and then get paid for it. The advantage of these cryptocurrencies is that they may not use 1% of the world's electricity inverting a hash function; but they do require lots of public random bits.

You may say: once we've got quantum mechanics, isn't it absolutely trivial to generate as many random bits as you want? Randomness is baked into quantum mechanics; that is famously why Einstein didn't like it. And indeed, here is a very, very simple quantum circuit to generate a random bit, which I hope you can at least parse. It just says: take a qubit that is 0, put it into an equal superposition of 0 and 1 (that's what the Hadamard gate does), and then measure it and force it to collapse to either 0 or 1 randomly. Something that measures the polarization of a photon, or even a Geiger counter, will effectively have that behavior. And you can indeed buy devices over the internet that at least claim to use quantum mechanics for random number generation. But the issue we're worried about is: how do you trust it? Particularly if you're getting random bits over the internet, how on earth do you know that they really were generated quantum mechanically at all, let alone with a circuit like this one?

I should say that NIST has something called the randomness beacon, where every minute they release 512 fresh random bits to the world, and they actually do use quantum mechanics as part of how they generate them. As of very recently, they apparently even use Bell experiments to get a higher level of security, but their Alice and Bob were not separated by nearly enough distance to get true cryptographic security. Actually, it was only two years ago that the first experiments were done that violated the Bell inequality, proving the reality of entanglement in our universe in a way that closes all the possible loopholes, or at least all the sane loopholes that a skeptic could think of. So that's very, very recent; it can be done today, but it's a difficult experiment. And you could ask: even if NIST were doing that right kind of experiment, how would anyone accessing it over the internet know that? They would just have to trust NIST. And if you know about the Snowden revelations and the pseudorandom generator standard that turned out to have been backdoored by the NSA, then someone might be leery of that.

So how can you get certified randomness? Well, as I alluded to, there was an earlier approach to that problem, which emerged from a beautiful sequence of works starting with Colbeck and Renner a decade ago. They were thinking about the famous Bell inequality, which, for computer scientists, is a two-prover protocol: a game played by two cooperating players, Alice and Bob, who can agree on a strategy in advance but who can't communicate with each other. A referee generates random questions, sends them to Alice and Bob, and then demands back responses that satisfy some condition. A typical choice is that the exclusive-OR of the responses should equal the AND of the questions. That's called the CHSH game.
And then what you prove is that this game can be won with higher probability if Alice and Bob share quantum entanglement than if they don't, and in particular than if they only share classically correlated random bits. For the CHSH game, you can show that with classical correlation they can win at most three-quarters of the time, whereas if they share entanglement they can win about 85% of the time. And once again people said: OK, this is obviously hugely philosophically important, this is what disproves Einstein and shows that quantum entanglement, spooky action at a distance, is in some sense a real thing in our universe; it has experimental consequences, it can't be talked away; but obviously it's completely useless for anything. Except that a decade ago these people realized that it's not so useless. The reason is that when you play this game and win it with 85% probability, meaning when you violate the Bell inequality, it's not just that you're proving that entanglement was present. You're also proving that your responses must have some real entropy in them. They must not have been fully known in advance; they must have been at least partially random. Why? Because if Alice's response were a deterministic function of her input and Bob's response were a deterministic function of his input, and they won the game more than three-quarters of the time, then you could show that that would allow instantaneous communication between them. So under the assumption of no faster-than-light communication, if they pass this game you get certified randomness, certified entropy. And amazingly, the argument never even needs to assume the correctness of quantum mechanics; it just assumes that you do the experiment, and that they do indeed pass the test, as experiment shows that they do.

But there was a problem, which is that in order to even generate the challenges to send to Alice and Bob, you need randomness as well. So at first, it looked like the situation might resemble the fusion reactors that exist today: they generate power, but in order to run them, you need to supply them with more power than you get out. Here, you would need to feed in more randomness than you get out. But you can be more clever and economize on the entropy in your choice of questions. By doing that, these authors showed how to get a linear expansion in the number of random bits you started with; these showed how to get a quadratic expansion; and then it became an exponential expansion. And then, a few years ago, these authors showed that with just a fixed initial random seed you can get arbitrarily many more random bits, effectively by plowing the new randomness you get back in as a new seed. So this is great, because it doesn't even require a quantum computer; it needs only current technology, meaning current as of two years ago. But the downside is this: suppose NIST is doing this, or Google or someone. If you're getting these random bits over the internet, how on earth do you know that Alice and Bob really were separated far enough to be violating the Bell inequality? You don't really know anything about what is physically going on at this remote laboratory. All you know is that you can submit challenges, wait some amount of time, get a result, and then check it.
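As a sanity check on those two numbers, here is a small Python computation of my own (not from the slides): it brute-forces every deterministic classical strategy to recover the 3/4 bound, and evaluates the standard optimal entangled strategy, a shared Bell pair with Alice measuring Z or X and Bob measuring (Z+X)/sqrt(2) or (Z-X)/sqrt(2), to recover cos^2(pi/8), about 0.854.

```python
import numpy as np
from itertools import product

Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)

# Shared Bell state (|00> + |11>)/sqrt(2); Alice's and Bob's observables.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
A = [Z, X]
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]

def proj(obs, outcome):
    # Projector onto eigenvalue +1 (outcome 0) or -1 (outcome 1).
    return (np.eye(2) + (-1) ** outcome * obs) / 2

def quantum_win_prob():
    total = 0.0
    for x, y in product([0, 1], repeat=2):
        for a, b in product([0, 1], repeat=2):
            if a ^ b == x & y:                 # win condition: XOR = AND
                P = np.kron(proj(A[x], a), proj(B[y], b))
                total += bell @ P @ bell
    return total / 4                           # referee picks (x, y) uniformly

def classical_win_prob():
    # Brute force over all deterministic strategies a(x), b(y).
    best = 0.0
    for fa in product([0, 1], repeat=2):       # Alice's answers to x = 0, 1
        for fb in product([0, 1], repeat=2):   # Bob's answers to y = 0, 1
            wins = sum((fa[x] ^ fb[y]) == (x & y)
                       for x, y in product([0, 1], repeat=2))
            best = max(best, wins / 4)
    return best

print(classical_win_prob())   # 0.75
print(quantum_win_prob())     # ~0.8536, i.e. cos^2(pi/8)
```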
So for this reason, it would be great if we could use quantum mechanics to generate certified random bits using only a single device. That's what I'll talk about today. For this, we will need a quantum computer. But what's particularly exciting is that it looks like what we need is just the bare minimum type of quantum computer that can do anything at all that is classically hard. In other words, a 70-qubit device that can achieve sampling-based quantum supremacy should already be enough, and in fact you don't even want more than that for this application, for reasons that we'll see.

The basic idea of the scheme I'll talk about is that we have a classical client who just has a deterministic classical computer. Of course, I'll assume that the classical code they run is fully trustworthy, because classical code is of course always fully trustworthy and free of any bug or backdoor; I have to assume something. The classical verifier starts with a small random seed and uses that seed to generate a sequence of challenges that it sends to the quantum computer. (When I did a Google image search for "quantum computer," that's apparently what they look like; if you go down the street in Santa Barbara, I think they look a little different.) The challenges basically have the form: give me a sample from the output distribution of this quantum circuit. The client demands back the responses within a very short amount of time, say half a second each, and it waits for each response before sending over the next challenge. The key insight is that the verifier can then check whether these responses pass some test, a check for whether they are consistent with having been drawn from the right distributions. We will then argue, based on some complexity hypothesis, that if the samples pass that check and they were efficiently generated by a quantum computer, then they must have some real entropy in them. In other words, we will give a test that a quantum computer can quickly pass, but that even a quantum computer can only quickly pass by sampling from the distribution; it cannot do so in a deterministic or nearly deterministic way.

This requires just a single device, so it's good for remote use, and it's also good for near-term quantum computers, because all we need is a clear gap between the running time of the quantum computer and the running time of a classical device that would be trying to fake the results. So 70 qubits, or even 50 or 60 qubits, are probably enough. And I said you don't want more than that. The reason is that this scheme also has a downside, which is that the classical verification takes something like 2^n time, n being the number of qubits. If n is 50 or 60 or maybe even 70, you can do that, at least if you're Google or IBM or someone like that; this has even recently been demonstrated, but it takes quite a big cluster a long time to do.
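Here is a schematic of the client's side of the loop, just to fix the shape of the protocol. Everything in it (the `server.run(circuit)` interface, the half-second deadline, the spot-check rate, the `make_circuit`, `check_samples`, and `extract` callbacks) is a placeholder I've invented for illustration, not the actual implementation.

```python
import time
import secrets

def run_protocol(server, make_circuit, check_samples, extract,
                 rounds=1000, deadline=0.5, spot_check_percent=5):
    """Schematic client loop for the certified-randomness protocol.

    `server`, `make_circuit`, `check_samples`, and `extract` are
    hypothetical interfaces standing in for: the remote quantum
    computer, the PRF-based challenge generator, the 2^n-time
    heavy-output check, and a seeded randomness extractor.
    """
    seed = secrets.token_bytes(32)            # the short initial random seed
    transcript = []
    for t in range(rounds):
        circuit = make_circuit(seed, t)       # next pseudorandom challenge
        start = time.monotonic()
        samples = server.run(circuit)         # honest server: k samples from the circuit
        if time.monotonic() - start > deadline:
            raise RuntimeError("too slow: a classical spoofer would have had time to work")
        if secrets.randbelow(100) < spot_check_percent:
            if not check_samples(circuit, samples):
                raise RuntimeError("failed the heavy-output check")
        transcript.extend(samples)
    # Pool everything and run it through a seeded extractor
    # to get nearly uniform output bits.
    return extract(transcript, seed)
```

The deadline is what does the work here: the honest quantum device answers well within it, while anything trying to fake the samples classically, under the hardness assumption, cannot.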
With 1,000 qubits, the randomness generation scheme might be working fine, but you could never prove it. Yes, thank you, Tal, that's my very next slide. You might wonder: does this inherently require quantum mechanics? I claim that it does (and that was not a plant). Suppose we had a classical device for generating random bits, like a lava lamp or something, but suppose you also believe the classical polynomial-time Church-Turing thesis. Then that lava lamp, or whatever it is, could be simulated by a classical computer with access to a random number generator, with polynomial overhead. But now we have a problem, because what would happen if I took the random-bit source that my simulation was using and replaced it with the output of a PRG? My device would still pass whatever test it passed before, but the output is not truly random anymore; it no longer has enough entropy in it. So that is a general argument for why you cannot do this sort of thing classically. The way quantum computing evades this argument is that that transformation, just taking out the randomness and swapping in something else, doesn't work for quantum algorithms; there's no analogous notion for a quantum algorithm.

Yes, that's a good question: no, you don't, for this, because the assumption we'll make will imply that this problem cannot be solved in quantum polynomial time, that it takes exponential resources even with a quantum computer. So in order to fake the results, if you believe the complexity assumption, you would need a much, much more powerful quantum computer. You are, in fact... OK, wait, that's going to be on my next slide.

Notice that our protocol requires certain tasks, like finding high-probability outputs of a quantum circuit, to be easy for quantum computers, while ironically it requires other tasks, like finding the same high-probability output each time, to be hard for quantum computers. So we use both the strengths and the limitations of quantum computers. Now, the protocol requires challenges to be sent to the quantum computer, and the challenges are going to be generated pseudorandomly. You may say, isn't this circular? If we already have a PRG, then what do we even need true random bits for? But there are several advantages. One is that even if the PRG were to be broken in the future, as long as it's not broken now, we're still OK; so we get a sort of forward secrecy property. Also, even if the seed were public, as long as it wasn't known to our quantum computer at the time, the random bits can be private. And we also get the freshness property that was just asked about: the random bits were not known to anyone, not even the quantum computer, before it received the challenge and had to respond to it.
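To make the earlier classical-impossibility argument concrete: any statistical test you run on a purely classical device's outputs is passed just as well when the device's internal randomness is swapped for a PRG stream. Here is a toy illustration of my own, with a deliberately crude frequency test standing in for whatever acceptance test you like.

```python
import random
import secrets

def frequency_test(bits, tolerance=0.01):
    """A stand-in statistical test: is the fraction of 1s close to 1/2?"""
    return abs(sum(bits) / len(bits) - 0.5) < tolerance

def device(randbit, n=100_000):
    """A 'randomness device' that just emits whatever its internal bit source gives it."""
    return [randbit() for _ in range(n)]

true_random = device(lambda: secrets.randbits(1))    # OS entropy
prg = random.Random(1234)                            # fixed seed: zero real entropy
pseudo_random = device(lambda: prg.getrandbits(1))

print(frequency_test(true_random))    # True
print(frequency_test(pseudo_random))  # also True: the test cannot tell the difference
```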
So here's the protocol; I've already sketched it, but to state it more carefully. The classical client generates a list of n-qubit quantum circuits, say C_1 up to C_T, pseudorandomly, as outputs of a pseudorandom function, in a way that mimics some random ensemble of circuits that we think is hard. Then, for each t, the client sends the t-th circuit to the server and demands back a response S_t within a short time. In the honest case, that response should be a list of k samples from the output distribution that results from applying that circuit to, say, the all-zero state. Then the client, again using its seed, picks a small number of random iterations, and for each one it checks whether the output S_t solves a problem that we call heavy output generation, or HOG. If the checks pass, the client collects all of the outputs from the server into one big string S and feeds it into a well-known classical tool, a seeded randomness extractor, which is able to produce from that, together with the short random seed, a long string that is nearly uniformly random.

So this HOG problem, the test we check the quantum computer's outputs against, is basically just this: take a k-tuple of outputs from the quantum computer, each of which is an n-bit string, call them s_1 up to s_k, and then calculate what each of their probabilities should be under an ideal simulation of the circuit C. You can do that if you have 2^n time: just compute the probability with which each of these was supposed to have been output. If the s_i were chosen completely at random, then I expect each of those probabilities to be about 1/2^n on average, so their sum should be about k/2^n. But if the device is really doing valid sampling, then its outputs should be biased toward the ones with higher probabilities; I should never see anything that's predicted to have probability zero. And as I said, the distribution is not uniform; in fact we know a lot about what it looks like: the probabilities are exponentially distributed random variables. In some special cases we can even prove that; in other cases we know it by way of MATLAB. So there are little fluctuations, what physicists call a speckle pattern, in this distribution, with some outcomes being a constant factor likelier than others. So now we just sum up all these ideal probabilities and check whether the sum exceeds k/2^n by some constant factor. With a completely errorless quantum circuit, one can calculate that this sum will on average be about 2k/2^n. So we just pick some constant b between 1 and 2 and do a threshold test: is the sum at least b times k/2^n? That's the test; that's all.
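Concretely, the check could look something like this minimal Python sketch. It assumes you've already computed the circuit's ideal output probabilities by brute force (the 2^n-time step), and the threshold b = 1.7 is just an illustrative stand-in for "some constant between 1 and 2," not a value fixed in the talk.

```python
import numpy as np

def hog_check(ideal_probs, samples, b=1.7):
    """Heavy Output Generation (HOG) threshold test.

    ideal_probs : length-2^n array of the circuit's ideal output
                  probabilities (computing this is the 2^n-time part).
    samples     : the k n-bit strings returned by the server,
                  encoded as integers in [0, 2^n).
    b           : threshold constant, some value strictly between 1 and 2.
    """
    n = int(np.log2(len(ideal_probs)))
    k = len(samples)
    total = sum(ideal_probs[s] for s in samples)
    # Uniform guessing gives total ~ k / 2^n on average; an ideal sampler
    # gives ~ 2k / 2^n, since the probabilities are exponentially distributed.
    return total >= b * k / 2 ** n
```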
Then we have to make some hardness assumptions, and I'm going to warn you that the hardness assumptions I currently know suffice are strong ones. The first one I call the Long List Hardness Assumption, or LLHA. I wrote it on the slide, but to say it in words: if I give you a random quantum circuit and an n-bit string, it should be hard for you to determine whether that string was chosen uniformly at random or whether it was sampled from the output distribution of that quantum circuit. Sounds pretty plausible, right? Good. Now let me assume that this is still hard for you even if you yourself are a quantum computer; after all, any particular output is exponentially unlikely, so you're not going to be able to just reproduce it. And even if you're a quantum computer running in sub-exponential time with a sub-exponential amount of quantum advice. But I'm still not done, because now let's assume also that you're in an AM protocol: you get to send a poly-sized classical challenge to Merlin, who sends back a poly-sized classical response. That shouldn't help, because that's just like approximate counting, whereas quantum circuit probabilities are a #P type of problem. But I'm still not done, because you actually get random access to an exponentially long list of quantum circuits and an exponentially long list of strings, and now you just have to decide whether all the strings were chosen uniformly at random or whether each string was chosen from the output distribution of its quantum circuit. All right, so just assume that that's hard. Oh, but by the way, I can prove that this is true in the random oracle model. Now you might say, it's not a lot of good to generate random numbers in a world with a random oracle; but the random bits that you would get would be random even to someone who knew the random oracle and who had unlimited computation time, so it's a non-trivial statement.

The basic theorem one can prove is that if LLHA holds, and we analyze a single round of this protocol in which the server solves the HOG problem, then in each round the server generates a good amount of min-entropy: the output distribution of its quantum algorithm must put at least some fraction of its probability mass on outputs that individually have at most some small probability. So now you just have to get your quantum computer working well enough that b times q is greater than 1, q being the success probability of the quantum computer. That tells the experimentalists exactly what target they have to aim for to get min-entropy generation. (In answer to a question: the number of circuits in the ensemble is, let's say, something like 8^n, but not nearly large enough that you'd find a completely trivial quantum circuit anywhere in it. And yes, we are making a hardness assumption.) The proof idea is that if there were a low-entropy quantum algorithm that quickly solved this problem, we could exploit it to get an AM protocol that violates LLHA. The intuition comes from imagining that we had a zero-entropy quantum algorithm, a pseudo-deterministic one, that always produced the same output.
In that case, it really reduces to an approximate counting type of problem, where Merlin just has to tell Arthur where, in this giant list, he can find strings that are the ones the algorithm will output. There will be a larger number of such strings in the case where the strings are being sampled from the circuits than in the case where they're chosen uniformly at random, and then it reduces to AM approximate counting. In the more general case, you have to do more work.

A second assumption we need is that there are pseudorandom functions that remain secure even against quantum computers. That's a pretty safe assumption, but again I have to go a little further and ask for pseudorandom functions that are indistinguishable from random even to a hypothetical quantum computer that could make multiple non-collapsing measurements of a quantum state, or, if you don't like that, that are indistinguishable from random to a quantum statistical zero-knowledge (QSZK) protocol. Lattice-based crypto is no good for that, but anything based on, say, an AES-type completely scrambling function should be fine. And again, this assumption holds in the random oracle model.

The main result is basically that if both assumptions hold, and the server is a polynomial-time-bounded quantum computer that passes our test, then it must be generating at least a linear amount of min-entropy per iteration; so after T iterations you get on the order of T times n bits of min-entropy. Actually, the most technical part of the analysis is to show that the min-entropy accumulates from one round to the next: it could be that each round is individually random but the rounds are all conspiratorially correlated with each other, and you have to rule that out. (Yes, that's right, the verifier runs in 2^n time. And that's a good question; one would have to think about that more, so let me not give a confident answer without having thought about it.)

I should mention that there is a beautiful independent approach to this problem in a recent paper by these authors. What they've done is, again, shown how to use a single untrusted quantum device to generate certified random bits, but they need to assume only the quantum hardness of breaking lattice-based cryptosystems, a much more conventional assumption. I won't have time to explain how their scheme works (it involves some exotic types of trapdoor functions), but theirs has a further advantage over mine, which is that they give a polynomial-time classical algorithm to verify the results. So what is the advantage of my scheme?
Well, mostly just that it could actually be implemented in the near term. Mine works essentially as soon as you have a 70-qubit device that achieves quantum supremacy, whereas theirs seems to require at least about a thousand qubits to get any type of security, and at that point you're talking about quantum fault tolerance, so you might be talking about millions of physical qubits, and that's further in the future. So there are many open problems about the scheme: can we get certified randomness by sampling with the same circuit over and over? Can we prove the scheme sound under less boutique complexity assumptions? What about soundness against adversaries that are entangled with the quantum computer? But what I wanted to say is that Google has gotten excited about this, and I think the reason is not that this is a world-changing application; we know other ways to get probably-good-enough random bits. Rather, this may be the most plausible application for a very near-term quantum computer that we know of. It can be done with 70 qubits, and you don't even want more than 70 qubits. Somehow all the weaknesses of quantum supremacy experiments have become strengths.

I probably don't have a lot of time to say a lot about differential privacy, do I? Oh, I have 10 minutes. All right, good, let's go. By request, I want to say a little about my work with Guy Rothblum on gentle measurement and differential privacy. This is also not yet written up; we've got a lot to do. As you may have heard, measurements in quantum mechanics are destructive: you open the box, you check whether the cat is alive or dead, and guess what, it's not in superposition anymore. But measurements are not always destructive. If you were kind enough to ask the cat whether it's "alive plus dead" or "alive minus dead," then it may just tell you "I'm alive plus dead," and its state is completely unaffected. In quantum mechanics, the general rule is that if you measure your state in a basis where the state happens to already be one of the basis vectors, so that the outcome of the measurement would have been almost perfectly predictable to someone who knew the state, then the state is damaged only a little by the measurement, and you can keep reusing that state over and over and make many measurements on it. This is known as gentle measurement, and in physics it actually has many experimental applications.

More formally, given a quantum measurement M that acts on n registers, call M alpha-gentle if, for every product state (meaning unentangled across the n registers) rho, and for every possible outcome y of the measurement, the post-measurement state conditioned on outcome y is within trace distance alpha of the original state rho. (Trace distance is just the quantum generalization of variation distance between two probability distributions, our usual distance metric between quantum states.) That's a gentle measurement.
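In symbols (a sketch in notation I'm supplying here, since the slide's formula isn't reproduced in this transcript), writing rho given y for the normalized post-measurement state when outcome y is observed:

```latex
M \text{ is } \alpha\text{-gentle} \;\iff\;
\forall\, \rho = \rho_1 \otimes \cdots \otimes \rho_n,\ \ \forall\, y:\quad
\tfrac{1}{2}\,\bigl\| \rho_{\mid y} - \rho \bigr\|_{1} \;\le\; \alpha .
```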
So here's an example. Say I have n unentangled qubits and I want to measure their total Hamming weight. In general, that's a very destructive measurement, because I'll have a superposition over possible Hamming weights with some Gaussian distribution (if you don't know what a Gaussian looks like, it's that; physicists would call it a wave packet), and when I measure, I force the whole thing down to a single Hamming weight; I collapse it. But what if instead I ask only for the Hamming weight plus some random noise of order at least the square root of n? That's much safer: now I'm somehow not damaging the state by too much. Physicists actually know this; it's used in photon-number measurements, and it's how you do them non-destructively.

All right, now let's talk about a completely unrelated topic: differential privacy. Let's say we want to protect the privacy rights of quantum states. How can we generalize the definition that Cynthia showed us this morning to the quantum case? Given a measurement M that acts on n registers, call it epsilon-differentially private if, for every product state rho on the n registers, and for every rho-prime that differs from rho on only a single register (a single quantum state dropped out of the data set), the probability that the measurement returns y on rho is at most 1 + epsilon times the probability that it returns y on rho-prime. This is just a natural quantum generalization of DP. So here's another example: if I want to measure the total Hamming weight of n unentangled qubits, that's not differentially private, but if I add a noise term, then it becomes DP, as we heard this morning.

So what happened was that I gave a talk last fall at Weizmann where I talked about gentle measurements, and Guy came to me afterward and said, this sounds a lot like DP. And I said, come on, everyone can connect anything to whatever they work on; it's probably just that both add some noise. But then we thought about it, and it turns out there is a theorem establishing what I think is an incredibly strong connection between these two concepts. Here's what our theorem says. First, any measurement that is alpha-gentle is also O(alpha)-differentially private; in that direction the connection is completely tight. In the other direction there are some more caveats, but something still goes through: if a measurement M is epsilon-DP and it has a special form that actually encompasses all the examples we know or care about, namely it consists of some classical DP algorithm applied to the outcomes of a bunch of quantum measurements, then M can be implemented in a way that is O(epsilon times sqrt(n))-gentle. So I lose something, but if epsilon is between 1/n and 1/sqrt(n), that's still interesting. Both directions can be shown to be asymptotically tight. We do have to talk about product states: for entangled states we can prove something, but it turns out to give us anything only for completely trivial measurements. And you might wonder about computational efficiency. Part two preserves computational efficiency as long as your DP algorithm has the property that its output can be efficiently Q-sampled, which means you must be able to efficiently prepare a coherent quantum superposition over the outputs of the DP algorithm, with no noise.
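As a classical point of reference for the noisy-Hamming-weight example, here is a minimal sketch of the standard Laplace mechanism on a bit string: releasing the count plus Laplace(1/epsilon) noise is the textbook way the classical count query is made epsilon-DP. (This is just the classical mechanism, not the quantum implementation from the theorem.)

```python
import numpy as np

def noisy_hamming_weight(bits, epsilon, rng=None):
    """Release the Hamming weight of a 0/1 vector with Laplace noise.

    The count query has sensitivity 1 (changing one entry changes the
    count by at most 1), so adding Laplace(1/epsilon) noise gives the
    standard epsilon-differentially-private count.
    """
    rng = rng or np.random.default_rng()
    true_count = int(np.sum(bits))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: n measurement outcomes, epsilon = 0.1
bits = np.random.default_rng(0).integers(0, 2, size=1000)
print(noisy_hamming_weight(bits, epsilon=0.1))
```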
Now, most of the DP algorithms we actually looked at do have that property, but it is something we need. This can be seen as a quantum analogue of the known connection between DP and adaptive data analysis: the entire point there was that you have a data set and you want to look at it without damaging it by too much. The difference is that in the classical case, the damage is just something that occurs in your head; it could always be reversed if you simply forgot what you learned. In the quantum case, the damage is an actual physical thing that happens to the state, and someone else who measures the state could see that it was damaged.

And this has actually had an application. In a paper at the most recent STOC, I proved something I called the shadow tomography theorem. Basically, it gives a new measurement procedure for quantum states: if I have some unknown quantum state rho in D dimensions, and I have a list of known two-outcome measurements E_1 up to E_M, then I can learn the approximate probability that each E_i accepts rho, using a number of copies of rho that grows only polylogarithmically with both M and D. So even if I had a state of exponentially large dimension, say n qubits and thus 2^n dimensions, and I wanted to know its behavior on exponentially many different measurements, like all circuits with at most n^2 gates, I could do that using only polynomially many copies of the state. That was kind of cool. But now what we can do is use known results from differential privacy, in particular the Private Multiplicative Weights (PMW) algorithm of Hardt and Rothblum, which Guy happened to know about, together with our DP-gentleness connection, to actually improve this result. We bring the dependence on log M down from the fourth power to the second power. More interestingly, we give a procedure that is online, so it can respond to the measurements one by one as it receives them, and that is itself gentle: after we're done, our copies of rho are still good to use for something else.

So there are many open problems. Can you prove a fully general connection without the extra conditions? What are the optimal parameters for shadow tomography? And finally, can we go in the other direction and use quantum computing to say something new about classical differential privacy? I think that would nicely complete the circle here. Thank you.