away, you know, 80% of the difficulties. Okay, and so the way that we think about it is: what is an algorithm? An algorithm, a digital deterministic one, would just take a collection of bits and operate on them by some simple operations, like maybe Boolean logic operations, AND, OR, and NOT; let's say some bit could get replaced by the AND, the OR, or the NOT of some previous bits, okay? Now, computer science since the 1970s has been heavily interested in probabilistic or randomized algorithms, which are now believed not to lead to a larger class than conventional P, but that actually has not been proven. It's conjectured that randomized algorithms give you the same class P, but there are some problems for which the only currently known efficient algorithms, polynomial-time algorithms, make use of randomness, okay?

And so the way that we could think of a randomized algorithm is, well, our computer has a state at any point in time, right? The state is a string, let's say, of n bits. So there are two to the n possible states, okay? But actually, because our algorithm is randomized, the state can assign some probability mass to each one of these two to the n states. So really, the more general state that we should think of is a vector of two to the n probabilities, one for every possible configuration of the bits at each point in time, okay? So we start in some simple initial state, like let's say the all-zero state, and then we can do operations like: replace the first bit by the outcome of a fair coin flip, which would move my state to a convex combination of the 00 and the 10 states, and then I could do something like set the second bit equal to the first bit, which would just continue this evolution, okay? But every operation that I do can be thought of as some linear transformation that I'm applying to this vector of length two to the n, representing the probabilities of all the possible states of my computer, right? And then at the end, I'll be interested in the probability that I see some particular string when I look at the computer, and for that I just use the usual rules of probability: I sum, over all paths through this tree of possibilities that lead to that outcome, the probability of each path, where the probability of a path is just the product of the probabilities of the transitions along its edges, okay?

Now, a quantum computer is basically exactly the same as everything that I just said, except that we're gonna replace the probabilities by these complex numbers that we call amplitudes. That's it, okay? So an amplitude is a complex number. The state of our computer at any point in time is gonna be a unit vector of two to the n of these complex numbers, so one amplitude for every possible n-bit string. The operations that we can do are any norm-preserving linear transformations on that vector, okay? And of course, we call those the unitary transformations, right, those are the ones that preserve norm. And the simple operations, the ones that we say can be done at unit cost, are the ones that, let's say, act on only one or two bits at a time.
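A minimal sketch of the classical, probability-vector half of that picture, assuming NumPy (the two-bit example and variable names are my own, not from the talk):

```python
import numpy as np

n = 2
dim = 2 ** n            # 4 configurations of the bits: 00, 01, 10, 11

# Start with all probability mass on the all-zero configuration.
state = np.zeros(dim)
state[0] = 1.0

# "Replace the first bit by a fair coin flip", written as a stochastic matrix:
# each column sends half its mass to the version of that configuration with
# the first bit set to 0 and half to the version with the first bit set to 1.
flip_first = np.zeros((dim, dim))
for col in range(dim):
    flip_first[col & 0b01, col] += 0.5
    flip_first[col | 0b10, col] += 0.5

# "Set the second bit equal to the first bit": a deterministic 0/1 matrix.
copy_bit = np.zeros((dim, dim))
for col in range(dim):
    first = (col >> 1) & 1
    copy_bit[(col & 0b10) | first, col] = 1.0

state = flip_first @ state    # [0.5, 0, 0.5, 0]: a mix of 00 and 10
state = copy_bit @ state      # [0.5, 0, 0, 0.5]: a mix of 00 and 11
print(state)
```

The quantum version simply swaps this probability vector for a vector of amplitudes and the stochastic matrices for unitaries, as described next.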
In the quantum case, we call them qubits, which are just bits that can have some amplitude for being zero and some other amplitude for being one, that can be in superposition states, okay? So a simple operation is one that acts on only one or two qubits, and then you take the tensor product of that with the identity acting on all of the other qubits, okay? And then an efficient quantum algorithm is any sequence of those simple operations that chains together a number of them that only grows polynomially with n, with the size of the instance that you're trying to solve, okay? And then any problem that can be solved by such an algorithm with high success probability, let's say 90% success probability or something, is said to be in the class BQP that I mentioned before. Okay, and by the way, if you don't like 90% success probability, you could just keep repeating your algorithm a bunch of times and take the majority vote, okay? Make it 99.99999%.

Okay, so the entire hope of getting a speedup from a quantum computer lies in exploiting the way that these amplitudes behave differently from conventional probabilities, okay? And if you've seen quantum mechanics, then you've already seen many examples of that, but the way that we would tend to think about it is that in a quantum computation, the total amplitude for some output state is the sum of a contribution from every possible path that could lead to that output state, okay? And the result is that if some paths leading to a given output state have positive amplitude and others have negative amplitude, or if they're just pointing in a bunch of random directions in the complex plane, then all these contributions can interfere destructively and cancel each other out, okay? The goal with every quantum algorithm is to choreograph things in such a way that for each wrong answer, you get destructive interference among all of the paths leading to that answer, okay? Whereas only for the right answer, or the right answers, do you want constructive interference, which means all of the paths leading there have amplitudes with roughly the same phase, okay? And the hard part of quantum algorithm design is that you have to choreograph that: you have to figure out, for your particular problem, how to choreograph that pattern of constructive and destructive interference. You've got to do it efficiently, so with a polynomial number of simple operations, and you've got to do it despite not knowing in advance which answer is the right one, which of course would trivialize the exercise, okay? So nature gives you a very bizarre hammer here, and the job of quantum algorithms research is to find nails for that hammer to hit.

So at this point, I should address the central misconception about quantum computing that occurs in nearly every popular article that's ever been written on this subject, okay? What they'll say is that unlike a classical computer that just tries solutions one at a time, a quantum computer just tries them all in parallel; sometimes they add that each one is tried in a different parallel universe, okay? And then you just somehow instantaneously zoom in on the right one, okay? Well, if that were how it worked, then you wouldn't need a whole community thinking so hard about it, right? It would be a lot simpler, okay? That may sound to you too good to be true, and indeed it is.
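The smallest concrete example of that interference I can add here (my own toy illustration, assuming NumPy, not something from the talk) is just two Hadamard gates in a row: the two paths leading to the outcome 1 carry opposite amplitudes and cancel, while the two paths leading to 0 reinforce.

```python
import numpy as np

# The Hadamard gate: a norm-preserving (unitary) map on one qubit's amplitudes.
H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)

state = np.array([1.0, 0.0])   # start in |0>
state = H @ state              # amplitudes [1/sqrt(2), 1/sqrt(2)]
state = H @ state              # back to [1, 0]

# Amplitude of |1> after two Hadamards:
#   (1/sqrt(2))*(1/sqrt(2)) + (1/sqrt(2))*(-1/sqrt(2)) = 0
# The two paths cancel: destructive interference on the "wrong" outcome,
# constructive interference on the "right" one.
print(np.abs(state) ** 2)      # [1., 0.]
```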
So what's wrong with that story is that yes, you can create a quantum superposition over all possible answers to your hard problem, like let's say an NP-complete problem, okay? It'll be a quantum state looking like this one, let's say. But if you then just measure that state, not having done anything else, the rules of quantum mechanics say you'll see each possible result with probability equal to the absolute square of its amplitude, and here that just means you see a uniformly random answer, okay? Well, if you just wanted a uniformly random answer, you could have picked one yourself with a lot less trouble, okay? So the entire point is to do something to get a large amplitude on the answer you want before you make the measurement, okay, that's the game, yeah.

So, as I said before, Peter Shor, a quarter century ago, found an amazing way to do that for the particular problem of factoring integers, okay? And for a few other problems in group theory and number theory, typically involving abelian groups, finite or sometimes infinite abelian groups. But his methods do not seem to generalize to NP-complete problems, okay? And whether NP is in BQP, whether quantum computers can efficiently solve the NP-complete problems, remains an open problem to this day. Maybe not shockingly: after all, we don't even know whether P equals NP, right? So we're not gonna be able to prove, in our current state of knowledge, that quantum computers can't do it, okay? But what we can do is say that if there is a fast quantum algorithm for the NP-complete problems, then it has to look very different from any of the quantum algorithms that we know, okay?

And so here's one way that you could think about it. Suppose that you just thought of an NP-complete problem as an abstract black box: you have a space of two to the n possible solutions, a haystack of size two to the n, and all that you know how to do is point to a certain piece of hay and ask, is that actually a needle, right? Okay, well, classically, it's clear that you're gonna spend a long time looking for the needle, right? In this case I've given you no other structure to exploit, right? But now you've got a quantum computer, which means that you can be asking about every possible piece of hay in this exponentially large haystack, all in superposition, okay? So you do have that, okay? But the question is, is there any strategy that will quickly get most of the amplitude onto wherever the needle is, okay? So in this setting, what's called the black-box setting, where I don't give you any more structure in your search problem than just the ability to check each solution to see whether it's correct, it's been worked out exactly how much advantage a quantum computer can give you over a classical one. There's a famous quantum algorithm called Grover's algorithm, okay? And what it's able to do is search any list of size N, and you should think of big N here as being exponentially large, like equal to two to the little n, okay? You can search any list of big N possible solutions in about the square root of big N steps, okay? So you can get a quadratic advantage over the amount of time that you would've needed classically.
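To underline the earlier point about measuring too soon: here's a toy sketch (my own illustration, assuming NumPy) of how the Born rule turns the bare uniform superposition into nothing better than a uniformly random guess.

```python
import numpy as np

n = 10
N = 2 ** n

# Uniform superposition over all N candidate solutions: every amplitude is
# 1/sqrt(N), so every outcome probability |amplitude|^2 is exactly 1/N.
amplitudes = np.full(N, 1 / np.sqrt(N))
probs = np.abs(amplitudes) ** 2

# Measuring now is just sampling a uniformly random candidate.
rng = np.random.default_rng(0)
print(rng.choice(N, p=probs))
```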
Okay, and a one-sentence description of how it works would be: classically, if I just randomly pick a solution and check it, then at each step I'm putting about one over N more probability mass on the correct solution, right? It's one over N, two over N, three over N, and so on. But quantumly, there's a way where initially I've got one over square root of N amplitude on the right answer, and then I do some iteration, the result of which is to put, actually, three over square root of N amplitude on the solution; it goes by odd numbers, so then five over square root of N, and then seven over square root of N, and so on, so that after only about square root of N steps, I've put a constant fraction of all the amplitude on the answer that I want, okay? (There's a little numerical sketch of this below.) So it all has to do, again, with amplitudes obeying the two-norm, or if you like, the Pythagorean theorem, okay?

But now there's also a really fundamental result due to Bennett, Bernstein, Brassard, and Vazirani, BBBV, and what they showed is that if you don't somehow exploit the structure of your NP-complete problem, then Grover's algorithm is the best that even a quantum computer can do, okay? So even a quantum computer can only give you this square-root speedup if you're only looking at your problem as a black box, okay? In fact, Grover's algorithm was proven to be optimal a few years before it was discovered to exist. Okay, so this doesn't rule out that quantum computers will help with NP-complete problems. It just says that if you want them to help by more than that square-root factor, then you're gonna have to somehow take advantage of the problem's structure in some very clever way, much like a classical algorithm would have to as well, okay?

So for the last 20 years, there's been lots of research into, well, can quantum computers help with NP-complete problems in ways that do exploit their structure? Because if they could, then that would be like the biggest practical case for actually building them, right? So there was a major proposal for how to do that called the adiabatic algorithm, which was studied for 15 years or so by my former colleagues at MIT, like Ed Farhi. I won't explain the details of this algorithm, except to say that if you've ever heard of simulated annealing, it's like a quantum version of that. You start out by applying some initial Hamiltonian, HI, with a known and easily prepared ground state. You then slowly vary the Hamiltonian until you reach a final Hamiltonian, HF, whose ground state encodes the solution to your NP-complete problem, okay? And it's actually very easy to construct such a Hamiltonian: basically, for every constraint in your combinatorial problem that is violated, you attach an energy penalty, right? Some term that penalizes you in energy, and then the state that minimizes energy will be the state that violates the smallest number of constraints. And then there's this adiabatic theorem, which says that as long as you transition slowly enough from HI to HF, you must end up in the ground state of HF, therefore solving your NP-complete problem. So from this point of view, the only question is: how slowly do you have to change the Hamiltonian? Okay, and there were early hopes in this group that this was just going to solve NP, right? But it all depended on what was called the minimum spectral gap of the Hamiltonians you pass through along the way, which I understand is a type of quantity that also arises in QFT, okay?
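Here's the promised numerical sketch of that amplitude growth (my own illustration, assuming NumPy; the "needle" index and the sizes are arbitrary). Each Grover iteration flips the sign of the needle's amplitude and then reflects every amplitude about the mean.

```python
import numpy as np

n = 12
N = 2 ** n
needle = 42                       # index of the one marked item (arbitrary choice)

amp = np.full(N, 1 / np.sqrt(N))  # uniform superposition: 1/sqrt(N) everywhere

def grover_iteration(a):
    a = a.copy()
    a[needle] *= -1               # "oracle": flip the sign of the needle's amplitude
    return 2 * a.mean() - a       # "diffusion": reflect every amplitude about the mean

for k in range(1, 4):
    amp = grover_iteration(amp)
    # After k iterations the needle's amplitude is roughly (2k+1)/sqrt(N):
    print(k, amp[needle] * np.sqrt(N))        # ~3, ~5, ~7

steps = int(round(np.pi / 4 * np.sqrt(N)))    # ~sqrt(N) iterations in total
for _ in range(steps - 3):
    amp = grover_iteration(amp)
print("success probability ~", amp[needle] ** 2)   # close to 1
```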
But as you vary the Hamiltonian, you look at the gap between the smallest and the second smallest eigenvalues. And however small that minimum gap is, you need to run for roughly the inverse of that, or the inverse of the gap squared, amount of time.

Okay, so Farhi told me the wonderful story that he went and talked to an expert in condensed matter physics, because they have generations of experience calculating spectral gaps for their own reasons. And he said, look, based on your experience with similar physical systems, do you think that the spectral gap in our algorithm will decrease polynomially or exponentially as a function of the number of particles in the system? And the guy thought about it and he said, I think it will decrease exponentially. Okay, now that was not the answer that Farhi wanted to hear; that means it would take exponential time. So he said, why? What's the physical reason for it? And the guy thought about it some more and he said, well, it's because otherwise your algorithm would work. Okay, so what I love is that after you've seen enough different examples of nature almost letting you solve NP-complete problems but not quite, you might be tempted to turn things around and say: what if we just took "no super-search", no fast solution to NP-complete problems, as a basic principle alongside no superluminal signaling and the second law, and then asked, what does that principle imply about other issues in physics? Yeah, this is an example.

Okay, so there has been a lot of work done in recent years on understanding: if the adiabatic algorithm doesn't always solve NP-complete problems efficiently, then does it at least sometimes solve them efficiently? Does it at least give a speedup that's better than Grover's algorithm? Does it at least ever do better than classical heuristic algorithms can do on the same problems? And a short summary of the current state of knowledge is: it's a mess, okay? You can construct examples where just about anything you would like to have happen does happen: where the adiabatic algorithm outperforms classical algorithms, where the classical algorithm outperforms the adiabatic algorithm, where they both get stuck and neither works. And what type of behavior is gonna predominate in practice? We might have to build some quantum computers and test them out before we really know; theory and numerical simulation have only been able to get us so far.

Okay, which brings me to question number four: is quantum computing actually realizable in our world? For some reason, this question keeps coming up when I give talks. Of course, it is only the actual building of the devices that in the end can tell us the answer to this question. But the point I wanna make is that the idea that yes, you can build a quantum computer, and yes, it solves all the problems in BQP, is the boring view. This is the conservative view. This is the view that just takes the known laws of physics for granted and sees what their implications are for computing. There are some skeptics who believe that quantum computing can never work. If it can never work for a really deep reason, not just because of the funding running out or something, but because there's a deep principle that prevents it from working, then I would say that is a radical change to currently understood physics. That itself would be a revolution in physics, and I would happily accept that in place of a mere success in building a quantum computer.
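Going back to that minimum-gap quantity for a moment: here's a toy sketch (entirely my own construction, assuming NumPy) of how one would compute it numerically for a tiny instance, sweeping the interpolated Hamiltonian from HI to HF and recording the smallest gap between the two lowest eigenvalues.

```python
import numpy as np

n = 3                      # a toy 3-qubit instance; real instances would be far larger
dim = 2 ** n

sx = np.array([[0., 1.], [1., 0.]])
I2 = np.eye(2)

def on_qubit(op, k):
    """Tensor a single-qubit operator onto qubit k of the n-qubit system."""
    out = np.array([[1.]])
    for j in range(n):
        out = np.kron(out, op if j == k else I2)
    return out

# HI: a transverse field whose ground state (the uniform superposition) is easy to prepare.
H_I = -sum(on_qubit(sx, k) for k in range(n))

# HF: diagonal energy penalties, one unit per violated constraint of a made-up instance;
# the unique zero-penalty string (index 5 here) plays the role of the solution.
penalties = np.array([2., 1., 3., 1., 2., 0., 1., 2.])
H_F = np.diag(penalties)

# Sweep H(s) = (1 - s) * HI + s * HF and record the minimum spectral gap;
# the adiabatic running time scales roughly like 1 / gap^2.
min_gap = np.inf
for s in np.linspace(0., 1., 201):
    evals = np.linalg.eigvalsh((1 - s) * H_I + s * H_F)
    min_gap = min(min_gap, evals[1] - evals[0])

print("minimum spectral gap:", min_gap)
```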
But we're actually going to get more information relevant to these questions very, very soon. You may have heard that there are now efforts underway all over the world to actually try to build these things. So Google is right now building superconducting chips with like 50 to 70 qubits. IBM is also building superconducting chips with about that number. There are people doing trapped ions, I think a few dozen qubits so far. There's a bunch of startup companies trying different approaches, photonic qubits; Microsoft is trying an approach that is speculative even by the standards of quantum computing, which is called topological qubits. They haven't managed to make even one of them yet. But if topological qubits can be made to work, some people think that that's really the way to scale it up.

But in any case, Google is going to try, as early as this year or so, to do the first demonstration of what's been saddled with the unfortunate name of quantum supremacy. It just means doing some well-defined task with a quantum computer much faster than we think you could have done the same thing with a classical computer. Notice that I did not say a useful task. In the beginning, these are very unlikely to be useful tasks. These are likely to be proving-the-point, or rubbing-the-skeptics'-faces-in-the-reality-of-quantum-mechanics, types of tasks: if you don't believe that quantum computing works, well, then you try to do this with your classical computer in anything like a similar amount of time. So a lot of the work that my group has been doing over the last decade has been exactly about the theoretical foundations for this kind of experiment. What should Google be doing once they get their chip working well enough, and the qubits are performing well enough? How do they test it? How do they know that they've done it? How do we know that it's hard to do the same thing with a classical computer? And then, if we can do this, is it immediately useful for anything? I have a proposal, actually: it looks like it would be useful for generating random bits and then proving to a skeptic that they really were randomly generated.

So I had a fifth section of the talk about the stuff beyond quantum computing. Unfortunately, I'm out of time now, so what I think I'll do is take questions, and if anyone wants to ask me about this stuff, then they can ask me about it. All right, let me stop there, thank you. Are there any questions? Yeah, yeah, yes.

Well, I mean, there are many questions that you could ask, right? Okay, all right, the question is why should we think that the main difficulty lies between NP and P, as opposed to just within P, basically, right? So the short answer is yes, there are tons of difficulties within P. I think I once used the analogy: putting a problem in P is like Achilles, or whoever, winning the Trojan War; then, like Odysseus, you still have to get home after that, right? There's a whole other book about that, right? So what's happened historically, in cases like primality testing, linear programming, Markov chain Monte Carlo algorithms, is that often a problem was first shown to be in P but with an exponent of six or something, six, eight, ten, right? Something fairly impractical like that. And then people whittled down the exponent, right? Because once you've got an n to the sixth algorithm, then usually, if you care enough, it can be n to the fifth, right?
And if you care enough again, it can be n to the fourth, and so on. This has been our experience, at least, in decades of algorithm design, okay? So once a problem is in P, the exponent is often kind of negotiable, right? I mean, there might be limits: there are problems that are conjectured to inherently require n squared or n cubed time, right? But I should mention that we don't know how to prove those kinds of statements either, right? And the difficulties in, let's say, proving that a problem inherently requires n squared time look very, very similar to the difficulties in proving that an NP-complete problem inherently requires time exponential in n, right? So whichever starting point you take, you're gonna end up facing the same difficulties anyway, if you wanna prove these things are hard. Yeah, there are such problems known, but the only problems known to require, say, n to the 100 time are extremely artificial ones that were constructed just for that purpose, okay? And the overwhelmingly more common experience is that once a problem is in P, then you can get the exponent down further and further and further, depending on how much effort you put into it. Anything else, yeah? Yeah, all right.

So, all right. Well, let me at least give you my jokey examples, all right? So my first one is the relativity computer, right? And this is, instead of the quantum computer, a much, much simpler idea: you leave your computer working on some hard problem, like an NP-complete one, you leave it on Earth, you board a spaceship, you accelerate to relativistic speed, you decelerate; now, in Earth's frame of reference, billions of years have passed, civilization has collapsed, if it didn't already collapse in 2021 or whatever, and all your friends are long dead, but if you can find your computer in the rubble somehow, and it's still running, you can get the answer to your hard problem, right? So, to me, this raises the question of why doesn't anyone try that, right? I mean, if you're worried about your friends, you could just bring them with you on the spaceship, right? But what we're really asking here is, is there anything deep in physics that would say that this doesn't work? I think the interesting answer is that there is, and it has to do with energy, right? How much energy would it take to accelerate to the relativistic speed that's needed? You can calculate, I'll assign it as homework maybe, that if you want an exponential speedup, you're gonna have to get exponentially close to c, and that's gonna take an exponential amount of energy; there's a rough sketch of the arithmetic below. Okay, and so then, are we just trading one resource for another? Like, you're gonna need exponential time just to fuel up before takeoff, right? Or else you're gonna need some very, very, very compressed fuel, but that brings up difficulties of its own, as we'll see from another example, which I call the Zeno computer. So people ask, why couldn't you just build a computer that does the first step in one second, the second step in half a second, the third in a quarter of a second, and so on, so that after two seconds, it's done infinitely many steps? Okay, and in some sense, you know, there are people who do try this.
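For the record, here's a rough version of that homework calculation (my own back-of-the-envelope sketch; the spaceship mass is a made-up figure). The point is just that a time-dilation factor of two to the n requires a Lorentz factor of about two to the n, and hence kinetic energy growing like two to the n as well.

```python
import math

m_ship = 1.0e5          # assumed spaceship mass in kg (arbitrary illustrative figure)
c = 3.0e8               # speed of light, m/s

for n in (10, 40, 80):
    gamma = 2.0 ** n                          # desired time-dilation factor, ~2^n
    shortfall = 1.0 / (2.0 * gamma ** 2)      # roughly 1 - v/c for large gamma
    energy = (gamma - 1.0) * m_ship * c ** 2  # relativistic kinetic energy, joules
    print(f"n={n}: 1 - v/c ~ {shortfall:.1e}, energy ~ {energy:.1e} J")
```

So: exponentially close to c, and exponentially much energy, which is exactly the trade-off just mentioned.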
There are people who overclock their microprocessor, right? They just run it faster and faster, but some of you might know the difficulty in doing that, which is that if you run your processor too fast, it's gonna melt. That's why computers have fans, okay? But again, we're asking, is there a fundamental physical barrier here? And I think the answer is, as far as quantum field theory can say, there is no barrier; you could just keep running things faster forever. Okay, but if you got down to, let's say, one step per Planck time, 10 to the 43 steps per second, then you might as well imagine that your computer is now a photon just bouncing back and forth between two mirrors that are one Planck length apart, 10 to the minus 33 centimeters, something like that. At this point, there's so much energy inherent in that computation that your computer exceeds its own Schwarzschild radius and collapses to a black hole, okay, which I've always liked as nature's way of telling you not to try something, right?

But, long story short, there are lots of other proposals in the literature for what are called hypercomputers, which could solve NP-complete problems or even the halting problem in finite amounts of time. My personal view is that all of these proposals just sort of get their purchase by ignoring quantum gravity, ignoring the existence of the Planck scale, where it seems that the continuum view of spacetime has to break down. And if you believe the results of Bekenstein and Hawking and so forth, then any bounded region of spacetime should be associated with only a finite number of qubits, a finite-dimensional Hilbert space. And if that's true, then that would take us back to the class BQP, or would seem to. So yeah, oh yeah, that's what a black hole looks like, if you haven't seen one.

Oh yeah, so even for quantum field theory, there are now some detailed studies of the computational complexity of simulating some interesting quantum field theories, such as the phi-fourth theory or topological quantum field theories. In every case that was well enough defined that people could actually analyze it, the result they got was that even quantum field theory can indeed be efficiently simulated by a standard quantum computer, so it's in BQP. So it's not enlarging your class beyond BQP. I should say that it is an open problem to extend that conclusion to the full Standard Model, okay? And I think chiral fermions are the issue there. Then quantum gravity, who knows? Okay, well, all right, I did have a slide about AdS/CFT; it doesn't seem to change the power of quantum computing, but can someone make that rigorous? I think that's a wonderful challenge for anyone into this stuff.

Yeah, absolutely, so, all right, I would say that in theoretical computer science we really love making excuses for failure, right? And we're really good at that, right? It started with: you wanna solve these NP-complete problems efficiently, and you can't do it, right? But then NP-completeness came along and said, okay, you don't have to worry. It's not because you, this or that individual, is just too stupid, right?
It's because your problem has now been plugged into this whole metropolis of problems, where just to solve your problem would require solving these 10,000 other hard problems as well, right? So that was a very good excuse, but then the question was, can we prove P is not equal to NP? And then for that too, we have excuses for our failure to do it, right? And then you can iterate that several more levels, right? So I mean, you know, theoretical,