All right. Thanks, Frederick. So, how many previous board lectures have there been, really just board lectures? Any? I'm the first? OK, good. So I'm giving one lecture, which has nothing to do with the other three sessions that I'm going to be running, and that's this one. It's going to be a high-level introduction to the basics of complexity theory and quantum complexity theory. Some of you may already know some of it, but I'm going to explain, in sort of physicist language, what P, NP, BQP, QMA, and NP-complete mean, and some of the ideas behind them. That has little to do with practical many-body computational physics, but I think it's interesting, and it's nice background to understand. The tutorials that I'm going to run will be in the lab, and they'll be much more hands-on, using Python for scientific applications. Mostly we'll explore a few different projects with it. The one this afternoon will start with the real basics of Python, syntax, indentation, things like that, which I'll try to go through quickly, because I know you guys have been using it for the last three weeks. These probably should have been scheduled earlier in the school. So I'll go through that quickly, and each session will end with some amount of time for you to try tasks which extend the notebooks that I'll work through with you in the first half. Those will be in the computer labs. Good. So with that, let me just start. This is my crash course introduction to this soup of P, NP, et cetera, for physics people, not computer science people, hopefully. I'm going to be less formal than I would be if I were really defining these things rigorously, but hopefully you'll get all the ideas, and we'll be able to do all of complexity theory in an hour, hour and a half. That's the plan.
So before I start, let me mention a couple of really good references for people who want to know more. A really great textbook, which is mostly classical, is Arora and Barak, Computational Complexity: A Modern Approach, from a couple of years ago. It's mostly, but not entirely, classical. There's a review article called Quantum NP: A Survey, from a few years ago, by Aharonov; that one, not surprisingly, is mostly quantum. It's on the arXiv. And there are some lecture notes which cover part of what I'm going to talk about, plus some other connections to statistical physics, which Roderick and I wrote a few years ago; those are lecture notes, 1635. Actually, I'm not going to talk so much about what's in those notes, but if you're curious, they're more about typical-case rather than worst-case complexity. So, forget quantum for a second. What is complexity theory? The goal is to classify the hardness of problems, which is a lofty goal, because you have to define what the problems are. It's not to classify the algorithms themselves; it's to classify the problems. And the primary method through which this classification works is that we classify them essentially by the resources required to algorithmically solve large instances of those problems. The bottom line is that, with the right definitions, which we'll go through, you get a robust classification scheme for how difficult the problems are, based on worst-case behavior, up to polynomial overheads. Hopefully all of this will become clearer if it's not already. Then, just historically: classical complexity theory as I'll explain it, what P, NP, et cetera are, really dates back to the 1970s; that's when the definitions of NP and NP-completeness come from. And the quantum variation of it is much more recent.
It starts in the late 90s and 2000s. And one nice thing about it is that, once you believe it, it provides arguments which limit the power of any kind of computer. So if you believe some of the hypotheses that we'll go through, then essentially you might say, I don't think quantum computers will be able to solve anything interesting. Or maybe you'll say, I think they will, but then you'll be flying in the face of 40 years of computer scientists giving up on hard problems. So that's the structure. In order to get into it, I have to define some things; it's the computer science way. I don't know what backgrounds you guys have, but computer scientists are very mathematical, a lot of them, so they like defining things, "there exists" and stuff like that. Physicists often don't like defining things; they just like doing stuff. But I'll try to thread the needle on this one. So, preliminaries. To simplify our lives, in order to formalize these definitions, the main complexity classes focus on what are called decision problems. So what is a decision problem? Say it again. Did you say yes or no? Yes. Decision problems are yes/no questions; they only have one of two possible outcomes. And let me put my example board over here. So what's an example? Well, one example is: does a Hamiltonian H have a state, or a ground state, with energy less than E? That's a yes/no question. Of course, you might say, well, what I really want to know is what the energy is, so why do I care whether I can answer if it's less than something? And that's a fair criticism. But in fact, what you can usually do, and the reason we can focus on these, is that if you have some magical box which can answer this question directly, yes or no, it's less than E, then you can just ask it: is it less than 100? It says no. Is it less than 1,000? It says yes. And you ask, is it less than 500?
And you can zoom in on what the actual energy is. So although this looks like a major limitation of the classification scheme, in fact, for most purposes you can do what's called decision-to-search in order to answer the question that you actually wanted to answer, like what the energy is to whatever accuracy you want, with no real additional overhead. It's not that hard to do a search by bisection. That's why we focus on decision problems. So that's an example of a problem. Decision problems have instances. Just because I don't want to write an instance of that one in particular, let's give another example. This one is called the divides problem, and the problem is: does an integer a divide b? That's the problem. And an instance could be: does 5 divide 15? Pretty clear. OK, so we have problems, and we have instances, which are specific instantiations of the problem. And instances have a size. The size of an instance is the number of bits or symbols; you can use an alphabet with more than 0s and 1s if you want, that'll only lead to constants in your formulas. The size of an instance is the number of bits needed to specify that instance. So what's the size here? Well, I could specify this instance, assuming we're working in the framework of solving divides problems, by telling you a pair, 5, 15. And how many symbols do I need? I have a mic; you guys don't have a mic, so you have to speak a lot louder. Three figures. I like three figures. I might need to tell you the comma as well, just to tell you where they're separated, otherwise it could just be a sequence of digits. So maybe I could say the size of this is 4, or even 6 if I include the parentheses just for fun, though they don't tell us very much. Let's say 6 symbols from some alphabet. I said number of bits over here; of course, if I'm using a computer, I could encode this with ASCII, for example.
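The decision-to-search idea can be sketched in a few lines of Python. The `oracle` below is a made-up stand-in for the magical yes/no box ("is there a state with energy less than e?"); bisection then recovers the ground-state energy itself with only logarithmically many yes/no queries.

```python
# Sketch: "decision to search" by bisection. The oracle is a
# hypothetical black box answering the decision problem only.

def ground_energy_by_bisection(oracle, lo, hi, tol=1e-6):
    """Pin down the ground-state energy to within `tol` using only
    yes/no answers. Each query halves the interval, so we need
    O(log((hi - lo) / tol)) calls -- only polynomial overhead."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if oracle(mid):      # yes: some state has energy < mid
            hi = mid
        else:                # no: every state has energy >= mid
            lo = mid
    return (lo + hi) / 2

# Toy oracle for a system whose true ground-state energy is -3.7:
oracle = lambda e: -3.7 < e
print(ground_energy_by_bisection(oracle, -100.0, 100.0))
```

The returned value agrees with the true energy to the requested tolerance, which is the "no real additional overhead" claim in action.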
And that would be 8 bits per symbol, so 48 bits. But this compared to that is just a constant factor. So what really matters is the number of digits in the specification, and in general the number of digits, up to this constant, so I'll put an O there to represent that constant, is what in terms of a and b? What's the number of bits I need to describe an integer a? Log a, plus 3 or something. But the big part is the log. As I make the instance bigger, this is how I measure it. So n is this thing, which is log a plus log b. An algorithm is a computational recipe or procedure for solving or deciding instances. And we're going to measure the resource consumption, oh, I'm almost out of space, can you guys see down here? We measure efficiency by the scaling of, say, time or memory with input size. So let's think of an algorithm for divides. Does anybody know one? Suppose that I gave you a particular instance. I'll do it over here, give myself all the space. So here's a question: does 63 divide, I picked a big number, 321993? How can you answer that question? Hm? We can start to divide it. OK, so how do we divide it? The algorithm, I think you all know; it has a name. It's called long division. It was one of my favorites in lower school. And what do we do? We have 63, and we want to divide it into 321993. I know, it's amazing how age decays your memory of how to do these things. No, you divide a into b. OK, I heard a 5; I'll take a 5. Where do I write it? Up above the 1. OK, there's 5. And then what do I do? I multiply, right? I take that, I multiply it here, I get 315. And then I draw a line and I subtract, and I get 6. Then I pull down the 9, and I go, hm, how many times does 63 go into 69? And I say, oh, 1, right? So 1 times 63 is 63, and I subtract, and I get a 6.
And I pull down the next 9; same thing, that looks like another 1. Then I pull down the 3: 63 goes into 63 once, I multiply, subtract, and get 0. No remainder. So does 63 divide 321,993? Yes. OK, so this algorithm is called long division, and the reason I did this out for you is that you can see something from it. How many resources did it consume? Well, for one thing, it mildly bruised my knees, but that doesn't go into the asymptotic estimate. So what goes in? Let's look. How many numbers did we have to write? Going this way is easy: how wide is this table I made? It's 6 digits, which is log b. And if you look at it, you'll see that basically, for each of those digits, I got another line or two in the vertical direction, which means that this is also log b tall. But it's also kind of a diagonal table; there's nothing out here. In the language of our tutorial for later today, this is a sparse array. This distance is log a, because each row is basically one digit times something with log a digits, so I always get something with log a digits; that's why it's log a wide. So what's the resource consumption? Something like: I had to do log b steps, and in each of the log b steps, I had to do log a work to get each of the digits. So this is O(log a · log b). And we just say for simplicity that this is O(n²), because if we say the two numbers have about the same number of digits as the input size n, this would be n². Of course, we have a more detailed estimate, but it's no worse than the square. So that's not a terrible algorithm; that's why we can learn it when we're pretty young. And this tells me that the divides problem, because this is only polynomial, is easy. We'll define that in a second: the decision problem divides is easy. But let me just point out that that's because I showed you an algorithm which did it.
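The long-division count above, log b steps of log a work each, can be made concrete with a little digit-by-digit sketch in Python (the function name is mine; real code would of course just use `b % a`):

```python
def divides_long_division(a, b):
    """Does a divide b?  Digit-by-digit long division in base 10:
    one pass per digit of b (log b steps), and each step does
    arithmetic on a remainder smaller than 10*a (log a work),
    so the total work is O(log a * log b) ~ O(n^2)."""
    remainder = 0
    for digit in str(b):                    # left-to-right over b's digits
        remainder = remainder * 10 + int(digit)   # "pull down" the digit
        remainder -= (remainder // a) * a         # subtract largest multiple of a
    return remainder == 0                   # no remainder means a divides b

print(divides_long_division(63, 321993))   # -> True
print(divides_long_division(63, 321994))   # -> False
```

Each loop iteration is exactly one row of the board calculation: pull down a digit, find how many times 63 goes in, subtract.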
I could show you a really stupid algorithm. Let's do it; I like stupid algorithms. So does anybody know another algorithm for answering this division problem? Exactly. That silly algorithm is also known as guess and check. I kind of feel like I should write this backwards, like Toys R Us; the R is always backwards. Anyway, guess and check. So what do I do? Well, I'd say, 63 times 1 is 63. 63 times 2 is 126. 63 times 3, I can't even do that, that's 189. And eventually, I would discover that 63 times 5,111 is 321,993. And because I hit it exactly, I would stop and say, aha, it divides. And if I went over, I would stop and say, well, it can't be any bigger, and I didn't hit it, so it doesn't divide. So I can answer the question from this. How much work is that? This distance is basically b divided by a: I have to do it roughly b divided by a times if I do it as dumbly as this, and I'll put an O there. And b divided by a, in terms of the input size, which is logarithmic, is exponential in n, with some logs of a and b and so on, but it's exponential in n. If I went through carefully, I'd get some n dependence in front, but the exponential is clearly the most important part. So, the point of this exponential: the long division I managed to do on my knees at the board; this one I only managed by the magic of being a lecturer who can draw little dots. The main point is that this will take a very long time. Despite the fact that this algorithm exists, it doesn't tell me anything about how hard the problem is. It's the fact that I showed you the first algorithm which tells me that divides is an easy problem. That's how I'm going to make my classifications. So let's go from the ridiculous back to the sublime, or maybe it's also ridiculous.
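For contrast, here is the silly guess-and-check algorithm as a sketch. It gives the same yes/no answer as long division, but the loop runs about b/a times, which is exponential in the input size n ~ log a + log b:

```python
def divides_guess_and_check(a, b):
    """The 'silly' algorithm: build up a*1, a*2, ... until we hit or
    pass b. Roughly b/a iterations -- exponential in the number of
    digits of the input, even though each step is trivial."""
    total = a
    while total < b:
        total += a          # next multiple of a, by repeated addition
    return total == b       # hit b exactly <=> a divides b

print(divides_guess_and_check(63, 321993))   # -> True (after ~5111 steps)
```

Both algorithms decide the same problem; only the fast one tells us anything about the problem's hardness.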
So the hypothesis, and I'm paraphrasing, it's actually the strong Church-Turing hypothesis, says: up to polynomial overheads, any reasonable classical computer can simulate any other. That's my paraphrase of the hypothesis. One comment: this can be, and has been, formalized. You tend to need things like Turing machines; it's usually stated in terms of Turing machines as a reasonable model of a computer. But that's not so important, actually. What's interesting about it is that, well, it's quite a strong hypothesis, but it tells you that as soon as you're willing to not worry about the difference between n and n², all computers are the same. This is a terrible attitude once we start doing numerics in the next session, but from the point of view of complexity theory, n and n² are the same thing. e to the n, however, the silly algorithm, that's exponential overhead, not polynomial; that's not the same. So if any computer can simulate any other, then if I can write an algorithm which is polynomial time on one computer, the other computer can simulate it in still polynomial time. That's why this is an interesting statement. But it is a hypothesis, so it can be checked in various models. And what I just said is that it makes the classification scheme independent of CPU or particular hardware. Your younger brother or sister can do this in polynomial time just like your laptop can; it'll be a big constant. And it also suggests, this is almost a physical statement, and there is actually a somewhat stronger way to state it which makes it a physical statement rather than just something about models of computation, which is to observe that ultimately anything which computes is a physical device. And we can, to some extent, view any physical device as something which computes: it goes from an input state to an output state.
We might not know exactly what problem it's solving, but it is computing in some sense. And so there is a physical, or quantum, Church-Turing hypothesis; these are two slightly different things, where you replace "classical" by either "physical" or "quantum". So I'm hedging a little bit. The simpler one is to say quantum: any quantum computer can simulate any other quantum computer. The physical one is to say, well, maybe there's even more than just quantum: any physical device can simulate any other physical device, up to polynomial overheads. It is, on the other hand, believed, and you guys just heard Matthias give a lot of lectures about this topic, I believe, that classical CPUs or computers cannot efficiently, where efficiently means up to polynomial overhead, simulate a quantum computer. Or, I don't know, maybe Matthias told you the opposite, since he made you simulate phase estimation or something like that with your classical computer. So do you believe this statement? Let's take a poll. Who thinks that classical CPUs can efficiently simulate quantum computers or quantum devices? And who thinks they cannot? Who has no opinion? So if you believe they can, or you believe they cannot, or you have no opinion, you're at least alive. And then there's a large number of you who are apparently dead, and I don't mean that in the Schrödinger's cat sense. So: it is believed that classical CPUs cannot efficiently simulate quantum devices. Why is this believed? Believed by whom? By most people, except maybe the D-Wave people. The reason it's believed is basically that I can't even write, or even read, forget about write, a wave function in polynomial time, because it's got exponentially many amplitudes. That's ultimately the simplest reason why we think this.
But of course, if it were true that classical things really can simulate quantum up to polynomial overhead, that would mean that wave functions are just a very, very poor way, our dumb, lost-in-the-darkness-of-the-21st-century way, of describing quantum mechanics, and there's a better way to do it which is more efficiently computable. I don't know; that's philosophy at the moment. All right. So with that background, let's go on to some classes. How long have I been talking, 45 minutes? Something like that, 40 minutes? OK, perfect. So let's start writing some complexity classes. Does anybody know what the first one I'm going to write is? Easy, which has an obscure name; it's a single letter. Yes, I believe that's the 16th letter of the alphabet. Anyway: P. So what does P mean? A decision problem is in P if there exists, a little backwards E, a polynomial-time algorithm for deciding it. So indeed, this is "easy". I'm going to put it here: here's P. And we have examples that we've already talked about. One example: divides is in P, and the algorithm is long division. That shows divides is in P. Another example, call it the configuration energy problem, it's a good enough name: does configuration sigma of this Ising Hamiltonian have energy less than E? Why is this in P? First of all, it's a yes/no question, so it's a decision problem. Why is it in P? You just check it. What does that mean? That's right. For divides, the algorithm was long division; here, the algorithm is arithmetic. Exactly right. I've given you the n spins, OK, so n is something a little bit dangerous in this lecture, because I said input size is n. But if you think of n as the number of spins in this Hamiltonian, then the input is: here are n values, here are n² couplings, and here is some number E that I might give you in some fixed precision, so that might be just another constant. And so what is the input size, actually?
The input size, with little n the number of spins, would be of order n², because the couplings are the biggest part; I'm ignoring all the linear pieces. And indeed, I can just evaluate this energy function and check that it's less than E with arithmetic, in polynomial time. OK, easy. What's the next class? Let me give myself a little more space; I'm just going to erase "easy". So: NP. What does NP stand for? There's a general murmur, but OK. It doesn't stand for "not polynomial". I heard some people say "nondeterministic", so I think some people already know what I'm talking about here. I personally think of it as standing for verifiable in polynomial time; and you'll notice the "in" has an N at the front, so that's where the N comes from in the acronym. So a decision problem is in NP if, and this is where it's a little bit more complicated, there exists a scheme by which yes instances can be verified as yes instances in polynomial time by a classical computer. The first observation about this is that it's actually a completely non-obvious definition, but we'll try to unpack it a little bit. The second is that it is asymmetric between yes and no. I need to be able to prove, if a particular instance has a yes answer, that it has a yes answer, efficiently. But if the answer is no, I don't need to be able to prove it. And that asymmetry is weird, but it is real, and a lot of problems are much harder if you say you need to prove no as well as yes; that's a much bigger complexity class. So there's a picture which goes with NP, which I think is very helpful. Let me put my comment here: one, there's an asymmetry between yes and no. Now that looks like "a symmetry"; OK, asymmetry between yes and no. And here's the picture; this picture is helpful. What do I need to have an NP problem? I need to have a game. So there's a guy.
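The "configuration energy is in P" claim is just arithmetic, as the lecture says. Here is a minimal sketch; the couplings `J` and the threshold are made up for illustration, and the quadratic form stands in for the Ising energy sum over pairs:

```python
import numpy as np

# Sketch of the configuration-energy decision problem: given an Ising
# Hamiltonian H = sum_{i<j} J_ij s_i s_j (the couplings here are random
# placeholders), deciding whether a GIVEN configuration has energy < E
# is plain arithmetic -- O(n^2) work for n spins, so the problem is in P.

rng = np.random.default_rng(0)
n = 8
J = np.triu(rng.normal(size=(n, n)), 1)   # couplings J_ij, i < j only

def energy(sigma):
    """Evaluate the Ising energy of one spin configuration."""
    return float(sigma @ J @ sigma)       # sum_{i<j} J_ij s_i s_j

sigma = rng.choice([-1, 1], size=n)       # one particular configuration
E = 0.0                                   # a made-up threshold
print(energy(sigma) < E)                  # the yes/no answer, in poly time
```

The point is that the decision is a single evaluation, no search over configurations is involved; that search is exactly what the NP discussion below is about.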
He's magical. He has a hat. Here he is. This guy often goes by the name of Merlin, although I have to caution you, there's another complexity class which is called Merlin-Arthur, MA, and this is not quite the same. But it's still roughly the same picture. So here's Merlin. He's magical. He has a magic wand; it produces little stars when he waves it around, and, more importantly, it produces a proof that a given instance is a yes instance. He can magic it up. And then there's a box, which is called a verifier. This is a poly-time classical computation, a classical computer or algorithm, and the box spits out accept or reject. The point is that if it is a yes instance, then there exists a proof that Merlin can come up with such that the verifier will accept. And if it's a no instance, then whatever Merlin says, however he tries to prove it to the verifier, the verifier will say, that's not a correct proof, I don't believe you, reject. That's what a verification scheme is. So, let me give one example and then answer your question. I'm going to go back up here; I'm used to having somewhat larger boards. What's my example of an NP problem? Does the ground state of H, this same Ising Hamiltonian that we were talking about, have energy less than E? OK. You're given this Hamiltonian; you want to answer this question yes or no. Suppose the answer is yes. If it's yes, then there exists a state, the ground state, such that the energy is less than E. And let me write it not so formally: if yes, then Merlin gives the verifier a state sigma with energy less than E. He can give the verifier that state. The verifier does the computation, in fact the same computation that we talked about a minute ago for P, and verifies that this state indeed has energy less than E. So that means he can prove it.
He can say, oh yes, of course this has a ground state with energy less than E; here's a state, check it yourself. If no, Merlin can't prove that, because there's no such state: no state has energy less than E. So any state that Merlin hands the verifier, the verifier will evaluate the energy and say, no, reject, it's not low enough. So there's a proof: when it's a yes instance, there's a proof that will be accepted, and everybody will agree it's a yes instance. When it's a no instance, nobody can prove it's one. Now, it could be that Merlin's just dumb, not as smart as we made him out to be, and he couldn't come up with that ground state. But that's not the statement. The statement is that there exists a scheme, because he's magical, by which he could prove it if it were a yes instance. OK. Question: what if the threshold is really close to the ground-state energy, so there's only one state below it? Look at this: I never actually asked what the state is. I just have to be able to say, yes, there is a state with energy less than this. And actually, to be in NP, I don't have to have a computation that can produce the state; I only need a computation which can check the state if someone else produces it for me. And what if Merlin had just used the ground state, you ask, and why can't we prove the no instances the same way? Because it's very hard to prove there aren't lower-energy states: you have to rule them all out, you have to say they're all higher. Here it's easy to prove that I've got a state below a threshold. It's not easy to come up with the state, but it's easy to check it if someone gives me the state. That's the point of the definition. There's another way to think of it, actually, which sheds some light on why it's such an interesting definition.
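The Merlin-and-verifier game for this Ising example can be sketched directly: the verifier is the poly-time energy check from before, and here brute force plays the part of the magical Merlin (the couplings and names are made up for illustration; real instances are exactly the ones where no one can play Merlin efficiently):

```python
import itertools
import numpy as np

# Sketch of the NP verification game for "does H have a state with
# energy < E?": Merlin hands over a candidate configuration (the proof),
# and the poly-time verifier just evaluates its energy.

rng = np.random.default_rng(1)
n = 6
J = np.triu(rng.normal(size=(n, n)), 1)   # placeholder Ising couplings

def verifier(proof_sigma, E):
    """Accept iff the claimed configuration really has energy < E."""
    return float(proof_sigma @ J @ proof_sigma) < E

# 'Merlin': for this tiny n we can afford brute force over all 2^n
# configurations to conjure up the lowest-energy state.
best = min((np.array(s) for s in itertools.product([-1, 1], repeat=n)),
           key=lambda s: float(s @ J @ s))
E = float(best @ J @ best) + 1.0          # a threshold the ground state beats

print(verifier(best, E))                  # -> True: the proof is accepted
```

Note the asymmetry: one accepted proof settles a yes instance, while a rejected proof settles nothing, since Merlin might simply have handed over a bad state.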
So let me just write a few comments. One: P is a subset of NP. Why? Because if something is actually easy to decide, the proof is just: go and decide it. You don't even need Merlin's help. So it's easy to verify anything which is in P. Therefore, as a Venn diagram, there's NP, and it's bigger than P; the verifier just decides, doesn't need Merlin's help at all. Two: anything outside NP, any decision problem outside NP, has a technical term, which I think might actually have first been expressed, in slightly different language, by Dante: hopeless. Why is it hopeless? That's right. I could be Merlin, I know I look just like him, and I could just say, oh yes, and then you guys would say, where's the beef? Anything outside NP is hopeless: you can't check it, you can't convince anybody. You could be right, but nobody cares what you think, because you can't convince them anyway. So NP, in that sense, is an enormous class. It includes any problem you've ever been asked on a homework assignment, because you were able to convince your teacher that you solved the problem. It's a really big class. Three, there is an open question: does P equal NP? This is a Clay prize question, Clay Institute, which means that if you solved this problem, showed it one way or the other, I think you'd get a million bucks. But actually, that would be small potatoes, because particularly if you showed P equals NP, you'd be able to solve all kinds of problems; you'd be worth a lot of dough, way more than a million bucks. That's because it would mean that for any problem in this enormous class, you would now have an algorithm which could efficiently decide it. That's why it's a big question. I'll just put over here: it's the most practical part of the talk. And then my last comment. What most people believe, in fact, is that it's not true.
So usually, you assume P ≠ NP, which is to say the Venn diagram looks like this, with P strictly inside NP. And then this assumption leads to hardness results: you'll be able to show that particular problems are hard to solve, that you can't create an algorithm that efficiently solves them, because if you could, the assumption would be false; P would equal NP. So because most people believe P isn't equal to NP, because that would be too good to be true, we believe there are lots of specific problems which are hard. Yeah? Your question, about factoring: that's right, let me come back to that. That problem, even if you could do it, wouldn't settle this question; factoring is not what's called NP-complete. I want to get to that by the end of class, and then we can discuss it. So now I have two choices. I can either talk about NP-completeness, which is where a normal classical complexity introduction would go next, but I think I'm actually going to define two more classes first, because you guys all like quantum mechanics. Actually, is that true? How many people like quantum mechanics? Yeah, it's a pretty harsh mistress sometimes. So there are two more definitions I want to give; these are quantum complexity classes. First, BQP. A decision problem L is in BQP if, what? Any guesses? There's a Q and a P, so that suggests it's just the quantum generalization of polynomial time. L is in BQP if there exists a poly-time quantum algorithm which decides L with bounded error; BQP stands for bounded-error quantum polynomial time. This is "quantum easy". Do I need the bounded error? Yeah, basically. In almost any quantum algorithm, I imagine that at the end of the day I'm going to do a measurement, and I'm going to get the correct answer with some probability. And when I say a bounded error rate, that means a bound on the probability that it is the correct answer, yes or no.
I'm just doing a measurement, yes or no, at the end. The probability that it's the correct answer is, say, at least two-thirds, and that means that if I run it multiple times and I keep getting yes, yes, yes, yes, maybe every so often a no, I can quickly convince myself, to any desired confidence, that the actual answer is yes, because of this boundedness. So the error rate has to be below, say, a third, something like that. If it were a half, it'd be useless: an error rate of a half means you might as well flip a coin; it doesn't tell you anything. But so long as it's bounded away from a half, you can run it multiple times to convince yourself, yes or no. That's why I have to include it in my definition. And I'll just write that: got to allow errors, because measurement is probabilistic; but because the error is bounded, repeating k times improves your confidence to any desired threshold, basically exponentially in k. So in particular, quantum computers can do anything classical computers can, since classical computers are also quantum; so BQP contains P. It is not clear what its relationship is to NP. Generally, we could draw something like this and say it's BQP. And some examples: the key example is factoring, which, OK, I'm not stating as a decision problem, but you can basically do it: does this number have a factor less than some threshold? And then you search on that, but whatever. Factoring is in BQP, and this is, of course, Shor's algorithm, 1994. This is what you commented on, where factoring sits. So factoring would be a point; it's a problem, and these classes are sets of problems, so it's a point on my Venn diagram. It is unclear whether factoring is here in P. It's certainly in NP, so it's somewhere in this box, and it's also somewhere in BQP.
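The "repeat and take a majority vote" argument for bounded error can be sketched numerically. The `noisy_machine` below is a made-up stand-in for a BQP machine that answers correctly with probability 2/3; the simulation just shows the failure rate of the majority vote becoming tiny:

```python
import random

# Sketch of error amplification: a machine correct with probability 2/3
# (bounded away from 1/2), repeated k times with a majority vote, fails
# with probability that drops exponentially in k.

def noisy_machine(truth, p_correct=2/3):
    """Hypothetical bounded-error machine: right answer w.p. 2/3."""
    return truth if random.random() < p_correct else not truth

def amplified(truth, k=101):
    """Run the machine k times and majority-vote the answers."""
    votes = sum(noisy_machine(truth) for _ in range(k))
    return votes > k // 2

random.seed(42)
trials = 1000
wrong = sum(amplified(True) is not True for _ in range(trials))
print(wrong / trials)   # a tiny fraction; it shrinks further as k grows
```

With an error rate of exactly one half the vote would be a coin flip and no amount of repetition would help, which is why "bounded away from a half" is part of the definition.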
It's unclear whether there's a classical polynomial-time algorithm, whether it's in P. That's not known. One more complexity class, and then I'm going to go on to completeness. So BQP was the quantum generalization of P; QMA is the quantum generalization of NP. Notice it actually got the name quantum Merlin-Arthur; that's literally what it stands for. So: a problem L is in QMA if, what do you think the definition is, there exists a scheme by which yes instances can be verified by a poly-time quantum computation, with bounded error. In this case there are details about one-sided versus two-sided error, which are not so important for what we're talking about. And here the picture looks almost the same as before. Here's Merlin again, now with more of a conical hat, his gown, his little feet, and he has a magic wand. The difference now is that, in addition to producing little stars, his magic wand produces a proof ket. And his proof ket goes into, I suppose I could draw Arthur, but Arthur is a quantum computer in this case, so I think I'll just draw a box: a quantum verifier, at the end of which I will do a measurement, and this measurement outcome will be accept with some probability. That's, again, why you need bounded error rates. You can think of this quantum verifier as just being a big unitary, a quantum circuit, and at the end I measure one qubit of the output. Perfectly reasonable. So QMA lives out here. And this is just the obvious inclusion: everything that you can do in NP, you can do in QMA, because a classical verification circuit is also a quantum one; it has no error rate, and you can just give it a classical basis state as the proof. So that's simple. So whether BQP equals QMA is analogous to the P versus NP question. Whether BQP contains NP is also a big question; that's what D-Wave would like.
And everybody who wants to build one quantum computer to rule them all — that would be a big deal. But actually, these are the relationships as they're understood right now and mostly believed, I think. The relationship of BQP to P is, I think, a separation, but only under the assumption that P and NP are different. So that's the smorgasbord: we have these four classes. How much time do I have? 20 minutes. So any questions about that? Because I'm about to shift a little bit. Yeah? [Audience asks why hardness is hard to prove.] Yeah, it's generally hard to prove, because it means showing that out of all possible algorithms, none of them are good. For very unstructured problems, you can show lower bounds. Things like the Grover problem: someone gives you a telephone book and says, find the entry with this phone number — is it in there? You basically have to go through and just look. Classically, you can prove a lower bound that you need time linear in the size of the phone book, which, of course, is a big thing. So that's a lower bound. But there aren't very many, and that's a very unstructured problem — it's searching for the needle in a haystack. Yes? [Audience:] Just as there are problems believed to be in NP but not in P, are there problems believed to be in QMA but not in BQP? Yes, and yes. And I'm going to tell you a problem which is believed to be in NP but not P, which you've probably all heard of before: satisfiability. That's what I'm going to talk about now. And then there's a direct generalization of that, which is the prototype for QMA. In fact, there are lots of problems believed to be in NP but not in P — believed, not proven, because we don't know that P doesn't equal NP; if someone proved P equals NP, everything would collapse.
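The phone-book example can be written down in a few lines. A classical machine that can only query entries one at a time has no better strategy on an unsorted book than a linear scan, so in the worst case it makes n queries on a book of size n (Grover's quantum algorithm brings this down to roughly the square root of n queries). This sketch and its names are mine:

```python
def linear_search(book, target):
    """Scan an unsorted 'phone book' entry by entry, counting queries made."""
    queries = 0
    for index, number in enumerate(book):
        queries += 1
        if number == target:
            return index, queries
    return None, queries

book = [7, 3, 9, 1, 5]
print(linear_search(book, 5))   # target is the last entry: (4, 5), all 5 queried
print(linear_search(book, 8))   # target absent: (None, 5), still all 5 queried
```

The lower bound is the statement that no cleverness can beat this: with no structure in the book, any algorithm that queries fewer than n entries can be fooled by an adversary hiding the target in an unqueried slot.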
And there are a bunch of problems which have a similar status here: known to be in QMA but believed not to be in BQP or P. So that's exactly what I want to talk about next. [Audience asks about memory.] Yes, that's a great question — a very practical question, actually. So I've said everything is efficient up to polynomial overheads. I also said "reasonable" somewhere: let's say that addressing a piece of memory takes time of order one. Then if I have polynomial time, I can't use more than polynomial memory, because I can't even look at more than that. And that's actually part of the reason that you can't give a quantum state to a classical computer: you can't even look at the whole thing in polynomial time. So I didn't have to tell you that you aren't allowed exponential space — even if you were given exponential space, you wouldn't have enough time to look at everything. Because we've agreed to these polynomial overheads, that constrains everything. Now, you can make much finer distinctions, and people do. I've just told you the four very top-level complexity classes; you can subdivide and subdivide. That's not really what I specialize in, so I don't want to tell you about it. But in practice, we want to do computations that actually tell us answers, and for practically efficient algorithms, it's very important how much space they use in your computer — and also, the polynomial overheads are kind of rubbish. So this is important to keep in mind. There's a beautiful theory, and that's what I'm telling you, and it's one of the foundations of computer science. But what's the difference between n and n squared — forget about general polynomial overheads — in terms of an algorithm? Here's what it means.
It means you take the computer you're running your algorithm on, take every transistor inside it, and turn each one into another computer. That's what you would have to do to run your n squared algorithm in the same time on that machine. So all of this "efficient, blah, blah, blah" is only efficient in a very impractical sense. Exponential really does separate from polynomial — that's still true. In practice, with most algorithms people use, the powers — linear, n log n, n squared — are very important, and all the useful algorithms have very small powers. And usually, moving between this kind of computer and that kind of computer, you can basically keep the exponent the same; you might have to go from n to n squared if you have linear-access rather than random-access memory, but usually you don't have to change it. So the theory is useful, but this picture — that going from n to n squared means turning every transistor into a computer — is a good thing to keep in mind when you're studying complexity theory. OK, so let me tell you about problems that are believed to sit here — in NP, but not P — and why they're believed to sit there. Hopefully I can do this in the last 10 minutes. I won't be able to prove it to you, but I'll be able to give you the idea. The key idea is something called NP-completeness, or generally being complete for a class, and these ideas are associated with Cook and Levin. I might get to describing their theorem by the end of the lecture. The key notion is a reduction, so let me spell it out. Problem A reduces to problem B if there is an efficient algorithm that takes instances of A and turns them into instances of B. So what does this imply? This implies A is no harder than B.
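To make "an efficient algorithm that takes instances of A to instances of B" concrete, here is a sketch of the standard reduction from graph Q-coloring to satisfiability — my own minimal encoding, not one given in the lecture. One Boolean variable per (node, color) pair, a clause saying each node gets at least one color, and clauses forbidding the two endpoints of an edge from sharing a color. The mapping itself runs in polynomial time; only the toy SAT solver at the end is exponential.

```python
from itertools import product

def coloring_to_sat(edges, n_nodes, q):
    """Map a Q-coloring instance to a SAT instance, in polynomial time.
    A literal is ((node, color), want): variable (node, color) must equal want."""
    clauses = []
    for v in range(n_nodes):
        # node v must receive at least one of the q colors
        clauses.append([((v, c), True) for c in range(q)])
    for u, v in edges:
        for c in range(q):
            # nodes u and v must not both take color c
            clauses.append([((u, c), False), ((v, c), False)])
    return clauses

def sat_by_brute_force(clauses, n_nodes, q):
    """Exponential-time satisfiability check, just to close the loop."""
    variables = [(v, c) for v in range(n_nodes) for c in range(q)]
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(any(assignment[var] == want for var, want in clause)
               for clause in clauses):
            return assignment
    return None

# A triangle is 3-colorable but not 2-colorable, and the SAT images agree.
triangle = [(0, 1), (1, 2), (2, 0)]
```

So a SAT solver would decide Q-coloring through this map, which is exactly the sense in which coloring is "no harder than" SAT.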
Because if I had a decision procedure for B, I could take my problem A, map it through this reduction, run my decision procedure for B, get an answer, and then say, oh, I know the answer for A. So that means A is no harder than B: an algorithm for B solves A too. Another way to say that is B is at least as hard as A — that's the contrapositive reading: A ≤ B, B ≥ A. That's all I'm writing here. Sometimes it's slippery, which is why I'm writing it out. If we believe A is intractable, so is B. A problem is NP-complete, or complete for the class NP — we can generalize this definition to other complexity classes — if: one, it is in NP (it has to be in the class itself, so efficiently verifiable); and two, all problems in NP reduce to it. That's why it's called complete. It means that if I have an algorithm which can solve an NP-complete problem A, then I can take any problem in NP, map it to A, run my algorithm, and solve that problem. So this NP-complete problem captures the worst-case difficulty of the entire class NP. Let me write that again just to be really clear: any algorithm solving an NP-complete problem effectively solves all NP problems. This sentence is basically why people don't think P equals NP. And let me show you that there are NP-complete problems. So an NP-complete problem, if you don't believe P equals NP, is hard: you won't be able to come up with an algorithm that solves it. So, the classic canonical example. Actually, I was going to show you the proof in pictures, but it's a little too long for me to do right now. So let me just list a few NP-complete problems. The canonical example is something called 3-SAT, or 3-satisfiability. Well, actually, let me not define it for a second; I'm going to tell you some others first.
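The "efficiently verifiable" half of the definition is easy to see in code. Here is a minimal sketch, with encoding conventions of my own choosing: checking a proposed assignment against a 3-SAT formula takes time linear in the formula, while the only obvious way to decide satisfiability without a certificate is to try all 2^n assignments.

```python
from itertools import product

def verify_sat(clauses, assignment):
    """The NP verifier. A clause is a tuple of literals: literal +i means
    'variable i is True', -i means 'variable i is False'.
    Accept iff every clause contains at least one true literal."""
    return all(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

def decide_sat(clauses, n_vars):
    """Deciding with no certificate: exhaustive search over 2**n_vars."""
    for bits in product([False, True], repeat=n_vars):
        assignment = dict(enumerate(bits, start=1))
        if verify_sat(clauses, assignment):
            return assignment
    return None

# (x1 or x2 or not x3) and (not x1 or x2 or x3)
clauses = [(1, 2, -3), (-1, 2, 3)]
```

Verification touches each literal once; the decision procedure doubles its work with every extra variable. That gap between checking and finding is exactly what the P versus NP question asks about.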
They might be more familiar. So: ground-state energy less than E for Ising Hamiltonians. In more frustrated-magnetism language, there's graph Q-coloring: can you color a graph with Q different colors such that no two neighboring nodes have the same color? That's the question. In physics language, this is the Q-state antiferromagnetic Potts model, if that helps you — it's the same thing. Graph Q-coloring; ground state for Ising; traveling salesman — everybody's heard of that one: is there a route for a traveling salesman through some particular set of cities with length less than some amount? All of these problems are NP-complete. So that means, if this is what we believe, that there's no efficient completely general-purpose algorithm that solves them. How does the proof that these are NP-complete go? Well, first of all, they're all in NP. That's easy to check: you just have to exhibit a verification scheme, and that's usually clear. Provide a route, check its length. Provide the coloring, check it doesn't violate the constraints. Provide the state, check that the energy is less than E. The completeness is harder, and the starting point is usually satisfiability: what you need to show is that every problem in NP reduces to satisfiability. And let me just see how quickly — no, I really can't do it. OK, so let me just stop now. I'll take a few minutes before we start talking about Python, at four or whenever we start again, and just draw the picture, because I think it's nice to see. But let me not go into the coffee break.
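For the Q-coloring example specifically, the "provide the coloring, check it" step looks like this — a sketch with names of my own; in Potts-model terms it checks that a spin configuration has zero antiferromagnetic energy:

```python
def verify_coloring(edges, coloring, q):
    """Check a Q-coloring certificate in time linear in the graph:
    every node uses one of q colors, and no edge joins two equal colors."""
    colors_ok = all(0 <= c < q for c in coloring.values())
    edges_ok = all(coloring[u] != coloring[v] for u, v in edges)
    return colors_ok and edges_ok

square = [(0, 1), (1, 2), (2, 3), (3, 0)]
triangle = [(0, 1), (1, 2), (2, 0)]

print(verify_coloring(square, {0: 0, 1: 1, 2: 0, 3: 1}, q=2))   # → True
print(verify_coloring(triangle, {0: 0, 1: 1, 2: 0}, q=2))       # → False
```

The check is trivially polynomial; what is believed hard is producing a valid coloring, or certifying that none exists, for a worst-case graph.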