I'm pleased to introduce a special guest, Scott Aaronson from the University of Texas at Austin. Scott is a recipient of the Waterman Award and a Simons Investigator. He's a leading researcher in quantum computation, and he's very kindly agreed to come give us a talk on the subject, which seemed appropriate for our general interests.

Well, thank you, Rafe, and thank you for inviting me to my first PCMI in Park City, surely not the last. I did tell the organizers when they invited me that I have no particular expertise in QFT or bundles or categories or three-manifolds, or really any dimension of manifold; I'm a theoretical computer scientist. They said, that's okay, we just want your usual quantum computing talk with the jokes. So I asked, well, what's the level of the audience? And they said, oh, everything from high school teachers to active researchers in quantum field theory. So I said, okay, I think I've got that. Let's see what we can do.

This slide is one of the first things you get if you do a Google image search for "quantum computer." Now, I'm very far on the theoretical end of this field, and I don't actually build them, but even I am pretty sure that's not what they look like. We'll be less interested in what they look like, though, than in what they can and can't do, and in the connections to many different issues in physics.

To start talking about that, I first have to explain the basic concepts of theoretical computer science, which takes three slides. You're all studying far more advanced material, so this should not be so bad. By a "problem" in computer science, we mean an infinite set of questions. For example: given an integer, is it prime or composite? Or: given a list of locations of stars,
can you reach all of the stars by moving your telescope only this much? You may recognize that as a version of a famous problem called the traveling salesman, or salesperson, problem. Then any specific number whose primality you're asked to test, or graph whose crossing number you're asked to compute, or manifold whose homology groups you're asked to compute, is an instance of the general problem. We typically measure the size of an instance by, for example, how many bits it would take to specify it.

We're interested, first of all, in which problems are computable: for which problems is there a general algorithm that will solve any instance and terminate in a finite amount of time? That can already be a very, very difficult question. For example, it's known to be decidable whether two knots are equivalent, and that's a very nontrivial result. But in computer science, for the last half century or so, we've wanted to do much better than that and ask which problems have efficient algorithms. Roughly, we mean algorithms that are much, much better than the doofus brute-force approach of trying every possible solution one by one until you find one that works, which, as the instance size gets larger, rapidly takes much longer than the age of the universe, even if the problem is technically computable.

Our rough-and-ready criterion is that we want algorithms that run in polynomial time, meaning they use a number of elementary steps that grows at most like n, the size of the instance, raised to some fixed power; at most as a polynomial in n. Obviously n to the 10,000 is not very efficient in practice,
and 1.001 to the n would be much better in practice even though it's formally exponential. But asymptotic behavior is maybe the first question you can ask, and it's usually, though not always, well correlated with whether your problem is tractable in practice.

So we have a universe of classes of problems solvable with different resources, and one of the main tasks of theoretical computer science is to understand these classes and how they relate to each other. The most basic class is called P, for polynomial time: the class of all problems (for simplicity, say decision problems) that have polynomial-time algorithms on a standard deterministic digital computer, like the one in your pocket. The next famous class is called NP, for nondeterministic polynomial time. Physicists have better names for things: Big Bang, Black Hole. We're stuck with "nondeterministic polynomial time" and so forth, although compared to SO(3) and the like, I think these names are just fine.

NP contains all the problems where, if the answer is yes, then there's a short proof of that fact that can be efficiently checked, meaning checked by a polynomial-time algorithm. A famous example is factoring. To phrase factoring as a decision problem, I could give you a positive integer, say that one on the slide, and ask a yes-or-no question, like: does it have a factor ending in seven? In this case, I believe the answer is yes. If any of you want to check that in your head, in case I made a mistake, you're welcome to.
In any case, the point I want to make is that we do not today know an efficient algorithm, at least for a classical computer, to solve factoring. The fastest known algorithm takes time that grows roughly exponentially with the cube root of the number of digits. At the very least, that's the best publicly known algorithm, and from the Snowden revelations there are some indications that even the NSA doesn't know much better than that; or if they did, they'd be doing something different from what it looks like they're doing. So we don't know whether factoring is in P; that's a famous unsolved problem. But if the answer to a question like this one is yes, there is such a factor, then it's very easy for some wizard to prove that to you: they just show you the factor. You plug it into your computer, and while factoring might or might not be hard, division is certainly easy for a computer, so it just checks the factor. That's NP.

The next really important concept is NP-hardness. Imagine you had a magic box for solving a given computational problem. The question we then ask is: using that box, what other problems could you solve? Which problems are easy relative to the box, meaning solvable in polynomial time with the ability to make calls to it? Following Alan Turing in the 1930s, we call such a box an oracle. Now, if an oracle for some problem would make all the NP problems easy, we call that problem NP-hard. Informally, NP-hard means at least as hard as every problem in NP. And NP-complete just means the intersection of NP-hard with NP itself. So the NP-complete problems are the maximally hard problems in NP: the NP problems that are at least as hard as any other NP problem.
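To make the "easy to check" half of NP concrete, here is a toy verifier for the decision problem from the factoring example: "does N have a factor ending in seven?" This is my own illustrative sketch, not code from the talk; finding such a factor may be hard, but checking a claimed one is a single division.

```python
def check_witness(n: int, claimed_factor: int) -> bool:
    """Polynomial-time verifier: accept iff the witness really works."""
    return (
        1 < claimed_factor < n
        and n % claimed_factor == 0      # division is easy for a computer
        and claimed_factor % 10 == 7     # and the factor ends in seven
    )

print(check_witness(91, 7))    # True: 91 = 7 * 13, and 7 ends in 7
print(check_witness(91, 13))   # False: 13 divides 91 but ends in 3
```

The asymmetry on display, where the verifier runs in polynomial time no matter how the witness was found, is exactly what puts the problem in NP.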
Now, just from these abstract definitions it's not obvious a priori that any NP-hard or NP-complete problems even exist. The big discovery in theoretical computer science in the early 1970s, the one that really launched it as its own field, was that literally thousands of problems of practical importance turn out to be NP-complete. Given a set of suitcases, can they fit in the trunk of your car? That may be one you've encountered. Or, given these constraints, can you schedule airline flights that make this much profit without planes crashing into each other? Super Mario and Minesweeper are known to be NP-complete, if you want the really practical ones. There are problems from chemistry, condensed matter physics, finance, economics. I won't say every field; I don't know about art history, but give it time.

In fact, when you meet a hard constraint satisfaction problem, a problem where you have a bunch of possibly conflicting constraints and you want a setting of variables, discrete or continuous, that satisfies as many of them as possible, the good rule of thumb is that it will be NP-hard unless it has a good reason not to be. What this means is that all of these problems, which on their face look totally different, are related. Here's another example: I give you the graph of who's friends with whom on Facebook; can you find 500 people who are all friends with each other? That's called the clique problem. Or I give you a piece of code; verifying that the code is free of bugs can typically be reduced to an NP-complete problem called SAT, for satisfiability. All of these problems, although they come from completely different domains, are NP-complete, and that tells you that in some sense they're all the same problem.
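The clique problem just mentioned has the same check-versus-find shape. Finding 500 mutual friends in a huge graph may be hopeless, but checking a proposed group is a quick pairwise loop. Here's a toy sketch of that verifier (the names and data are my own invention, not from the talk):

```python
def is_clique(friends: set, group: list) -> bool:
    """Accept iff every pair of people in `group` is a friendship edge."""
    return all(
        (a, b) in friends or (b, a) in friends
        for i, a in enumerate(group)
        for b in group[i + 1:]
    )

friends = {("alice", "bob"), ("bob", "carol"), ("alice", "carol"), ("carol", "dan")}
print(is_clique(friends, ["alice", "bob", "carol"]))   # True: all three pairs are friends
print(is_clique(friends, ["alice", "bob", "dan"]))     # False: alice-dan is missing
```

For a group of k people this checks about k squared pairs, comfortably polynomial, even though the number of candidate groups to search through is astronomical.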
An efficient algorithm for any one of them would give efficient algorithms for all the rest. By the way, the question of whether two knots are equivalent is one of the ones whose status no one knows. So, a good example of an NP-complete problem: I give you a graph like this one and ask, is there a tour that visits each city exactly once? Here is such a tour; it's easier to see that it's there when it's highlighted in green. Once it's there, it's easy to check. And it turns out that for any other NP problem, like finding the factors of a 2000-digit number, it's possible to construct a graph such that if someone found a Hamiltonian cycle in that graph, a cycle that visits each vertex exactly once, then from it you could read off the factors of your number. That's what it means for Hamiltonian cycle to be NP-complete.

Here's a little map of how these things fit together. P contains most of what we do with our computers on a day-to-day basis. NP contains a huge amount of what we would like to be able to do with our computers. And the NP-complete problems are this giant region at the top of NP. You may have heard of the question of whether P and NP are equal, and of the fact that it hasn't been answered; we'll come to that. I also want to point out that there are many interesting problems in NP that are not known either to be in P or to be NP-complete. Factoring is a famous example; also graph isomorphism, knot equivalence, problems involving lattices in R^n, and a bunch of others. These intermediate problems tend to be extremely important for cryptography, and also for quantum computing, as we'll see.

The quantum generalization of the class P, and we'll talk about quantum computing soon, is called BQP: bounded-error quantum polynomial time.
I drew it with a wavy border because, of course, everything quantum is spooky and weird. This is a class that we know contains P; we don't know its relationship with NP. But one of the biggest discoveries in the history of computer science, I think, came 25 years ago, when Peter Shor discovered that the factoring problem is in BQP: a quantum computer can efficiently factor numbers, and thereby break much of the world's cryptography. As you can see from the picture, we do not currently know whether quantum computers can solve NP-complete problems, and that's one of the questions that will interest us.

If I had to pick a single scientific question that maximally ties together all the different things I care about, I think I'd pick: is there any physical means whatsoever to solve the NP-complete problems, or equivalently to solve any NP problem, in polynomial time? Or let's say efficiently. What I love is that this single question contains within it at least five sub-questions, any one of which could occupy someone for their whole career.

First, there's the famous question: does P equal NP? That is, could there be a polynomial-time algorithm, running on a conventional computer, that just solves all the NP-complete problems? Nobody has proved that impossible, so that's already one question. Now, that's a purely mathematical question, and there's an enormous amount to be said about it, but for me personally it becomes maybe even more interesting when we bring some physics into the picture. Then we have to confront other questions: does nature give us computational resources that might go beyond the mathematical class P?
You could define P in many ways: using Turing machines, which Alan Turing invented in 1936, or equivalently in terms of programs written in your favorite programming language, whatever it might be, or cellular automata. Just about any model of computation that is discrete, digital, and deterministic gives rise to the same class P; it has that kind of universality property. But who says nature has to be discrete and deterministic? In fact, we have some indications to the contrary.

That brings us to the next question. Systems in nature like to sit in their ground state, their lowest-energy state, but minimizing energy is a perfect example of what's typically an NP-complete problem. So does nature just magically make NP-complete problems easy? Can it do things that are exponentially hard for digital computers, even just in classical physics? And if not in classical physics, what about quantum mechanics? You've surely heard or read something about quantum computing; how does that change the picture? I'll try to sort out fact from fiction here. Unfortunately, many of the popular articles say things about what quantum computers would be able to do that are very exciting and appealing, and wrong, just uncontroversially so. So we'll talk about the current understanding of what a quantum computer would be able to do.

And then of course there's the question: regardless of what they can do, can they be physically realized? Can a scalable quantum computer actually be built in our world?
That's a question for engineering, but maybe for fundamental physics as well, and I'll say a little about it. And then maybe my favorite question: is quantum computing necessarily the end of the line? Could nature, via quantum field theory, via quantum gravity, via some other physics we haven't yet thought to incorporate into our models of computation, take us even beyond BQP? We're really asking: whatever the universe can do, call it "universe-P"; does that class contain NP?

The reason NP-completeness is so important to this discussion is that without it, you could have imagined a priori an enormous zoo of different, incomparable hard problems. For each one you could ask, can we do it with a classical computer? With a quantum computer? But there wouldn't be any interesting general principles to be found. What NP-completeness says is that, yes, there are little isolated villages of hardness, but there's also this gigantic metropolis of hardness: the NP-complete problems. So it really makes sense, at least as a first pass, to focus on how high we can get within NP. Can we get all the way up to the NP-complete problems? I didn't mention this explicitly before, but there are also many computational problems that are not in the class NP at all. The halting problem, whether a given computer program halts, is a good example. Three-manifold homeomorphism is not known to be in NP, as far as I know; it might or might not be.
So there are problems that don't even have efficient witnesses for a yes answer, but we'll be interested in the ones that do. What I'm going to do in the remaining time is proceed through these questions and make some remarks on each one.

Let's start with the famous question: does P equal NP? You can tell it's an important question because it's appeared on both The Simpsons, if you squint, and Futurama, and actually several other TV shows that were less good. It's also one of the Clay Millennium Problems, the seven problems with a million-dollar prize, which include the Riemann hypothesis, the Yang-Mills mass gap, and the Poincaré conjecture, which was solved by Perelman, although he declined the prize. My personal opinion is that P versus NP is far and away the most important of the seven. That's just my unbiased opinion, but I can give you a couple of arguments.

One is that a million dollars is chump change. Suppose you proved P = NP, via an algorithm that was actually efficient in practice. The first thing you could do is make about 200 billion dollars by hacking Bitcoin. That would be step number one in your plan for world domination. Step number two: you could solve the other six Clay Millennium Problems, and pretty much all mathematical problems, by just asking your computer: is there a proof of the Riemann hypothesis, in some formal language like ZF set theory, with at most 100 million symbols? The whole point is that if such a proof exists, it can be efficiently checked, and so in a world where P = NP, it could also be found in polynomial time.
Of course you'd have to worry about whether the algorithm is really efficient in practice, but it would certainly be an unbelievable advance toward solving hard search problems quickly, which includes a lot of what we try to do in mathematics itself, or some formalization thereof.

I should say that my own belief is that P is not equal to NP. I like to say that if we were physicists rather than computer scientists and mathematicians, we would have simply declared that a law of nature, and given ourselves Nobel Prizes for the discovery of the law. If it later turned out that P = NP, we would give ourselves more Nobel Prizes for the law's overthrow. In an interdisciplinary subject like quantum information, you learn that people use terms differently: what physicists call laws, we tend to call conjectures, and so forth. But I love physicists; many of my best friends are physicists.

Now, assuming that P is indeed not equal to NP, which of course hasn't been proven, there's an enormous amount to say about why the question is so difficult. It's not the only question in math where we're pretty sure we can guess the answer but proving it is unbelievably difficult. In this particular case, though, we can say a lot about the barriers: why the known techniques in logic, combinatorics, and so forth are apparently insufficient to resolve the question, and why new ideas are going to be needed. I should mention that some of the most recent ideas for tackling P versus NP come from algebraic geometry and representation theory: the Geometric Complexity Theory program of Ketan Mulmuley. In any case, I think deep insights from many parts of math are going to be needed to make progress on these questions.
If you want the short version, you can read Stephen Cook's Clay Math description of the problem; for the more masochistic version, you can read my 122-page survey article on P versus NP. You can see the title has a question mark over the equals sign. In citation indices the question mark gets removed, so the paper just looks like "P = NP" by Scott Aaronson. That would be a more exciting paper.

All right, so suppose P is not equal to NP, and move on to the next question: if that's so, are there physical mechanisms that could nevertheless make NP-complete problems tractable, that could zero in on the correct solutions without the astronomical amount of time needed for brute-force search? I don't have to belabor for you that with even just 1,000 Boolean variables, trying every possible setting by brute force would take longer than the age of the universe.

There are lots of ideas for getting around that. One idea, dating back to the 1960s: take two glass plates, put some pegs between them in whatever pattern you want, dip the assembly into a tub of soapy water, pull it out, and look at the soap films that form between the pegs. We expect the films to try to reach a lowest-energy configuration, and you could hand-wave that this means roughly minimizing the total length of film connecting the pegs together. But now there's a paradox, or an issue. Finding the minimum total length of line segments connecting a finite set of points in the Euclidean plane, like in this example, where the segments may also meet at intermediate vertices, has a name: the Minimum Steiner Tree problem. And it's one of these problems famously known to be NP-hard.
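The "longer than the age of the universe" claim about brute-forcing 1,000 Boolean variables is easy to sanity-check. Here is my own back-of-envelope arithmetic (the constants are approximate and mine, not the speaker's):

```python
import math

# Brute-forcing 1,000 Boolean variables means 2**1000 candidate assignments.
assignments = 2 ** 1000

age_of_universe_s = 4.35e17           # ~13.8 billion years, in seconds
planck_time_s = 5.4e-44               # shortest physically meaningful tick
steps_available = age_of_universe_s / planck_time_s   # ~8e60 Planck times

# Even checking one assignment per Planck time for the universe's entire
# lifetime leaves you short by roughly 240 orders of magnitude:
shortfall = round(math.log10(assignments / steps_available))
print(shortfall)   # 240
```

Put differently, 2^1000 is about 10^301, while the universe's lifetime contains only about 10^61 Planck times, so no conceivable speedup of the checking step rescues brute force.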
So are we saying that nature can somehow solve it near-instantaneously? That by building a contraption with maybe 100 million pegs and dipping it in soapy water, you can let the soap break Bitcoin for you, let the soap solve industrial optimization problems, and make a fortune that way? That seems implausible to me on its face, but there was an online discussion of this some time ago, and someone said: you're just a bunch of academics following a party line; not one of you knows that this doesn't work; none of you have tried it. That led to, I guess, the one foray into experimental physics of my career.

What did I find? I do recommend trying it, though if you do, use plexiglass so you don't cut your hands. What I think you'll find is that with three or four or five pegs, the bubbles typically do find the optimal configuration, the minimum Steiner tree. As you add more, six pegs, seven pegs, you get suboptimal configurations: the system gets trapped in local minima. In fact, it can even reach configurations with a cycle in them, which proves they can't be minimal. I think this is what we should have expected a priori. After all, a rock in a crevice on a mountainside, such as you may have seen hiking over the weekend, could reach a lower potential energy by rolling up first and then rolling down, but it's rarely observed to do that.

That sounds like a banal point, but every year or so you'll find popular science articles saying that this or that team has found a way to make NP-complete problems easy. Sometimes it's DNA folding: every cell in your body finds a minimum-energy folding configuration and thereby solves an NP-complete problem.
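The soap-film and rock-in-a-crevice behavior is ordinary greedy local search: the system only ever moves to a neighboring state of lower energy, so it can freeze far above the global minimum. A minimal sketch (the landscape and all names here are my own invention, purely to illustrate the point):

```python
def roll_downhill(energy, start, neighbors):
    """Greedy local search: move to the best lower-energy neighbor until stuck."""
    x = start
    while True:
        best = min(neighbors(x), key=energy)
        if energy(best) >= energy(x):
            return x                  # stuck: every neighbor is uphill
        x = best

def energy(x):
    # W-shaped landscape: local minimum at x=2 (energy 1), global at x=10 (energy 0)
    return (x - 2) ** 2 + 1 if x < 6 else (x - 10) ** 2

neighbors = lambda x: [x - 1, x + 1]

print(roll_downhill(energy, 0, neighbors))   # 2: trapped in the local minimum
print(roll_downhill(energy, 7, neighbors))   # 10: reaches the global minimum only if it starts nearby
```

Which basin you end up in depends entirely on where you start, and on a rugged, exponentially large landscape like the ones behind Bitcoin or theorem-proving, almost no starting point sits in the right basin.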
More recently there was an idea called memcomputing that was supposed to do this as well. I think every single one of these cases boils down to the example I showed before: you have systems that can get stuck in local optima, and it doesn't always work. Admittedly, when I did that experiment I didn't try every possible brand of soap. But the expectation is that systems that just roll down a hill are not always going to reach the global optimum; there's no general principle saying they should. And if you cared about something like proving the Riemann hypothesis or breaking Bitcoin, you would have an enormous-dimensional, incredibly rugged and complicated landscape of possible solutions, with no reason to think local optimization will get you anywhere close to the right answer. That's what people find when they try it in practice.

Now, in some cases local optimization really does work well. Proteins in particular have been favored by natural selection specifically to fold in an easy and reliable way. Even then, they don't always succeed: prions, the agents of mad cow disease, seem to be proteins that folded into a local optimum that was not the global optimum of the energy. But the shortest answer to all of these claims is: if you were right, why aren't you rich? Especially in the age of cryptocurrencies, one can ask that question.

All right, so now let's move on to quantum computers. You knew that was coming at some point. Do they change the picture? To tell you about this, I first need a slide to explain what a quantum computer is.
The advantage with this audience is that you've been seeing not only quantum mechanics but even quantum field theory. What I usually say is that if any of you haven't seen quantum mechanics, my advice is that it's actually not nearly as hard as you may have been led to believe, especially once you take the physics out of it. In quantum information, the way we tend to think about quantum mechanics is as a certain generalization of the rules of probability. And by the way, we typically only care about finite-dimensional Hilbert spaces, not infinite-dimensional ones. So right away, that cuts