about Hamiltonian complexity. So, Sandy did her PhD at Berkeley on classical algorithms, and then we're very lucky that after she became a professor at UC Irvine, she switched to quantum, and she has done a great job in this field. And also very importantly, she's very important for the community. I think in 2019 she received the TCMF Distinguished Service Award for her important role in making TCS more safe and inclusive. So, Sandy, the floor is yours. All right, good. And I can see you a lot better if I stand here. Is this okay? Okay, good. If I stand up there, the light shines in my eyes and I can't see anybody's face, so I have no idea if you're understanding what I'm saying. So, let me just start by saying a little bit more about myself. As Alex mentioned, I started out in classical computer science theory. I was working on algorithms, with a computer systems focus, stuff like memory management and scheduling and resource allocation, but always from a theoretical, mathematical point of view. About 15 years ago, I got bored and needed to do something else. The beauty of being in academia is that you don't have to ask anyone's permission to switch fields; you just do it. So I moved into the area of quantum computing, which was very different, with very different mathematical tools, and it was a little tough to come up to speed, but it's been very rewarding, and the field has really taken off recently. So I'm really glad I did it, and it's been a fun adventure. But because I'm a computer scientist by training and was never a graduate student in this field, when I talk about physics I'm a little bit on thin ice. I'm absorbing it and learning it, but I always approach it from the mathematical, computer science perspective.
But we're gonna start with a little bit of physical motivation for the basic problem we'll be talking about through the week. Let's see, turn this on. Okay, so a basic postulate of quantum mechanics is that any observable quantity you can measure, whether it's energy, momentum, and so on, corresponds to a Hermitian operator. Hermitian just means it's a matrix that has real eigenvalues, okay? And let's say we have an n-dimensional quantum system. If I measure some quantity in that system, the outcome has to be one of a finite set of outcomes. Assume for now non-degeneracy, so all of the lambda_i's are distinct. If I have an n-dimensional system and I measure, say, energy, there are n possible outcomes I could get from that measurement. And after I do that measurement, the system has to be consistent with what I just measured. So after the measurement, the system has to be in a state that is an eigenstate of my measurement operator, and each energy outcome corresponds to a particular ending state. If I measure that the energy was lambda_1, I know for certain that my system is in the state v_1. So this gives rise to a Hermitian operator where the eigenvalues are the outcomes of the measurement and the eigenvectors are the resulting states. If I'm measuring some quantity A, I can view A as the unique linear operator whose eigenvalues are lambda_i, corresponding to eigenvectors v_i. Now, if I'm measuring and my state just happens to be an eigenvector v_i of this operator, then I know for sure what the outcome is gonna be: it has to be lambda_i. But more often than not, the state is just a generic quantum state, and I can express that state in this eigenbasis. So it's gonna be a superposition over all the possible eigenvectors of my matrix.
And if my state is v, the probability that I get some lambda_i as the outcome, say when I'm measuring energy, is the magnitude squared of the amplitude alpha_i when I express v in this eigenbasis. The expected outcome, of course, is the sum over outcomes of the probability of measuring that outcome times the outcome itself. And if you just work out the linear algebra, this is the inner product shown here: this inner product is the amplitude alpha_i, I'm looking at alpha_i magnitude squared, which is the probability of measuring lambda_i, and lambda_i is the value of the measurement. Everything is linear, so I can bring the summation inside and I get the linear operator A itself. So the inner product of phi with A phi is the expected outcome. If A is the operator for energy and I have some state phi, the average energy of the system is expressed by this inner product. And there it is in matrix form. Now, the operator corresponding to energy is called the Hamiltonian, and it plays an especially important role in quantum physics. One of the reasons it's especially important is that it governs the dynamics of a system. According to Schrodinger's equation, with H the Hamiltonian operator, the system evolves in time according to this equation. We won't be looking at this too carefully; I'm just giving a little background about the importance of Hamiltonians and why we care about them. But if I start out my system in some state psi at time zero, it evolves according to this equation, and I can express the state at some later time T in terms of the Hamiltonian.
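Since we're approaching this from the mathematical, computer science side, the identity just described is easy to check numerically. Here is a minimal sketch (the 2x2 observable and the state are made-up toy values, not from the lecture) verifying that the Born-rule average, the sum of |alpha_i|^2 times lambda_i, equals the inner product of phi with A phi:

```python
import numpy as np

# A toy 2x2 Hermitian "observable" (hypothetical numbers for illustration):
A = np.array([[1.0, 0.5],
              [0.5, 2.0]])

# Eigendecomposition: outcomes lambda_i, post-measurement states v_i.
lams, V = np.linalg.eigh(A)

# A generic normalized state phi, expanded in the eigenbasis: alpha_i = <v_i|phi>.
phi = np.array([0.6, 0.8])           # |0.6|^2 + |0.8|^2 = 1
alpha = V.conj().T @ phi

# Born rule: Pr[outcome lambda_i] = |alpha_i|^2, so the expected outcome...
expected_from_probs = np.sum(np.abs(alpha) ** 2 * lams)

# ...equals the inner product <phi|A|phi>.
expected_from_operator = phi.conj() @ A @ phi

assert np.isclose(expected_from_probs, expected_from_operator)
```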
It's e^{-iHT} applied to the starting state psi(0). And there's been a lot of really interesting work, I don't know if there's been a course on this during the school, in quantum computing on translating this time evolution into the circuit format. If I wanna compute this on a quantum computer, how would I convert this time evolution into a circuit and actually compute how a system evolves over time? That's not our concern today, but it's considered to be one of the most promising applications of full-scale quantum computers when we can build them someday. The Hamiltonian operator is also important because it tells us what the system equilibrates to at a given temperature. If I take my glass of cold water on a hot day like today and put it outside, it'll eventually reach some type of thermal equilibrium. The same is true of quantum systems: if I take a small quantum system, connect it to a large bath, and evolve the thing over time, it will eventually equilibrate to something, and the thing it equilibrates to is called the Gibbs state. This is generally a postulate of statistical mechanics, not really a proven fact: at a given temperature, where beta is a parameter that scales inversely with temperature, the system will thermalize to what we call the Gibbs state. It's a mixed state, a probability distribution over the energy eigenvectors, where the probability of an eigenvector decreases exponentially with its energy. And Z here is just a normalizing factor so that these probabilities sum up to one. Another big topic in quantum computing research is how to construct these states: given a Hamiltonian and a temperature, how would I actually build the Gibbs state?
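As a small sketch of what the Gibbs state is mathematically (this is the brute-force definition, not one of the sophisticated construction algorithms the lecture alludes to, and the 2x2 Hamiltonian is a made-up toy), one can build e^{-beta H}/Z directly from the eigendecomposition:

```python
import numpy as np

def gibbs_state(H, beta):
    """Gibbs state rho = e^{-beta H} / Z, built from the eigendecomposition."""
    lams, V = np.linalg.eigh(H)
    weights = np.exp(-beta * lams)
    Z = weights.sum()                       # normalizer so probabilities sum to 1
    # Mixed state: probability e^{-beta*lam_i}/Z on energy eigenvector v_i.
    return (V * (weights / Z)) @ V.conj().T

H = np.array([[0.0, 0.3],
              [0.3, 1.0]])                  # toy 1-qubit Hamiltonian (hypothetical)
rho = gibbs_state(H, beta=2.0)
assert np.isclose(np.trace(rho), 1.0)       # valid density matrix: unit trace

# As beta grows (temperature -> 0), the Gibbs state approaches the projector
# onto the (here unique) ground state:
lams, V = np.linalg.eigh(H)
rho_cold = gibbs_state(H, beta=50.0)
ground_proj = np.outer(V[:, 0], V[:, 0].conj())
assert np.allclose(rho_cold, ground_proj, atol=1e-6)
```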
I mentioned that this is a general, unproven but highly believed postulate in statistical mechanics, but it's actually been worked out quite nicely that for certain quantum systems under certain conditions this is provably true, and this is a nice reference for that. Okay, so what we're gonna focus on is what happens when I take this Gibbs state and drive beta towards infinity. Beta is inversely proportional to temperature, so essentially what I'm doing is driving the temperature down to zero. And if I look at the Gibbs state as beta goes to infinity, assuming there's a unique ground state, it converges to the lowest energy state of my system. If I look at the Hermitian operator as a big matrix, that's the eigenvector corresponding to the smallest eigenvalue of that matrix. So the name of the game here is: given a Hamiltonian for a quantum system, compute the ground energy. In physics, the energy itself is often a proxy for what we really wanna compute: you're actually interested in some properties of the ground state, but we get at it from a complexity point of view by asking how hard it is to compute the energy. Probably if you can do one, you can do the other, although that's not necessarily always the case. So this will be the focus of what we're gonna do: the Hamiltonian itself is part of the input to the problem, and we're trying to compute the energy of the lowest possible energy state, okay? So let me give a couple of simple examples. Here's the hydrogen atom, okay? A very simple quantum system: a proton and an electron, where the state of the system is the position of the electron relative to the proton, here expressed in polar coordinates, okay? Here's the Hamiltonian that describes this quantum system.
We're not gonna get too into it right now, I just wanna flash it up there as a motivating example. We have a kinetic energy term, and we have the potential energy between the electron and the proton in the middle, and given this equation we wanna know, say, what's the smallest energy state. Here are the eigenstates of this operator; this probably looks familiar to you if you've ever taken high school chemistry. So it's kind of curious: this looks like a completely continuous system, but if I measure the energy and look at the state afterwards, I'm gonna get one of these discrete options. It's one of the beautiful things about quantum mechanics: you start with what looks like a completely continuous system and you get these discretized states. In general, though, we're not gonna be looking at continuous systems; we're gonna be looking at finite systems of finite dimensional particles. The systems that will concern us here are systems of a finite number of interacting particles, and I'll generally use little n to denote the number of particles. Each individual particle has a finite dimensional Hilbert space, so expressed in, say, a standard basis, there are d different states that an individual particle can take on, and collectively the dimension of the whole system is d to the n. So this is our basic sandbox. Now we're interested in what I call local Hamiltonians, motivated by the fact that I don't really think of particles that are far apart as interacting very strongly. This notion is captured by local Hamiltonians, meaning that if I just look at, say, these three particles that are close to each other in space, I can express the energy interaction between them.
And if I just look at these three particles, let's say for the time being I have two dimensional particles, so spin up, spin down, I can think of each one as a qubit, for example. The Hamiltonian for this little three qubit system is an eight by eight matrix, two to the three by two to the three, okay? Simple enough, I can easily express that. But really these three little particles are part of a larger system, okay? So I have to take this little matrix and express it as an operator on the large system. And how do I do that? I tensor it with the identity on everything else. So this particular term describes the energy between these three particles, tensored with the identity on everything else. Now, I cheated here a little bit just to make my matrix pretty: H is acting on the lower three bits as opposed to the higher three bits, because then the matrix looks nice and block diagonal. If I look at this tensor product, what I really have is many copies of my little h, this eight by eight matrix, one copy for every possible standard basis configuration of the rest of the particles. That's just what tensoring with the identity does. And because I chose my operator to act on the three lowest order bits, I get this nice block diagonal structure. If we look at a term that acts on another set of three particles, it'll just be a permutation of this structure. Okay, so what I really have now is a sum of these things: a term describing how each set of three particles interacts, and the total energy is just the sum of all of these terms. Okay, so when I refer to a local Hamiltonian, it's a sum of terms.
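Tensoring with the identity is literally a Kronecker product. A small sketch (the 8x8 term here is a random Hermitian matrix, purely for illustration) shows the block diagonal structure you get when the term acts on the lowest-order qubits of a larger system:

```python
import numpy as np

n, k = 5, 3                                # 5 qubits total, a term on 3 of them
h = np.random.default_rng(0).standard_normal((2**k, 2**k))
h = (h + h.T) / 2                          # make the 8x8 term Hermitian

# Term acting on the 3 lowest-order qubits of the 5-qubit system:
# identity on the other 2 qubits, tensored with h.
H_term = np.kron(np.eye(2**(n - k)), h)

# Block diagonal: one copy of h for each standard basis configuration
# of the remaining particles.
assert H_term.shape == (2**n, 2**n)
assert np.allclose(H_term[:2**k, :2**k], h)        # first block is h itself
assert np.allclose(H_term[2**k:2**(k+1), 2**k:2**(k+1)], h)  # and so is the next
```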
Each term is a constant size little matrix telling me the interaction between, in general, k particles; in the previous example, k was equal to three. And I can have d-dimensional particles. The Hilbert space, as I mentioned before, is huge, d to the n dimensional, but now I have a very succinct representation. If I restrict myself to local Hamiltonians where each term acts on at most k particles, I have at most n choose k terms, and each of those terms can be expressed efficiently as a d to the k by d to the k matrix, okay? So I have this super efficient representation, but what I'm interested in is the ground energy of the system, the overall state that minimizes the total energy. Now, if I were to take this succinct representation and expand it into the huge d to the n by d to the n matrix, all I'd be doing is computing the smallest eigenvalue of a matrix. We know that computing the smallest eigenvalue of a matrix can be done efficiently, but efficiently in the size of the matrix, and in this case the matrix is huge. We're interested in algorithms that are efficient as a function of the size of the system, the number of particles, and d to the n is exponential in the size of the system. So it would be a simple problem if I could write out this whole matrix, but in general we're not gonna be able to do that, because it's just too large. So we're interested in the ground state of the system. Now, there's a precision issue: I can't expect anyone to compute this energy too precisely, so I give myself a little wiggle room in the form of this delta. Computer scientists, by the way, love decision problems; they're easier to get at from a complexity point of view. So when Alexei Kitaev first formulated this problem, he formulated it as a decision problem so that we could study it through the lens of computational complexity.
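To make the eigenvalue framing concrete: for a toy Hamiltonian small enough to write out explicitly (a made-up 2x2 matrix here), the ground energy is just the smallest eigenvalue. The same call on an n-qubit local Hamiltonian would need the full 2^n x 2^n matrix, which is exactly the exponential blow-up just described:

```python
import numpy as np

# Toy Hamiltonian (hypothetical numbers):
H = np.array([[0.0, 0.3],
              [0.3, 1.0]])
lams = np.linalg.eigvalsh(H)          # eigenvalues in ascending order
ground_energy = lams[0]

# The off-diagonal coupling pushes the ground energy below both diagonal
# entries; here it dips below zero.
assert ground_energy < 0 < lams[1]

# Rayleigh quotient: any unit vector's energy upper-bounds the ground energy.
phi = np.array([1.0, 0.0])
assert ground_energy <= phi @ H @ phi + 1e-12
```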
And that's the basic idea of what we're gonna be doing here: the input is this Hamiltonian in its succinct form and two real numbers, E and delta, and all I have to do is answer a yes/no question. Is the ground energy of the system at most E, or is it at least E plus delta? And typically we're thinking of this threshold or precision delta as one over a polynomial in the size of the system. Okay, so you can imagine different varieties of this local Hamiltonian problem. The term locality is a little bit unfortunate, and if we could rewind the clock, we might name things differently. Locality implies some spatial geometry, things being close to each other. In the mathematical sense, when we talk about local Hamiltonians, we're really only saying that each term operates on at most k particles. It's not necessarily the case that there's an embedding of those particles in, say, three-space where every term operates on particles that are close to each other. So all it's saying is that each term acts on k qubits or qudits; we don't necessarily know that they're close together. Okay, so it's slightly more general than what we usually think of as locality. We can vary the particle dimension: a lot of times we'll be looking at qubit systems, but you can imagine any finite dimensional particle. And finally, we will be looking at geometric versions of this problem, where the particles really are laid out in some sensible space. A lot of times what we look at are lattices, one-, two-, or three-dimensional, where the particles sit on the vertices of the lattice and the terms are two-local: each term tells me the energy interaction between two neighboring particles on the grid. This is a common format that we'll encounter along the way. And generally these Hamiltonians, I should say, aren't capturing everything about a quantum system.
A lot of times we think of the Hamiltonian as simply what's given in the problem. It's a project in and of itself in physics to come up with sensible, useful Hamiltonians. They're approximating something about the system, a simplification of the real picture, but hopefully capturing some important physical property that we're interested in studying. Okay, so when I present this to computer scientists, I always have a couple of slides on an example, and I chose this one. It's from a paper in condensed matter physics in which they study exactly this problem from a numerical point of view. I chose this particular paper because Steve White is my colleague at UCI and I could go to the next building and he could explain it to me. That's why, out of the thousands and thousands of papers in numerical condensed matter physics studying ground states of Hamiltonians, I picked this one. In this particular case, they're looking at this beautiful kagome lattice. It's not a cubic lattice anymore; it has this nice particular structure. And the model, which tells me the energy interaction between each pair of particles connected by an edge in this lattice, is what's called the Heisenberg antiferromagnet. Antiferromagnet means that basically the particles want to be anti-aligned. So if I look at this, there's an energy penalty: positive is bad, negative is good. There's an energy penalty for having two neighboring spins both up or both down, and a reward for having them anti-aligned. And then there's this kind of hopping term reflecting the fact that these spins can flip back and forth, and essentially it means that my ground state is gonna be a truly quantum state. And notice that in this lattice I have cycles of odd length, which means that I can't perfectly satisfy this need to be anti-aligned. So in general, it's gonna be a frustrated system.
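To make the penalty and hopping language concrete, here is the standard two-site Heisenberg antiferromagnet term written out in Pauli matrices. This is the textbook two-qubit form (the paper's Hamiltonian sums a term like this, up to normalization conventions, over every edge of the kagome lattice; the overall scale J here is my own illustrative choice):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Two-site antiferromagnetic Heisenberg term (J > 0): the ZZ part penalizes
# aligned spins, and XX + YY is the hopping ("flip-flop") part.
J = 1.0
h = J * (np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z))

# Aligned spins |00> pay an energy penalty, while the singlet
# (|01> - |10>)/sqrt(2) is the two-site ground state: a genuinely
# entangled, truly quantum state.
up_up = np.array([1, 0, 0, 0], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
assert np.isclose((up_up.conj() @ h @ up_up).real, J)           # penalty +J
assert np.isclose((singlet.conj() @ h @ singlet).real, -3 * J)  # energy -3J
```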
And what they were studying in this paper is not just the ground energy; they wanted to know something about the structure of the ground state. They were asking: what does the ground state of this model look like? One possibility is that it's a valence bond crystal, where neighboring spins pair up and are entangled with each other, but there's no entanglement between the pairs. That's one option. The other option is what physicists call a spin liquid, which isn't a super well-defined concept. But the idea of a spin liquid is that if I measure one of these particles, it looks completely disordered. All the symmetry is broken, sort of like a liquid. Spin liquids are a somewhat mysterious form of quantum matter that physicists are interested in studying and understanding. And it's believed that underneath the hood, a spin liquid actually does have quite a bit of structure: it looks disordered, but it might be a superposition of a bunch of different valence bond states. So this is what they were studying, and I think the upshot was that the ground state actually does have spin liquid properties. They did this completely numerically, on a system of, you know, a few hundred qubits. Steve, by the way, developed one of the most successful heuristic algorithms for finding ground states, called DMRG. It's an inherently 1D algorithm, designed for 1D systems, but they've managed to adapt it to these 2D systems by having the 1D system kind of snake around the 2D lattice. So that's how that worked. And I'll use this quote from their paper: "A key problem in searching for spin liquids in 2D models is there are no exact or nearly exact analytical or computational methods to solve infinite 2D quantum lattice systems." So this is a hard problem.
We're gonna see it's a hard problem from a formal complexity perspective, but it's also just a really hard problem in terms of the numerics. 2D seems to be a sweet spot where a lot of interesting physics happens, and it's also hard computationally. That's why 2D systems tend to be the focus of a lot of study. So now we're looking at this problem from a complexity point of view: what's the complexity of this problem? It's actually a quantum version of a problem that we know and love in computer science: we have a set of local constraints, and we wanna find some global state that minimizes the overall cost. This problem has great appeal for computer scientists because it's a quantum version of something we study all the time, our bread and butter. So let's look at what I'll call a classical Hamiltonian, which is a special case of the quantum version I just talked about. I'll refer to the standard basis states as strings of n characters, where each character can be zero through d minus one. If it's a system of qubits, the standard basis states are just n-bit strings, okay? And if I express my Hamiltonian in this basis and it happens to be diagonal, that's what I'll think of as a classical system. Now I don't have quantum constraints anymore. My overall cost, or Hamiltonian, is a sum of terms, and each term is diagonal in the standard basis. What that means is that each H_j is just a simple classical function: it operates on some subset of k variables, and for a particular setting of those variables it has a fixed cost, okay? So this is a completely classical problem: n finite dimensional variables, and for each subset of k variables I give you a little function telling you the cost of setting those variables to particular values.
And I have a sum of these terms, and I'm looking for a standard basis state that minimizes the overall cost. And I know that if my Hamiltonian is diagonal in this form, then the overall solution is also gonna be a standard basis state, okay? So this is basically weighted constraint satisfaction, which is a standard optimization problem in computer science. Even more famously, we have the Boolean satisfiability problem, which is a special case of this in which the variables are Boolean valued, zero or one, and my constraints are what we call clauses: disjunctions of three literals, where a literal is either a variable or the negation of a variable. The input is a set of these clauses, and I wanna know: is there a global assignment to these Boolean variables so that all the clauses are satisfied, so that the conjunction is equal to one? And this is an even more special case of local Hamiltonian, where each clause is not only diagonal, it's all zeros with only a single one along the diagonal. If I look at the clause (x OR NOT y OR z), there's only one setting in which this clause is violated, and I can express this in ket notation in the following way. So if I express Boolean satisfiability as a local Hamiltonian, I'm asking: does my Hamiltonian of this very special form have a zero energy ground state? And that is true if and only if the set of clauses is satisfiable. Okay, so complexity: a famous result in computer science is that Boolean satisfiability is NP-complete. The class NP is, in a nutshell, the set of problems where a solution can be efficiently verified. It may be hard to find the solution, but there's an efficient algorithm to verify it. So if I think of this as a language: if x is in the language, the answer is yes; if x isn't, the answer is no.
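Here is a tiny sketch of that encoding for the single clause (x OR NOT y OR z). The qubit ordering (x as the high bit, so the index is 4x + 2y + z) is my own choice for illustration:

```python
import numpy as np

# Clause (x OR NOT y OR z): the only violating assignment is x=0, y=1, z=0.
# Its Hamiltonian term is the diagonal projector |010><010|, all zeros with
# a single 1 on the diagonal.
violating = 0b010                        # x=0, y=1, z=0  ->  index 2
h_clause = np.zeros((8, 8))
h_clause[violating, violating] = 1.0

# A standard basis state has energy 0 iff it satisfies the clause:
satisfied = 0b111                        # x=1, y=1, z=1 satisfies the clause
assert h_clause[satisfied, satisfied] == 0.0
assert h_clause[violating, violating] == 1.0
```

Summing such projectors over all clauses gives a local Hamiltonian whose ground energy is zero exactly when the formula is satisfiable.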
If x is in the language, there's some efficient verification algorithm and a witness that will cause the algorithm to accept, and if x isn't in the language, no matter what witness you give my verifier, it will reject. Satisfiability is clearly in NP because the witness is just the satisfying assignment: if I give you an instance of Boolean satisfiability and a satisfying assignment, you can check it quite efficiently. And similarly for no instances: if I give you an unsatisfiable Boolean formula, no matter what assignment you're given, you won't be convinced that it's satisfiable. So the input is a string that encodes an instance of SAT, and the witness is the assignment itself. I think of a polynomial time algorithm as basically a family of classical circuits. The input is x here, the witness is y, and once I've specified x and y, it's a completely deterministic circuit that outputs one or zero. A little caveat is that we want our circuit families to be uniform, meaning the circuit itself can't encode the answer to a hard problem: given the size of the input, you have to be able to generate the circuit efficiently. Okay, so now we're gonna move on, and I need to change our terminology a little bit. We're not looking at languages anymore, where every input has an answer; we're looking at what we call promise problems. Remember when I told you we have to decide whether the energy of a Hamiltonian is at most E or at least E plus delta? Well, what happens if the energy actually lies in that no man's land in between? Then all bets are off; I don't have to answer that correctly. We call this a promise problem. In decision problems, the answer is yes or no: x is in the language or it's not. In promise problems, we're partitioning the input strings into three sets: yes strings, no strings, and invalid strings.
So let's look at a probabilistic version of NP. Since our quantum verifier is gonna be probabilistic in nature, it makes sense to step first to the classical probabilistic verifier. MA is the class Merlin-Arthur. You think of Arthur, who's not as smart as Merlin, as the verifier, while Merlin is all powerful and knows the solution. A promise problem is in MA if there's a randomized verification algorithm such that: if x is a yes instance, there's a witness that causes my probabilistic algorithm to accept with probability at least two-thirds; if it's a no instance, then no matter what witness I give it, it accepts with probability at most one-third; and for invalid inputs there are no guarantees, so I'm off the hook there. I can think of a randomized algorithm as, again, a deterministic circuit that now takes in the input, the witness, and a string of random bits. Once I fix those random bits, the circuit is completely deterministic, and when we talk about the probabilities, we're talking about the probability over the random string r chosen uniformly at random. Why one-third and two-thirds? I'll get to that in a second. The quantum version of this is Quantum Merlin-Arthur, QMA. It's exactly like MA, except my witness is now a quantum state and my verifier is a quantum algorithm. If x is a yes instance, there's a quantum verifier and a quantum witness that cause the quantum circuit to accept with probability at least two-thirds; if x is a no instance, the probability of accepting is at most one-third for every witness; and again, for invalid inputs, no guarantees. And this is the picture I have: x is expressed in the standard basis, it's just a string; the quantum witness is input into the circuit; and then I measure at the end for the answer, whether I accept or not. Now for the amplification part: one-third and two-thirds are quite arbitrary.
I can think of it generally in terms of completeness and soundness: if it's a yes instance, I wanna accept with probability at least c, the completeness parameter; if x is a no instance, I wanna make sure I don't accept with probability more than s, the soundness parameter. We had two-thirds and one-third before, and it turns out we can boost these probabilities quite easily by repetition. As long as c and s are separated by one over a polynomial, I can repeat the procedure over and over and drive the probability of error down to exponentially small. So if I have that separation, I repeat this m times, the threshold for acceptance is the midpoint between the two, and using a Chernoff bound you can show that if you repeat enough times, polynomial in n, you drive the probability of error down exponentially small. Same in the quantum situation: we can talk about completeness and soundness, and again we can drive the error down exponentially small. There's a little bit of a subtlety here, and it has to do with the fact that when we do this repetition, there might be some weird entanglement between the distinct witnesses we're given. For completeness, this is Merlin trying to be honest and convince Arthur that it really is a yes instance: he can always give Arthur m completely separate, unentangled copies of the witness; then each run of the verifier is independent, and you can take the majority. For soundness, there's a little bit of subtlety.
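The majority-vote amplification is easy to quantify exactly with the binomial distribution. A small sketch (the repetition counts 15 and 61 are arbitrary choices for illustration):

```python
import math

def majority_error(p_single, m):
    """Probability that a strict majority of m independent runs FAILS to
    accept, when each run accepts independently with probability p_single."""
    threshold = m // 2 + 1           # accept iff more than half the runs accept
    return sum(math.comb(m, k) * p_single**k * (1 - p_single)**(m - k)
               for k in range(threshold))

c, s = 2 / 3, 1 / 3                  # completeness and soundness, gap 1/3

# Error on a yes instance: the majority wrongly rejects.
err_yes_15, err_yes_61 = majority_error(c, 15), majority_error(c, 61)
# Error on a no instance: the majority wrongly accepts.
err_no_15, err_no_61 = 1 - majority_error(s, 15), 1 - majority_error(s, 61)

# More repetitions push both error probabilities down (exponentially in m,
# by the Chernoff bound the lecture mentions).
assert err_yes_61 < err_yes_15 < 1 / 3
assert err_no_61 < err_no_15 < 1 / 3
```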
It's possible that Merlin gives me some big entangled quantum state, but it actually works out okay. Even if the witness is entangled, if I think of my verifier as operating on, say, its first block of qubits, then entanglement with the rest of the state just means the verifier sees a mixed state, which is a probability distribution over pure states. And if every pure state accepts with probability at most s, the same is true for any convex combination of them, and that holds even if you condition on the outcomes of the previous runs. I'm not gonna get into this in too much detail, but there's a really neat trick due to Marriott and Watrous. In what I described before, I actually have to give you more witness qubits. There's actually a way to boost the probabilities with only one copy of the witness. It's called a trick, which makes it seem small, but it's actually a beautiful result that's been used in many other places. So we can define a version of QMA where I tell you exactly how many witness qubits I'm giving you, and it turns out I can keep the same number of qubits in my witness and still drive those probabilities towards one and zero. Let me just give you the idea: you think of the verifier as having to measure at the end, and measurement is not reversible, but in this particular context there's a way to measure and back up, and measure and back up, and do that repeatedly. You can't always back up, but in this context you can, and this backing up trick has been used in a bunch of different places: in quantum learning, in cryptography, in all kinds of different contexts. So this is a nice technique that's been reused in multiple places, but it was originally introduced to show that you can get this probability amplification without increasing the number of qubits in your witness.
Okay, so here are the complexity classes. NP sits inside MA, since a classical verifier is a special case of a probabilistic one, and a probabilistic verifier is a special case of a quantum one, so MA sits inside QMA. PP is the class that's like MA but without the margin of error: the randomized verifier accepts with probability less than a half or greater than a half, but there's no promise gap there. And then this all sits inside polynomial space. Boolean satisfiability is complete for NP, and local Hamiltonian is complete for QMA; we will go through this result in great detail over the next lectures. And then the counterparts of these are the versions without witnesses: polynomial-time classical computation, quantum computation, and randomized classical computation, and those sit inside each of those classes. So this is the complexity picture that we have. So let me now precisely define the local Hamiltonian problem. We're given a set of positive semidefinite matrices, each operating on k qubits of dimension d. I'm going to assume here that the norm of each is bounded by one, and they're positive semidefinite, meaning that the eigenvalues are greater than or equal to zero. Each matrix also indicates the set of k qubits that it operates on; that has to be part of the input. I'm also given two real numbers: E, my energy threshold, and delta, my margin. And I want to know: is the smallest eigenvalue of the sum of all these terms at most E, or at least E plus delta? Now, I did specify the problem a little bit more here, in that I'm asking for the eigenvalues of my terms to be non-negative and the norms to be bounded by one. That's not a big deal: if you gave me an instance that didn't have those properties, I could rescale it so that it does. I can shift the eigenvalues by some alpha, which gives me the positive semidefinite property, so I can shift everything up so that all
the eigenvalues are at least zero, and then I can scale it so that the norms are bounded by one. So that's not a huge assumption, but it's a convenient one as we go forward. For satisfiability, we talked about the fact that it sits inside NP: the witness is just a satisfying assignment. For local Hamiltonian, a natural witness would be the actual ground state. If I want to prove to you that the energy is at most E, I give you the ground state and then you can measure the energy. So the witness is the ground state itself, and I'm guaranteed some margin: either I'm giving you a state where the energy is at most E, or every state has energy at least E plus delta. So now I need to devise my quantum verifier; here I'm just arguing this containment, that local Hamiltonian is in QMA. I need to describe a quantum verifier with some measurement whose outcome, on average, tells me the energy, or at least is proportional to the energy. And how would I do that? Well, I'm going to pick a random term and measure its energy. So here's the term itself, and now I want some measurement that tells me what this lambda is on average. I can devise a unitary operator that acts on those k qubits plus an extra auxiliary qubit initialized to zero. The unitary maps the jth eigenvector to itself, keeping that state unchanged, but rotates the auxiliary qubit according to the energy of that particular eigenstate. This is why I wanted my eigenvalues to be between zero and one: so that each one is a sensible probability. So now if I measure that last qubit, the expected value I get is exactly the expected energy of that term.
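The shift-and-rescale normalization described a moment ago can be sketched in a few lines of numpy. This is just an illustration of the preprocessing, with `normalize_term` a hypothetical helper name: shift by the lowest eigenvalue to make the term positive semidefinite, then divide by the largest eigenvalue so the norm is at most one.

```python
import numpy as np

def normalize_term(H):
    """Shift and rescale a Hermitian term so it is positive semidefinite
    with operator norm at most 1. This is an affine change to the
    spectrum, so the yes/no question about the ground energy is
    unaffected once E and delta are adjusted accordingly."""
    evals = np.linalg.eigvalsh(H)                 # ascending eigenvalues
    shifted = H - evals[0] * np.eye(H.shape[0])   # eigenvalues now >= 0
    norm = np.linalg.eigvalsh(shifted)[-1]        # largest eigenvalue
    return shifted / norm if norm > 0 else shifted

# Example: a single-qubit Pauli-Z term becomes diag(1, 0).
Z = np.diag([1.0, -1.0])
H_norm = normalize_term(Z)
```

For Pauli-Z, the eigenvalues -1 and 1 shift up to 0 and 2 and then scale down to 0 and 1, matching the requirement that every eigenvalue lies in [0, 1].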
So if I have some general state and I express it in the eigenbasis for the particular term I've chosen to measure, here it is: this is my amplitude of being in each energy eigenstate for that term. Each eigenstate rotates the auxiliary qubit by a certain amount, and then if I measure that qubit, I get one with probability lambda and zero with probability one minus lambda. And if I'm averaging over a superposition of different energy eigenstates for that particular term, the probability of measuring one is the probability of being in each eigenstate times its eigenvalue, and that's exactly the average energy for that particular term, okay? So I pick a random term, and I've devised a way to measure a bit whose probability of being one is exactly the energy of that term. Now if I'm averaging over all possible terms, the probability of measuring one is averaged over all the terms I could have chosen. If I pick H_a, then the probability of measuring one is exactly the average energy according to that particular term. But now I'm averaging over all the terms, so the expected outcome is the expected energy of my whole Hamiltonian divided by r, the number of terms. So by my promise, that probability is going to be either at most E over r, or at least E plus delta over r. And now I just need to measure this over and over again, so that with high probability I can determine which one I have, and the margin is important here: if I'm flipping a random coin and I want to know the probability that it comes up heads, I need that little bit of margin in the promise. Okay, so let's talk a little bit about hardness.
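The calculation above, that the verifier's acceptance probability equals the total energy divided by the number of terms, can be checked numerically. This is a sketch under the stated assumptions (each term positive semidefinite with norm at most one); `accept_probability` is a hypothetical name for the quantity Pr[ancilla reads 1] averaged over a uniformly random term.

```python
import numpy as np

rng = np.random.default_rng(0)

def accept_probability(terms, psi):
    """Probability the ancilla qubit reads 1 when a term is chosen
    uniformly at random: for a single term H_a with eigenvalues in
    [0, 1], Pr[1] = <psi|H_a|psi>, so the average over r terms is
    <psi| sum_a H_a |psi> / r."""
    r = len(terms)
    return sum(np.real(np.vdot(psi, H @ psi)) for H in terms) / r

# Sanity check: random norm-1 PSD terms and a random normalized state.
dim = 4
terms = []
for _ in range(3):
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    H = A.conj().T @ A                            # Hermitian PSD
    terms.append(H / np.linalg.eigvalsh(H)[-1])   # scale norm to 1
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

H_total = sum(terms)
p = accept_probability(terms, psi)
# p equals <psi|H_total|psi> / r, the average energy per term.
```

Since each single-term expectation lies in [0, 1], the averaged quantity is a valid probability, which is exactly why the terms were normalized first.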
What I'm going to do in the remaining 10 minutes is step through, and maybe it's review for many of you, what the proof looks like that Boolean satisfiability is NP-hard in the classical case. And next lecture we're going to walk through the quantum reduction in detail. Okay, so for hardness, I want to take a generic language in NP. The only thing I know about my language is that it has an efficient verifier, that's all. So I have a verifier that decides: is x in L? If it is, then there's some witness that causes the output to be one. If it isn't, no matter what witness I give you, the output is going to be zero. That's the definition of NP. And I want to translate this into a Boolean formula, so that Boolean satisfiability captures everything about this problem: I want the formula to be satisfiable if and only if x is in the language. We're going to be doing the same thing in the quantum setting. We start with a generic promise problem in QMA. I want to know: is x a yes instance, or is it a no instance? The only thing I know is that it's in QMA, so there's some quantum circuit verifier for it that is correct with high probability. I want to know: is there a quantum input state that causes the quantum circuit to output one with high probability? And I'm going to translate that into a Hamiltonian, initially a 5-local one, and then we'll talk about different variations. And it should give the answer: if x is a yes instance, the ground energy of my Hamiltonian has to be at most E; if it's a no instance, the ground energy has to be at least E plus delta. And the Hamiltonian itself, of course, has to depend on the input x, because it's telling me something about the string x. So how would I do this in the classical case? Well, I've got the circuit, and circuits are pretty easy to translate into Boolean formulas.
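To make the circuit-to-formula step concrete, here is a small sketch of the standard gate-by-gate (Tseitin-style) encoding for a single AND gate. The function names and the integer-literal convention (negative means negated, as in DIMACS CNF) are illustrative choices, not something fixed by the lecture.

```python
def and_gate_clauses(x, y, z):
    """CNF clauses enforcing z = x AND y, the building block for turning
    a circuit into a Boolean formula. Literals are integers; a negative
    literal denotes the negation of that variable."""
    return [[-x, -y, z], [x, -z], [y, -z]]

def satisfied(clauses, assignment):
    """Evaluate a CNF against a dict mapping variable -> bool."""
    def lit(l):
        v = assignment[abs(l)]
        return v if l > 0 else not v
    return all(any(lit(l) for l in clause) for clause in clauses)

# The clauses hold exactly when the gate is computed correctly.
clauses = and_gate_clauses(1, 2, 3)
ok = satisfied(clauses, {1: True, 2: True, 3: True})    # z = x AND y
bad = satisfied(clauses, {1: True, 2: False, 3: True})  # z != x AND y
```

Each gate of the verifier circuit contributes a constant number of such clauses, and one final unit clause on the output variable insists that the circuit accepts.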
So the reduction, for input x, uses the length of x to figure out what the circuit is, because of uniformity. Then I convert my circuit to a Boolean formula, and I have a term that penalizes the formula if the output is zero, so it forces the output to be one. It's not too hard to imagine taking a Boolean circuit with its AND and OR gates and translating it into a Boolean formula. All of these clauses say, basically, that the Boolean formula is enforcing that the circuit is executed correctly. I have a clause at the end that insists that the output is one, so the formula is only satisfiable if the output is one. And I'm also hard-coding the input into the circuit. More instructive for us is to see this verifier as a Turing machine, because that's going to be a closer analog to what we're doing in the quantum case. So if I think of my verifier as a Turing machine: if x is in the language, there's some input string y that causes my Turing machine to accept; if x isn't, then for every y, my Turing machine will reject. So this is basically the same picture. I'm feeding in x, and once I fix the witness, the algorithm is just a fixed deterministic Turing machine. And I want to translate that into a Boolean formula. This is done with the famous Cook-Levin tableau. I can think of the computation of my Turing machine like this: a Turing machine has a work tape, which I think of as basically a string of variables, and the tableau represents the state of the computation at each point in time. So I start up here, with the input and the witness sitting on the tape, and my Turing machine executes step after step: this row is the state after the first step, this row is the state after the second step, and so on. And finally, I should have an accepting state at the very end.
So the height of the tableau is the time, the number of steps my Turing machine takes, and the width is the amount of space used. Now it's not too hard to use a little bit of local logic to enforce that this tableau is locally correct. I can think of each of these squares as a variable that I'm setting, and I want to ensure that I'm setting these variables so that the whole tableau corresponds to a sensible execution of a Turing machine. And I can do that locally. Time is going downward, so for every neighborhood of three tape squares, I need to make sure that the square below corresponds to a correct computation step. And if there's no Turing machine head in the picture, the square below basically just has to copy the contents. So I can use local logic, and it's actually the same local logic everywhere, to ensure that this tableau represents a sensible Turing machine computation. A constant-size circuit can do this check, and each such circuit can then be converted to a Boolean formula as I described. So the circuit looks like this, enforcing the correct computation of a Turing machine, and the output is one if the final cell contains the accepting state, q_accept. So let me just step back and say, when we get to the quantum version, what we're going to keep from this picture and what we're going to change. Some aspects we will keep. We will be hard-coding x into the circuit, okay? The circuit has to depend on x, because the ultimate Hamiltonian is telling us whether x is a yes instance or a no instance. The input y, the witness, is variable: I'm asking, is there a way to set the values of y that causes it to accept? That we will keep. And then we have a bunch of constraints that enforce that the state represents a sensible computation of my verifier.
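The local-logic check on the tableau can be sketched as follows. This is a toy illustration, not the full Cook-Levin construction: `allowed` stands in for the (hypothetical) transition rules of a particular Turing machine, and boundary cells are ignored for brevity.

```python
def tableau_locally_consistent(tableau, allowed):
    """Check a Cook-Levin-style tableau using only local logic: for
    every window of three adjacent cells in row t, the cell directly
    below the middle one (row t+1) must be a value that `allowed`
    permits for that window. Interior cells only, for brevity."""
    for t in range(len(tableau) - 1):
        row, nxt = tableau[t], tableau[t + 1]
        for i in range(1, len(row) - 1):
            window = (row[i - 1], row[i], row[i + 1])
            if nxt[i] not in allowed.get(window, set()):
                return False
    return True

# Toy rule set: with no head in the window, a cell simply copies itself.
allowed = {(a, b, c): {b} for a in "01" for b in "01" for c in "01"}
tab_good = ["0110", "0110"]  # each row copies the previous one
tab_bad = ["0110", "0100"]   # a cell changed with no head nearby
```

The point is exactly the one made above: correctness of the whole computation decomposes into constant-size checks on 2-row, 3-column windows, and each such check becomes a constant-size formula.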
So somehow I'm encoding the computation of this verifier into the Boolean formula. When we get to quantum, we're going to be adding constraints that say: if this is truly the ground state, then it has to represent a correct computation of the verifier. So we're going to keep that. And then there's an additional term to test whether the computation accepts or not. So these are all the features that we'll keep. How this is done will be quite different in the quantum case; this is the part that's going to look quite different when we get to the quantum picture, and we will go through that reduction in detail when we get there. That's what we'll do in part two. So, all right. Questions? I hope you'll just interrupt me. I'm kind of looking out there, and I see a few nods periodically, but feel free to interrupt me. Can I answer any questions? All right, that means the discussion section will be easy. So, all right. And we'll meet with Cannell in a different room, right?