This lecture is a little more survey-ish in nature. We proved last time that local Hamiltonian is QMA-complete. But there are various aspects of that Hamiltonian that are very unphysical, that look nothing like the type of Hamiltonian a condensed matter physicist would study. For example, there's no notion of true geometric locality in what we talked about. There's just a register of qubits, and you can have terms applied to five qubits each, regardless of where they're located in space. Whereas really, in a physical setting, the local terms apply to particles that are close together in proximity. That naturally gives rise to the question: can we make this more physical? And if we make it more physical, does the problem become easier, or does it remain hard? There were a series of follow-on works, and the punchline is basically that the QMA-hardness is quite robust: even under many different physical constraints and assumptions, the hardness result still holds. So we're going to give a little bit of a survey of those. But first I'm going to talk about the connection between this circuit-to-Hamiltonian reduction and a model of computation called adiabatic computation, because there's a close connection between the two. So here's our problem again, just to restate it: we're given a sequence of Hermitian, positive semi-definite matrices, each operating on k qudits of dimension d, norm bounded, and two numbers. The gap delta is bounded below by one over poly, and we want to know: is the smallest eigenvalue of the Hamiltonian resulting from the sum of the terms at most E, or greater than or equal to E plus delta? Yes? Just a really simple question: just to see that it's in QMA to begin with, the certificate is the state, and then I just evaluate each of these H1 through Hm? Yeah, so that was the topic of one of the questions, and we also discussed it.
We touched on it in the first lecture. If I give you the state, you can measure the energy probabilistically. You can pick a random term and have a unitary that measures the energy of that term, so it's like a coin flip that corresponds to the energy of that term. With enough measurements, you can hone in on the correct energy. Okay, we've seen the class NP and the corresponding quantum version QMA. In QMA, these problems are efficiently verified by a quantum computer with a quantum witness, and as we talked about, you can boost those probabilities arbitrarily close to zero and one. Just a recap: for the classical counterpart, Boolean satisfiability is in NP. Local Hamiltonian is in QMA, as we just talked about; the witness is the ground state itself. So I hand you a state, and you can, using your quantum computer, measure the energy with high probability. And we're always guaranteed this gap: either the lowest energy of this Hamiltonian is at most E, or it's at least E plus delta. That's important, because you can only calculate the energy of a state to a certain level of precision. In the QMA-hardness proof, we took a generic language in QMA, and the only thing we know about that language is that there exists a quantum verifier circuit. To answer the question of whether x is a yes instance, there's a verifier circuit that takes in as input a quantum witness, and when you measure the output of the circuit, it corresponds to whether x is in the language or not. The reduction takes this circuit and builds a k-local Hamiltonian, and the Hamiltonian has to depend on the input x, because it's telling me something about whether x is in the language. If the ground energy is at most E, then x is a yes instance. If the ground energy of my Hamiltonian is greater than or equal to E plus delta, then it's a no instance.
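As a small aside (not from the lecture), both pieces just described can be sketched numerically: the promise problem itself, and the sampling-based energy estimate that puts local Hamiltonian in QMA. Everything below is a toy instance with made-up one-qubit terms and thresholds, just to illustrate the definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: two 1-local terms on one qubit, each satisfying 0 <= h_i <= I.
terms = [np.diag([0.0, 1.0]),      # penalizes |1>
         np.diag([0.3, 0.0])]      # penalizes |0> a little
H = sum(terms)                     # the Hamiltonian is the sum of the terms

# The promise problem: is the ground energy <= E, or >= E + delta?
E, delta = 0.35, 0.5               # delta must be at least 1/poly
ground_energy = np.linalg.eigvalsh(H).min()

if ground_energy <= E:
    answer = "yes"
elif ground_energy >= E + delta:
    answer = "no"
else:
    answer = "invalid"             # promise violated; either answer is allowed

# QMA verification sketch: given the witness state, pick a uniformly random
# term and measure it; since 0 <= h_i <= I, the outcome is a coin flip with
# heads probability <psi|h_i|psi>.  Averaging and rescaling by the number of
# terms hones in on the energy.
def estimate_energy(terms, psi, shots):
    m, heads = len(terms), 0
    for _ in range(shots):
        i = rng.integers(m)                       # pick a random term
        p = float(psi.conj() @ terms[i] @ psi)    # heads probability
        heads += rng.random() < p                 # simulated measurement
    return m * heads / shots                      # unbiased estimate of <H>

psi = np.linalg.eigh(H)[1][:, 0]                  # the ground state, here |0>
est = estimate_energy(terms, psi, shots=200_000)  # should land near 0.3
```

The estimator's variance shrinks like one over the number of shots, which is why polynomially many measurements suffice to resolve a one-over-poly promise gap.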
And if it's in between, it's an invalid instance, and I don't have an obligation to answer correctly in that case. All right. We started out with a log n local Hamiltonian and improved that, with a unary clock, to five-local. But again, the ground state of this Hamiltonian that we built was what we call the computational history state, which is an equal superposition of the states of the computation at every time step, starting with the correct input and using the correct witness. We had a term that enforces that the input is correct, and we had a term that penalizes rejecting outcomes. So if it's going to be a low-energy state, it's a true computation: it's using the right input and it's accepting. And the spectral gap of this Hamiltonian is roughly one over T squared, meaning that you can't cheat; you can't get a low-energy state without actually producing this state, which gives you some information about x. And the Hamiltonian is the sum of those terms. Now, there are different varieties of Hamiltonians you can consider. For example, the version we proved last time was five-local: every term acts on five qubits. But you can imagine different levels of locality, Hamiltonians that act on two qubits or three qubits. You can play with the particle dimension; the Hamiltonian from last time was on qubits. And you can also enforce some geometry. This is starting to look a little more physical, because the types of models one might study in condensed matter physics typically have some geometric structure to them. A common format is to look at particles on the vertices of a grid and look at two-local Hamiltonians, which have terms on pairs of particles connected by an edge. Okay, so what do we know? The initial proof by Kitaev back in 1999, which we talked about, shows that five-local Hamiltonians on two-state particles, that is qubits, are QMA-complete.
And then there was a sequence of improvements to this. And when I say improvements, I mean bringing it closer to what looks a little more physical, in terms of the types of models one might study in physics. The first one we'll talk about looks at Hamiltonians in two dimensions and shows that even if the particles are on a 2D grid and the terms are two-local, it's still QMA-hard. The original proof uses six-state particles; I'm going to do a nine-state version because it's a little easier, but it's a constant number of states. So we're now moving above qubits into finite-dimensional particles. This was later improved to two-local, two-state Hamiltonians, and then improved further to show that in two dimensions, on a 2D grid, two-local on qubits is QMA-complete. So this is starting to look much more like the type of Hamiltonian one might look at in physics. And finally, we proved that in one dimension it remains QMA-hard. This is a little surprising, because typically numerical techniques work pretty well in one dimension, whereas in 2D it's quite challenging numerically to come up with these ground states and ground energies. And this was later improved to eight-state particles. Okay, so I'll walk through some of this, but I want to talk about this paper first, because it was actually not a QMA-hardness result initially. I mean, they proved the hardness as a side result, but the intent of the original paper was to study the power of adiabatic computation. And it uses, critically, this circuit-to-Hamiltonian construction. So I'm going to give a little overview of what adiabatic computation is and how the circuit-to-Hamiltonian construction helps us compare the circuit model to the adiabatic model. In adiabatic computation, you can imagine physically doing this: you start with some system whose Hamiltonian has a very easy-to-prepare ground state, say the all-zeros state, all spins down or something like that.
And the target Hamiltonian is one whose ground state answers some computational problem, for example the computational Hamiltonian we've been discussing all along. The idea is that you slowly evolve your system: you change the physics of your system so that the Hamiltonian slowly changes from this initial H_start to this final H_final. Now, you can imagine different paths through Hamiltonian space; the most common, and the most commonly studied, is just a straight-line interpolation between the two. And the adiabatic theorem says that if we do this evolution slowly enough, then if we start out in the ground state, we'll remain in the ground state with high probability. So how slow do we have to go? The speed is determined by the spectral gap of the Hamiltonian: you can imagine that if the second-lowest eigenstate is really close to your ground state, it might be easy for the system to slip into that second state. So we need a large enough gap, and the total evolution time has to scale like at least one over delta squared, where delta is the spectral gap of the Hamiltonian along the path. Then we make a final measurement to determine the answer to our question. Now, this was initially put forth as a way of attacking NP-hard problems: the target Hamiltonian encoded the answer to some NP-hard question, and the idea was, can quantum computation help us solve classical NP-hard questions more efficiently? There's some evidence that adiabatic computation is more robust to errors, and it's actually a model used by some modern architectures; D-Wave, for example, uses adiabatic computation as their platform. Typically, in the examples that you study, you know the spectral gap of H_start; that's not hard. You know the spectral gap of H_final.
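The straight-line interpolation just described is easy to play with numerically. Here's a hedged sketch on a made-up one-qubit instance (H_start and H_final below are illustrative toy matrices, not the lecture's Hamiltonians): scan s from 0 to 1 and track the spectral gap of H(s), which controls how slowly the evolution must go.

```python
import numpy as np

# Toy straight-line adiabatic path H(s) = (1 - s) * H_start + s * H_final.
H_start = np.array([[0.0, 0.0], [0.0, 1.0]])    # ground state |0>, easy to prepare
H_final = np.array([[0.5, -0.5], [-0.5, 0.5]])  # ground state |+>

def spectral_gap(H):
    """Difference between the two lowest eigenvalues."""
    vals = np.linalg.eigvalsh(H)                # ascending order
    return vals[1] - vals[0]

# The minimum gap along the path governs the required run time:
# total evolution time scales like 1 / gap_min**2 (up to poly factors).
gaps = [spectral_gap((1 - s) * H_start + s * H_final)
        for s in np.linspace(0.0, 1.0, 101)]
gap_min = min(gaps)
```

On this toy instance the gap stays constant-sized (its minimum, at s = 1/2, is 1/sqrt(2)), so the evolution can run fast; the hard instances discussed next are exactly those where this minimum shrinks exponentially with system size.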
The tricky part is understanding the spectral gap as you evolve from one to the other, and how big that gap remains is still a question of open research. Yes? I probably had it in the wrong direction on the slide. Let's see if I've got it: the speed has to be bounded, yeah, in the other direction. Sorry, yeah, and I've shown that slide quite a few times. Yeah, actually I don't really remember; the circuit model obviously is very brittle, and the deeper you go the worse it gets. But I think there's some evidence that if you were to run this adiabatically, you could still have some natural errors and remain in the ground state. I probably can't say more about this at this point, so I'll take a look at that again. Okay, so the question is: what's that spectral gap? The efficiency of this model depends critically on it. If the spectral gap gets exponentially small, then your speed has to be exponentially slow and it takes an exponential amount of time to get from one side to the other. We actually don't really know the answer for some NP-hard problems. There's been some evidence that for certain ones you can prove that if you do the straight-line interpolation, the gap really does get exponentially small, so adiabatic quantum computers aren't in fact solving NP-hard problems efficiently this way. But an interesting question arises: we now have these two models of quantum computation. How do they relate to each other? Can a quantum circuit always simulate what an adiabatic computation is doing, if I know that there's a gap? And similarly, is the adiabatic model as powerful as the quantum circuit model? Are there things we can compute efficiently with one but not the other?
It was known for some time that a quantum circuit can simulate adiabatic computation, but it was open for a while whether the adiabatic model is as powerful as the quantum circuit model, and that was answered by this result, the 2D result that we'll talk about in a second. So what we're going to do here is: H_start is just a very simple Hamiltonian whose unique ground state is the all-zeros state. The final Hamiltonian, in this case where we're actually trying to simulate what a circuit does, is exactly the Hamiltonian we've been talking about, this propagation Hamiltonian. So the ground state of the final Hamiltonian is exactly the computational history state, and if I could build that, then I would know the output of my circuit. So we're going to have H_final be this H_prop that we've talked about, which enforces the propagation terms and also enforces that the input is x. But in this case there's no notion of a witness; we just want to take a circuit with an input and simulate it. That's all we want to do at this point. So there's no witness register: we just have all zeros, and the x we're trying to compute on as the input to the circuit. So H_start has a unique ground state of all zeros, and the initial state of the circuit is all zeros except where I'm encoding my classical input. And I can, without loss of generality, encode my input into the circuit: assume the input is all zeros, and then have a bunch of X gates at the very beginning that set the input bits accordingly. So without loss of generality, I can assume that the input to my circuit is all zeros. The goal is to end up in this history state. And if I wanted to know the output of my circuit, if I had a state prepared according to this history state and I were to measure it, how would I do it?
Well, I'm not interested in the state of the computation at any old point in time; I'm interested in the state of the computation at time T. So the first thing I'm going to do is measure the clock, because what I have is a superposition over all points in time, but I want the final one. So I measure the clock. If I get T, great: my state has collapsed to a state consistent with this measurement, which is exactly the output state. If I don't get T as the outcome, I throw up my hands and start again. So this is a repetition until I actually get T as the measurement outcome. But if I do get T, then I know my state has collapsed to the final state of the circuit, and I can just measure the output. The probability of measuring T is one over T plus one. And here are the two Hamiltonians. The first one, expressed in the standard basis, is diagonal, and its only zero-energy state is the all-zeros state; it's a very simple Hamiltonian. Here's the final Hamiltonian, which is exactly the propagation one, and I'm taking the block that corresponds to the correct input here; we talked about the fact that the full space is actually much larger. The challenge now is to lower bound the spectral gap of the interpolation between these two. And we can use exactly that lemma we talked about last time, which you explored in your exercises, about the angle between two spaces, and show that for any s in the range from zero to one, the spectral gap is at least on the order of one over T cubed, using the same sort of techniques as before. So this says: yes, I can use adiabatic computation to create this history state, and with a polynomial number of repetitions, I can get the correct output of the circuit.
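To make the clock-measurement step concrete, here's a small simulation of my own (with the clock idealized as a single (T+1)-dimensional register rather than the unary clock from the construction): build the history state for a toy three-gate circuit, check that the clock reads T with probability 1/(T+1), and read out the collapsed output state.

```python
import numpy as np

# Toy 1-qubit circuit with T = 3 gates: H, then X, then H.
H_gate = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X_gate = np.array([[0, 1], [1, 0]])
gates = [H_gate, X_gate, H_gate]
T = len(gates)

# Build the history state: equal superposition over |t> (x) U_t...U_1 |0>.
state = np.array([1.0, 0.0])                  # input |0>
history = np.zeros((T + 1, 2), dtype=complex)
history[0] = state
for t, U in enumerate(gates, start=1):
    state = U @ state
    history[t] = state
history /= np.sqrt(T + 1)                     # normalize over T + 1 time steps

# Measuring the clock yields t = T with probability 1 / (T + 1); conditioned
# on that outcome, the computation register collapses to the circuit output.
p_final = np.sum(np.abs(history[T]) ** 2)
output = history[T] / np.sqrt(p_final)        # renormalized collapsed state
```

Here H X H |0> = |0>, so the collapsed output is |0> again, and p_final is 1/4, matching the 1/(T+1) repetition overhead described above.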
Okay, we'll talk about the 2D version in a second. Yes, we could have shown that, exactly. Yes, it was, yeah. So, right: we could have done this with the same old Kitaev reduction. As it turns out, they were interested in both, and that's what they did, partly. So the question was: could we have proved this result, the equivalence of these two models, without going to 2D, with just the initial circuit-to-Hamiltonian construction that Kitaev proved, the five-local one? And the answer is yes; it just hadn't been raised as a question yet at that point. It's a little better motivated in 2D, because D-Wave and these other platforms really have 2D architectures, so that's an important question. Yes? Yeah, so I guess I'm cheating a little bit. This matrix here is the big matrix, showing the whole space. And yes, this one is written in the basis that we talked about last time, but you can still show this; you still have this equivalence, yeah. Yes? Yeah, as a result, yes. It's going to be a very similar proof to the one we did last time using that geometric lemma. You can always find a zero-energy state, and the challenge is then to lower bound the second eigenvalue. It's basically the same kind of angle argument we used last time: the Hamiltonian is the sum of two terms, we lower bound the angle between the null spaces, and we show that the second eigenvalue of each term is bounded from below. So it uses the same lemma. Oh, I see, why this? That has to do with the Hamiltonian evolution; I'll give the 60-second version of it. If my Hamiltonian is time independent, I'm evolving according to Schrodinger's equation, okay?
And the idea is that if I'm slowly changing my Hamiltonian, and the change in the Hamiltonian is small enough, I'm still continuing to evolve according to Schrodinger's equation. If I consider little time steps in which I'm slowly changing the Hamiltonian, the change isn't enough to pop me out of the ground state. So it's related to Schrodinger's equation: showing that under evolution over time, if there's a tiny delta in the Hamiltonian, you remain in the ground state in a suitable sense. I think so, yeah. For a while it was sort of folklore, and it was written down more precisely when adiabatic computation came around and people became more interested in it. All right, any other questions? Okay, so let me say a little bit about the 2D construction. By the way, I have a lot of material; we're just going to get as far as we get, and that's okay. With lecture four, I'll start in on the commuting local Hamiltonian problem, so we'll just progress and I'll tell you as much as I can about these various results. So now let's talk about the 2D result and what was required to do it. In Kitaev's construction, you had two registers: one was a computation register, one was the clock register. Each five-local term applied to two qubits from the computation register and three qubits from the clock, and who knows where those are located in space, or whether they're close to each other. Now, instead, we're laying this out on a 2D grid. Rather than having the clock contained entirely in one register and the computation contained entirely in another, we take the state space of each particle, and each particle encodes a little bit of the computation and a little bit of the clock. So we're taking the clock and distributing it geometrically over the Hilbert space. Okay.
I found it a little easier to illustrate with nine states; they got it down to six. I've shown these colors here, and the colors represent the clock state: a pattern of colors on the 2D grid represents a point in time. We'll talk a little more about how that's done. Some of these colors represent qubits, so they actually have two dimensions inside them. So in total I have the tensor product of zero and one with these qubit-carrying colors, which gives me six states, plus these other ones. Okay, so what does it look like? On this 2D grid, I'm thinking of time as going from left to right, and my n-qubit computation space runs vertically. What this picture represents is that I've executed the first layers of my circuit and am now working on the current layer. This little white particle here, the particle in a white state, indicates where the action is; it's like a cursor. I'm going to make sure that my clock terms only apply if there's a white particle in the picture. This column is where the actual state of the computation is encoded: because I'm in the middle of a layer, these particles encode the value of a qubit. The white state has a white-zero and a white-one, and the green state has a green-zero and a green-one, so this vertical column encodes an n-qubit register; these states are done, and these states have yet to trigger. First of all, I don't want just any random pattern of colors, so I can use forbidden-configuration terms to make sure it's a sensible layout of colors; we talked about how to do that before. For example, I don't want a yellow particle, which indicates it has yet to trigger, to the left of a black particle, which has already triggered. And advancing the clock is what implements the gates.
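The "sensible layout" check can be thought of as a list of locally forbidden color pairs, each enforced by an energy-penalty term on neighboring particles. Here's a minimal sketch of that idea; the single rule below (yet-to-trigger yellow immediately to the left of already-triggered black) is the one example from the lecture, and a real construction would have a longer list of forbidden configurations, in both grid directions.

```python
# One illustrative forbidden pair: (left color, right color).
FORBIDDEN_PAIRS = {("yellow", "black")}   # yet-to-trigger left of already-triggered

def row_is_valid(row):
    """Check one horizontal row of colors against the local rules.

    A layout is penalized (invalid) if any adjacent pair of particles
    matches a forbidden configuration; otherwise it gets zero energy
    from these terms.
    """
    return all((a, b) not in FORBIDDEN_PAIRS for a, b in zip(row, row[1:]))

ok = row_is_valid(["black", "black", "white", "yellow", "yellow"])    # sensible
bad = row_is_valid(["yellow", "black", "white", "yellow", "yellow"])  # penalized
```

The point is that validity is checked purely by two-local rules, which is what lets the penalty be written as a sum of geometrically local terms on the grid.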
Say I have a term that applies to white on top of green: I advance the clock once, and as I do that, I execute a gate. So this term applies to this configuration, and when it applies, it applies one gate of the circuit. You need to make sure that at most one propagation term applies to each valid clock state, and that's what gives us this nice path of clock states. You can imagine what we call a configuration graph: a configuration is just a pattern of colors on the 2D grid. A bunch of them are invalid, because I've applied an energy penalty to them, and some of them are valid. So if I look at this whole big graph, where each vertex is one of these patterns of colors on the 2D grid, a whole bunch of them are invalid, those have a penalty of one, and the correct sequence of the clock going from time zero to time T constitutes a path. Exactly one propagation term applies to each of these configurations, so I get a nice clean path, and a whole big mess over here that I don't need to worry about, because those are going to be high-energy states. Okay, so we've got a little movie here of the 2D progression. This term applies: apply the gate. Now this one applies: apply the gate. And now I have to move all the qubits over; again, it's another little pattern of colors, and I make sure that at most one propagation term applies at any given time. This moves that qubit over; I move the cursor up, and now the cursor is red, indicating that I'm going upward, and that copies it on up. And then I'm ready to start again. Obviously there are a lot of fussy little details in making this work, but that's the general idea; it's not too hard to imagine how it could be ironed out to show that the configuration graph is in fact a nice straight line.
All right, any questions? Yes, sure. Oh, I have to go back to this movie. That's all right. Here. Yeah. Because that's just a qubit: the star means it's some qubit, either zero or one; it's the computational space. I want to leave it open; all I know is that I'm applying the gate to it. This, yes. So it would apply to any green state; I would have one of these terms for each of 00, 01, 10, 11. So I'd have four copies of this, in order to make it a complete unitary. Right, good question. Oh, did I miss a minus? I did, yeah, I should have minuses here. If I give this talk enough times, I'll debug it eventually. So we get to see the movie again. Okay. Now let's talk about the 1D case: very similar ideas, but there are a couple of technical challenges to overcome, and I'll mention what they are. A little bit more about 1D first. This result was perhaps a little more surprising, for two reasons. One is that the classical counterpart is easy classically. The other is that in numerics, 1D was typically considered pretty easy and 2D extremely challenging. In 1D there's a technique called the density matrix renormalization group, by my colleague Steve White at UCI, which has been enormously successful in the numerical regime at finding ground states of 1D systems. And the classical analog is in P, so it's not too hard: you can use dynamic programming, sweeping from left to right, or a divide-and-conquer strategy, and it's not too hard to show that the problem is polynomial-time solvable. So there's a big difference now between the classical and the quantum, and the difference is that quantum superposition gives us an extra dimension.
Even though in the quantum case I have a 1D system, I'm actually encoding something like a matrix through the quantum superposition of these different states, whereas you don't have that in the classical world. Here's a little sketch of how this proof goes. We take the n-by-T grid from before and lay it out on the line, so the computation proceeds from left to right. Again we have a little cursor where the action, the clock advance, applies, and as we go from left to right it applies the gates of the circuit. Then all the information of my computation is sitting here, and I need to move it over by one in order to apply the next layer. So this little cursor, with again a lot of fussy details, shuttles back and forth, carrying the information between this set of qudits and this set of qudits. There's an active site, and it shuttles back and forth. One technical challenge that arose: before, we had that nice clean picture of the configuration graph, with a bunch of invalid states and only one chain of valid computations. In this case, that wasn't possible to do. If I look at the configuration graph and label valid and invalid clock configurations, this was the picture last time: a whole mess of invalid clock states and only one single chain of valid clock states. In this case we have these extra chains, meaning I can have a valid clock state that's really not part of a valid computation, but that eventually, if I propagate according to the propagation term, ends up in an invalid clock state. So when we're lower bounding the second eigenvalue, this is where the correct ground state is going to lie, and we have to lower bound the second eigenvalues of these blocks accordingly.
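A quick numerical aside on these chain blocks (my own illustration, not from the lecture): restricted to a single chain of K clock states, the propagation term is, in a suitable basis, half the Laplacian of a path graph, whose second-smallest eigenvalue has the closed form 1 - cos(pi/K), which is of order 1/K^2; the argument sketched here only needs the weaker 1/K^3-type bound, and the extra penalty terms on the bad chains change the constants but not the flavor.

```python
import numpy as np

def chain_propagation_matrix(K):
    """(1/2) * Laplacian of a path with K vertices, i.e. the propagation
    term on a chain of K clock states, written in the time basis."""
    H = np.zeros((K, K))
    for t in range(K - 1):
        H[t, t] += 0.5          # endpoints end up with 1/2, interior with 1
        H[t + 1, t + 1] += 0.5
        H[t, t + 1] -= 0.5      # hopping between adjacent times
        H[t + 1, t] -= 0.5
    return H

K = 50
vals = np.linalg.eigvalsh(chain_propagation_matrix(K))
lambda_2 = vals[1]              # vals[0] is (numerically) zero: the history state
```

So the second eigenvalue of a clean chain shrinks only polynomially in the chain length, which is why upper bounding the length of the chains matters for the overall spectral-gap argument.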
Each of these connected components forms a block in my propagation term, and I have to lower bound the lowest energy of these blocks to make sure my ground state doesn't lie inside them. So here's the propagation term applied to a chain, and here's the single penalty: this state is a valid clock state, but it does eventually propagate to an invalid one. Again, using similar techniques with the angle lemma, I can prove a lower bound of one over K cubed, where K is the length of the chain; so I also have to upper bound the length of the chain. Okay, and this was later improved. Another thing that makes this non-physical is that this is a highly engineered Hamiltonian, where the term between this pair of qudits is different from the term applied to the next pair, and the next. Whereas naturally, if your Hamiltonian represents a physical system, a material say, you'd expect much more uniformity: you'd expect all neighboring particles to be governed by exactly the same term. So the question arose: if you have translational invariance, meaning your Hamiltonian is exactly the same applied to every pair of particles on the line, does that make the problem easier or not? The Hamiltonian describing the energy of the system is the same for each pair of neighboring particles. Now we have a bit of a challenge in how to express this as a computational problem; it's not completely clear how to do that. There's a subtle difference between two things I might mean by translational invariance when looking at it through the lens of computational complexity. In the first version, my input is the number of particles, the dimension of the particles, and the terms themselves.
So the input specifies the number of particles, the particle dimension, and the constraint to apply to each neighboring pair along the line; and if it's a 2D system, I allow myself one term in one direction and another term in the other direction. In a different, stronger version of translational invariance, the Hamiltonian terms are fixed as part of the problem, and the only input is the size of the system. And think about it: when a physicist studies, say, the AKLT model, it's considered a different problem from studying the Bose-Hubbard model or some other model. So really, the term itself characterizes the problem, and you're just interested in the length of the chain. This second version of translational invariance gives us a handle on understanding the difficulty of different terms. But it also presents a problem for computational complexity: remember, when we do these lower bounds and hardness results, we're embedding an entire infinite family of problems into a Hamiltonian problem, and here there's not much to work with; there's just a number. So it becomes more challenging to understand it through a computational complexity lens. So this is the variable-constraint problem, and this is the fixed-constraint one. In the fixed-constraint problem, the problem parameters are the d-dimensional particles and the term applied to each neighboring pair, and the input is just n, written in binary, the number of particles in the system. The question is: how hard is this? The output is whether the energy is at most some value that's a function of n, or at least some other value. And again, I need at least a one-over-poly gap for this to make sense.
Now, first of all, notice the input size: in computational complexity, we judge the efficiency of our algorithms as a function of the length of the input. When we talk about polynomial time, we mean polynomial in the length of the input. And notice that the length of our input has just gotten exponentially smaller. Before, the description of the Hamiltonian scaled with the size of the system; now the length of the input is logarithmic in the size of the system. So this is going to change the complexity, and you're going to see additional EXPs in the results. It doesn't mean that the problem got harder: the verifier is still running in time polynomial in the size of the system, but that's exponential in the number of bits required to specify the input, which is the standard we use in complexity theory. Yeah, or just understand that the EXPs are there somewhat artificially. I'm trying to remember; we thought about that, and for some reason we didn't do it that way, and I forget why, but there was a little glitch of a reason. It is a little odd, because we really expect our algorithms to scale with the size of the system, certainly. Yeah, exactly. So here, instead of QMA, we're going to be looking at QMA-EXP, which means that the size of the witness is now exponential in the length of the input to the problem, as is the running time of the verifier; but remember, as we just said, this is really still polynomial in the size of the system. Yeah, I think that's what we ran up against.
There was some sort of little, I think the problem is that the language is sparse then, and once you have a sparse language, I think it becomes easier, and there are results on sparse languages, so that if you're given sort of a uniform hint along with it, it sort of breaks the complexity result. I think that was what we kind of ran up against. So here the name of the game is we have some fixed Hamiltonian term and some language, it's actually a promise problem in QMA-EXP, and we're translating the language into the Hamiltonian term, but now the Hamiltonian term is fixed once and for all and is input independent. In the reduction, we're gonna take an instance X and we wanna know the answer to this, and we're embedding it in this Hamiltonian problem, and the only way we can do that is to embed it in the size of the system. Okay, so this is the only mechanism we have to encode the input, and so the Hamiltonian will encode information about the problem we wanna solve but nothing about the input, it's input independent. Okay, and we want the same kind of thing where if there exists a witness that causes the verifier to accept, then if I take this Hamiltonian term and apply it to every pair of particles in a chain of length N, the ground energy is at most P of N. And similarly, if it's a no instance, if every witness causes the verifier to reject with high probability, then the energy's at least some larger value, okay? So how do you even sort of make sense of the length of the chain? And what happens is that we're embedding two different computational processes inside this Hamiltonian, using this sort of standard circuit-to-Hamiltonian construction, but now running two different processes in parallel. The first process is just a silly little binary Turing machine that just increments a counter over and over again and executes for N steps, okay?
And then at the end of this process we'll have some string sitting in our computation space, and we'll use that string as the input. So the binary counter Turing machine just starts with zero and continually increments that counter. After N steps, some string is gonna appear on the tape, okay? When you think of a binary counter, in N steps I'm gonna be able to count roughly up to N, there's gonna be a little overhead in my Turing machine, but the length of the string is gonna be roughly logarithmic in N. So I'm executing this binary Turing machine for N steps and I end up with some string, and that is the input to the verifier. So the reduction asks: if I wanna know the answer for input X, how many steps of my binary counter Turing machine do I have to execute until I end up with the string X sitting on my tape? That's the reduction, and that's the N that I choose. Any questions about that? So you can think of this as: I wanna embed the answer to some computational problem with input X. How do I do that? Well, I figure out, for this binary counter Turing machine, how many steps it has to take in order to write X, and then I make that the length of the chain. Does that make sense, can I answer a question about that? And we know that any Turing machine can be made quantum. So the computation that's embedded in the ground state of the Hamiltonian simulates this binary counter Turing machine for N steps and then simulates the verifier on whatever string happens to be sitting there. Yeah, okay. The reduction is a very simple computation. I can describe for you this binary counter Turing machine, and you could figure out, if I want it to count up to the binary string corresponding to X, how many steps of that Turing machine it takes. That's the reduction.
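The reduction just described can be sketched in a few lines. This toy version (names and simplifications mine) counts whole increments rather than individual head moves of the actual Turing machine, which would add a polynomial overhead per increment:

```python
def steps_until_string(x: str) -> int:
    """Simulate the binary-counter Turing machine at the level of whole
    increments: start from 0 and increment until the tape holds x.
    (The real machine's step count is larger by a poly(|x|) factor,
    since each increment takes several head moves.)"""
    target = x.lstrip("0") or "0"   # canonical binary form of x
    counter, steps = 0, 0
    while format(counter, "b") != target:
        counter += 1
        steps += 1
    return steps

# Reduction: to decide input x, ask the fixed-constraint problem about
# the chain of length n = steps_until_string(x) (up to the per-increment
# overhead). Note n is exponential in |x|, hence the EXPs in the class.
print(steps_until_string("1011"))  # 11 increments to reach binary 1011
```

The point of the sketch is that the map from x to the chain length n is itself an easy classical computation, so the hardness lives entirely in the Hamiltonian.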
And then N, the number of steps it takes, is gonna be the length of the chain. That's gonna be the input in the reduction. So the length of the string X that I end up with is gonna be logarithmic in the length of the chain, in other words the chain is exponentially long in the length of X, and that's okay. My process is gonna be polynomial in N, the length of the chain, and it can execute this binary counter Turing machine in time that's polynomial in N, the size of the system, and then it ends up with X sitting there on the tape and uses that as the input. So the reduction is just this understanding of what this binary counter Turing machine does. The Hamiltonian is the thing that executes this: it encodes a computation which simulates the binary counter Turing machine for N steps and then simulates the verifier. Yes. Okay, so we've talked about embedding, and I'm gonna kind of skip over some of the details, but let me just give you a little bit of an idea. I've been using all along the quantum circuit model as my model. The original quantum computation model was a quantum Turing machine, back in the days of Bernstein-Vazirani. So we know that every Turing machine can be made reversible, and there are sort of natural definitions of quantum Turing machines, and that's what I'm embedding inside this Hamiltonian: instead of executing a circuit model, I'm actually executing a quantum Turing machine. The beauty of Turing machines is that they have this sort of translation-invariant set of rules. So if I tell you a Turing machine, it can compute on any string in the language, and I just apply the same finite set of rules at every location on the tape, as opposed to circuits, which are highly position-dependent entities. So you can imagine a string of particles, and one particle is in a special state which encodes the head of the Turing machine.
Okay, sort of like the cursor that I had in my clock constructions, okay? And now my clock only applies to locations where that head is sitting, okay? And what does a Turing machine do? It reads the current symbol, it overwrites the symbol, it moves left or right. That's sort of a local little change, and that's exactly what's embedded into the propagation term. So it sort of reads the current state and moves right or left according to this propagation term, okay? It's sort of hard to describe, but I hope that gives you a little bit of an idea, yeah. It's made unitary, and that was done way back when by Bernstein-Vazirani. They showed that certainly any classical Turing machine can be made reversible, that's an important component. And then on top of that, you can actually have unitary transitions. So a transition of a quantum Turing machine is a unitary operation in which the head writes a symbol and moves left or right, but it can do multiple things in superposition. Yeah, all right, any other questions? Keep my eye on the time. And so the dimension of the particle is constant, but it's a constant that only a theorist could love. I mean, it's a big number. You know it's a big number if you don't bother to calculate it in the paper. So this is the chain of particles from left to right, and a vertical strip is a single particle. So it's encoding a whole bunch of information. It's got multiple processes going on. And in particular, there's this clock, and this may answer your question a little bit more. This little guy kind of shuttles back and forth in each step of the clock. As it sweeps from left to right, it triggers a transition of the Turing machine. So I have n ticks of my clock for every step of the Turing machine, so there's an extra factor of n in there. I will just say a couple of things now about how we translated this into the infinite case.
And again, many physicists are really not interested in finite systems. The finite systems are a necessary part of the fact that they're computing these on an actual computer, because the boundaries introduce all kinds of weird effects. What you're really interested in is how your system behaves in the thermodynamic limit, how it would behave on infinite systems. And translation invariance of course is important here because I can't specify every single term of an infinite Hamiltonian individually. Okay, so in the thermodynamic limit it's translation invariant, but each grid dimension has its own term. And what I'm interested in now is the ground energy density. So if I think of H of n as applying this Hamiltonian to an n-by-n finite grid, I'm gonna let n go to infinity, but I wanna scale by the number of particles. So this is the energy per particle as n gets very large, and this is the quantity of interest. So in this very strong version of translation invariance, my input, so we know in the weaker notion, the complexity of the problem. If the Hamiltonian term is part of the input, first of all, one of the first results on the thermodynamic limit was a beautiful breakthrough result by Cubitt, Pérez-García, and Wolf showing that in this infinite case, if I give you the Hamiltonian term as part of the input and I ask you whether the resulting Hamiltonian is gapped or not, this is undecidable. And then we showed that in this input-dependent setting it's also QMA-EXP-complete to determine the energy per particle. But again, this is the weaker notion of translation invariance. What happens in the stronger one? Well, here's the finite version that we talked about, where you have the variable constraint versus the fixed constraint. Now if it's an infinite system, in the variable-constraint case I still have an infinite family of problems in which to embed a computational problem.
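The ground energy density just described can be written as a limit (notation mine):

```latex
% Ground energy density in the thermodynamic limit: H(n) is the fixed
% translation-invariant Hamiltonian on the n x n grid, lambda_min its
% smallest eigenvalue; dividing by n^2 gives energy per particle.
E_\rho \;=\; \lim_{n \to \infty} \frac{\lambda_{\min}\big(H(n)\big)}{n^2}
```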
In the infinite case, there's nothing. There's just a number, because the Hamiltonian term is fixed and I let n go to infinity. That's all it is, a single number. So how do I encode a hard problem in just a single number? The answer is that it's an infinite-precision number, so I can embed the answers to different inputs in different locations of this infinite expansion. So we studied this as a function problem. The input now is the precision I'm asking for. The Hamiltonian is fixed, so the energy density is a fixed number, and I'm asking: given a level of precision, output an estimate of the true energy density that's within one over two to the n. So this captures the effect that calculating every additional bit of precision numerically is harder: you have to take bigger and bigger systems and spend more computational time. So this function problem is capturing the computational effort in additional precision. Why a function problem? A couple of reasons. One is you could argue that it's a little more natural. I mean, no one hands numerical physicists two numbers and says, is it bigger or smaller? So that's a bit of an artificial thing that we adopt in computational complexity. But in the computational complexity sense, it's hard to ask about the nth bit without actually knowing what the first n minus one bits are. So when we're looking at the complexity of the problem, it makes sense to ask for all of them, yeah. Yeah, but the thing is that that was a Hamiltonian-dependent version of the problem, so I had an infinite family of Hamiltonians to work with in that computational problem. Here we don't, so.
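Stated compactly, the function problem is the following (again my notation): the fixed term determines a single real number, the energy density, and the input is just the requested precision.

```latex
% Function version of the energy-density problem: the Hamiltonian term,
% and hence E_rho, is fixed once and for all; the input is just the
% precision n, and any estimate within 2^{-n} is an acceptable output.
\text{on input } n,\ \text{output any } y \ \text{with} \quad
\big|\, y - E_\rho \,\big| \;\le\; 2^{-n}.
```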
And I guess, I mean, I don't have time to go into all the details of this, but basically we moved to a function version of the problem just because it made more sense in this setting. In order to even talk meaningfully about the nth bit, you have to have calculated the first n minus one. And when you talk about function problems, you actually need to talk about oracle classes as your complete class. So we implemented a translationally invariant version of an old result due to Krentel, and I don't have the time to go through it. So imagine, in classical complexity, just a function version of weighted Boolean SAT. I'm giving you m clauses on binary variables, and I'm giving you weights on those clauses. You come up with some binary assignment to those variables, and the cost is the total weight of the satisfied clauses: each clause contributes its weight if the assignment satisfies it and zero otherwise. So some clauses count more than others. Is the problem itself clear? It's exactly Boolean satisfiability, except I'm weighting the clauses and asking you to maximize the weight of the satisfied clauses. Is that clear? Okay. The decision problem is still NP-complete: I give you a threshold T, can you satisfy a total weight of at least T? The function version of the problem is now complete for this weird class FP^NP. FP is the class of functions computable in polynomial time, and here you have access to an NP oracle. And if you think about why you would need that, it's that you're actually computing a number, okay? So basically what you need to do is binary search.
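The weighted SAT function problem described above can be made concrete with a brute-force sketch (exponential time, purely for illustration; the encoding conventions are mine):

```python
from itertools import product

def max_weighted_sat(n_vars, clauses, weights):
    """Brute-force the function version of weighted SAT: over all 2^n
    assignments, return the maximum total weight of satisfied clauses.
    A clause is a list of literals: +i means variable i is true,
    -i means variable i is false (variables numbered from 1)."""
    best = 0
    for assignment in product([False, True], repeat=n_vars):
        total = 0
        for clause, w in zip(clauses, weights):
            # a clause is satisfied if any one of its literals is true
            if any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause):
                total += w
        best = max(best, total)
    return best

# (x1 or x2) with weight 3, (not x1) with weight 2: setting x1=False,
# x2=True satisfies both clauses, so the optimum weight is 5.
print(max_weighted_sat(2, [[1, 2], [-1]], [3, 2]))  # 5
```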
You ask your oracle, is the best value at least T, and you binary search on T. So you need multiple queries to your oracle. And we prove similar results for this problem here, but with all these EXPs in there for the same reason as before: the input has gotten very small. So anyway, I will leave it at that, and we'll pick up next time with commuting Hamiltonians. So any questions? Yeah.
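The binary-search pattern behind FP^NP can be sketched with the NP decision oracle abstracted as a callable (a hedged illustration of the idea, not the actual complexity-theoretic machinery):

```python
def optimum_via_oracle(oracle, max_weight):
    """Recover the exact optimal value using only threshold queries
    "is the optimum at least T?" -- the FP^NP pattern: logarithmically
    many calls to an NP decision oracle, binary-searched over the range
    of possible values."""
    lo, hi = 0, max_weight            # the optimum lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if oracle(mid):               # "can we achieve weight >= mid?"
            lo = mid                  # yes: optimum is at least mid
        else:
            hi = mid - 1              # no: optimum is below mid
    return lo

# Toy oracle standing in for the NP oracle: the true optimum is 5,
# matching the weighted-SAT example above.
print(optimum_via_oracle(lambda t: t <= 5, 8))  # 5
```

Each query is a decision problem ("is the optimum at least T?"), which is why computing the number itself naturally lands in a function class with an NP oracle rather than in NP.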