And, okay, part two today. So, thanks for coming out early in the morning. I'm a morning person; glad you are morning people too, or at least trying to be. It occurred to me that it would be good to give you an outline for the week. I forgot to do this in my first talk. So, yesterday we did the introduction and motivation for the local Hamiltonian problem. Today will be taken up mostly by the basic circuit-to-Hamiltonian construction, so this will be one of the more technical lectures of the week. Lecture three will be a little bit more of a survey: I'll talk about QMA-hardness results past the basic circuit-to-Hamiltonian construction, what we've done with that, and how we've brought those models a little closer to natural physical models. Then in four, I'm gonna change gears. In four and five, I'm gonna look at two different special cases of local Hamiltonians, and each of those lectures will be a combination of survey, lay of the land, and a couple of the technical pieces that sit behind most of the work there. Number four is the commuting local Hamiltonian problem, and five is stochastic Hamiltonians. Those kind of sit in between classical and quantum, so we have a little more hope of making progress on them. So, we have done number one, and we will go on to number two. Just a little bit of review. We talked about the class NP, which is essentially the class of problems for which it's easy to check a solution, although it may be hard to find one. So, if x is a yes instance, meaning it's in the language, there's some witness, some classical string, that a polynomial-size circuit can check and verify is correct. And if x isn't in the language, no matter what string I give you, that deterministic circuit is gonna reject. And we discussed the fact that Boolean satisfiability is NP-complete. So now, today, we're looking at the quantum version of this.
It's a little bit different because a quantum verifier, by its very nature, is gonna be probabilistic, so we have to let the verification be probabilistic. A promise problem is in QMA if there's a polynomial-size circuit family such that if x is a yes instance, there's some quantum witness so that if I feed x and that quantum witness into the circuit, the probability I accept is at least two thirds. And if x is a no instance, no matter what quantum witness I give you, the probability of accepting on input x and that witness is gonna be at most a third. The witness is of size polynomial in the length of the instance. And we will show today that the local Hamiltonian problem is QMA-complete. On invalid instances, there's no responsibility to do anything, so all bets are off. And as we talked about last time, you can boost those probabilities exponentially close to one and zero. Okay, so, just to show you the architecture of a reduction, let's review what we discussed for Boolean satisfiability. We start with a generic language in NP, and the only thing we know about it is that it has a verifier. So, if x is in the language, then there's gonna be a string that causes the verifier to accept. And we translate this computation of the verifier into a static Boolean formula. You could do that by various means, but it's not too hard to translate a circuit into a Boolean formula. And we wanted this Boolean formula to have the property that there's a string that causes the circuit to accept if and only if the Boolean formula is satisfiable. Okay. I'm gonna look at a slightly more quantum-like version of this before we step over to quantum. So let's say that instead of having a normal classical circuit, I have a reversible circuit. We know that any classical circuit can be converted into a reversible one.
And here it is. These are still little classical gates that do classical bit logic, and I have these wires going from left to right. The input to the circuit is the witness Y together with the problem input X. So I can just make my circuit reversible instead of using the standard classical one. And you can imagine placing a bunch of variables on this. A column represents my computation at a point in time, and time is going from left to right. So if my Boolean circuit has T gates, the width of this matrix here is gonna be T plus one. And the number of lines, the number of variables in my computation, is gonna be M. So the total number of variables in this little matrix is M times T plus one. And you can imagine little local logic that enforces that this is a correct computation. So here, for example, there's no gate present; we just wanna copy over the value of the variable from the left to the right. Here we have a little gate present, so we have to make sure that the output variables correspond to the gate applied to the input variables. Each of these is just a constant-size little piece of logic that I'm gonna throw into my Boolean formula. And if the Boolean formula is satisfied, then this has to correspond to a sensible computation. Okay. Now let's try to step into the quantum world and talk about what we're gonna do there. Now we have a quantum circuit. The input is still a classical string. The witness is a quantum state. At the end we measure, and if it's one we accept; otherwise we reject. And we want to take this generic language, a promise problem actually, in QMA and translate it into an instance of local Hamiltonian. Same thing: if X is a yes instance, we want this Hamiltonian to have low energy; if X is a no instance, we want this Hamiltonian to have high energy.
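The local consistency checks described above can be sketched directly in code. This is a minimal sketch, not the lecture's actual formula construction: the gate set and the wire layout are my own toy choices for illustration.

```python
# Column-consistency checks for a reversible circuit, done directly on bits.
# A tableau is a list of T+1 columns; each adjacent pair must be related by
# a constant-size local rule: wires touched by the gate follow the gate's
# action, all other wires just copy over.

def apply_gate(col, gate):
    """Apply one reversible gate to a column (list of bits)."""
    kind, wires = gate
    out = list(col)
    if kind == "NOT":
        (a,) = wires
        out[a] ^= 1
    elif kind == "CNOT":
        a, b = wires
        out[b] ^= out[a]
    elif kind == "TOFFOLI":
        a, b, c = wires
        out[c] ^= out[a] & out[b]
    return out

def consistent(columns, gates):
    """Check every adjacent pair of columns against its gate."""
    for t, gate in enumerate(gates):
        if columns[t + 1] != apply_gate(columns[t], gate):
            return False
    return True

# A tiny 3-wire circuit with T = 2 gates, so T + 1 = 3 columns.
gates = [("CNOT", (0, 1)), ("TOFFOLI", (0, 1, 2))]
cols = [[1, 0, 0]]
for g in gates:
    cols.append(apply_gate(cols[-1], g))

print(consistent(cols, gates))   # a correct tableau passes
bad = [c[:] for c in cols]
bad[1][1] ^= 1                   # corrupt one variable
print(consistent(bad, gates))    # the local checks catch it
```

Each check only looks at a constant number of variables, which is exactly what lets it be expressed as a constant-size clause in the Boolean formula.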
So we want the yes instances to correspond to Hamiltonians with ground energy at most E, and no instances to correspond to Hamiltonians with ground energy at least E plus delta. And if we get an invalid instance, we're off the hook. Yes? So just to be clear: for local Hamiltonian, an invalid instance is one whose ground energy lands in that interval, in the gap. But for QMA, it's quite generic. All I've told you is that I've categorized my strings into yes instances, no instances, and invalid. So it's a very generic format; there's no Hamiltonian in that picture, just a general promise problem. But here, yes, those are the invalid instances. So we're going to start with the locality of our Hamiltonian being log n, so each term is going to operate on log n qubits, and then we'll improve that to five. And T is the number of gates in my computation. So let's take a look. Here we have the picture of the classical reduction we did before, and here's a schematic of our quantum verifier circuit. We have input x and the quantum witness, and we have gates happening along these qubits with time going from left to right. So far, so good. And you can imagine looking at the state at each point in time. The input is just x and my witness psi. After one step, I've applied one gate; after two steps, I've applied two gates; on down: after t steps, I've applied t gates. So we want to somehow create a Hamiltonian whose ground state represents this computation, just like we created an instance of Boolean satisfiability whose satisfying assignment represented a correct computation. In the quantum case, it's going to be a little bit different, and I'll discuss in a second why it has to be different. The correct ground state is actually going to be a quantum superposition of all of these different points in time in our computation.
Now, why didn't we do what we did before, in the classical picture? Your first pass, your gut instinct, would be: okay, I'm going to have one quantum register representing the state at this point in time, another register representing the state at the next point in time, going from left to right. Any idea why that might not work? That would be mimicking the classical version. The problem is that you can't check, using local operations, that you're copying the state over correctly. So consider this really simple case where my gate is just an identity gate. Say I stored the state of the computation here in one quantum register, and the state of the computation after this step in another quantum register. I want my Hamiltonian to check that those two states are exactly the same between those two quantum registers. And there's no local check that can achieve that. In particular, there's no local check that could detect a change in relative phase, okay? And that's actually going to be one of the little exercises you do: verify that there's no local check that could do this. So one of the reasons that we have to step to a quantum superposition to store the history state is that this scheme just won't work, okay? No local check can detect this. So here's the computation state. The slide is not showing everything; I think I corrected this last night, but I have an old version of my slides, so some of these pictures are supposed to be slid over. Anyway: I have a computation register and a clock register. The clock is showing the point in time, so I have clock time 0, 1, 2. And these are going to keep everything orthogonal. Note that these clock states are orthogonal to each other, so there's going to be no interference between the different computations. And what you can't see is that in the computation register, corresponding to time zero, is the input.
Corresponding to time one is the first gate applied to the input, on down. So if I write this in summation form, I want to create a Hamiltonian whose ground state is the sum over all points in time, an even superposition. What I'm going to store in each of these states is the clock register telling me what time it is, together with the state of the computation after that many gates have been applied. And this is a sum over orthogonal states, because my clock register is making sure that these states are orthogonal. All right? We're good? Okay. So how do we actually create a Hamiltonian that enforces that this is the ground state? That's the name of the game at this point. Let's start with a simpler problem. Forget about the computation register; let's just work on the clock for a second. I want to create a Hamiltonian over s qubits, where s is going to be about log T, because that's what I need to encode the time, whose ground state is the even superposition of the times zero through T. I'm using the number t here to denote a binary representation of the number t. The Hamiltonian that I'm going to use is the following, and let's pick it apart by looking at one particular term. So this is one term, and this is the little matrix corresponding to that term: one-halves along the diagonal, minus one-half on the anti-diagonal. This row and column correspond to time t, and these to time t plus one. This little matrix should look a little bit familiar. It has two eigenstates: the even superposition (1, 1), which is the ground state with eigenvalue zero, and (1, minus 1), with eigenvalue one. So what this tells me is that if I have time t present in my ground state, I'd better have t plus one with exactly the same amplitude. This little term enforces that if t is represented, then t plus one has to be represented with exactly the same amplitude. So now I've got a sum of these terms.
In this corner, I'm enforcing that the amplitude of zero has to equal the amplitude of one. Adding in the next term, the amplitude of one has to equal the amplitude of two, on down. So what this sum of all of these terms gives me is that the amplitudes of zero through T all have to be exactly the same. So the only ground state of this matrix is the equal superposition of all the clock states. And the reason we are starting with a log-n-local Hamiltonian is that this clock is representing a number in binary, and I have to operate on log T, which is order log n, qubits to do that. So here's what we call the propagation matrix, which is enforcing the equal superposition of different points in time. We've just argued that this eigenvector has eigenvalue zero, and by inspection it's not too hard to see that that's the case. The question is: what's the second-smallest eigenvalue? I want a bit of a gap, because I don't want to be able to cheat. I want to enforce that the ground state is, in fact, this equal superposition, and that any orthogonal state picks up an energy penalty. So I not only have to argue that the ground energy is zero; I have to show that anything else is going to have high energy, a significant penalty associated with it. Well, this is actually a fairly well-understood matrix. If I look at the identity minus this matrix A, I get a different matrix B, which is well understood because it corresponds to a random walk. Note that these two matrices are going to have the same eigenvectors, and there's a close relationship between the eigenvalues: if lambda is an eigenvalue of B, then one minus lambda is going to be an eigenvalue of my propagation matrix. So if I analyze this one, I'll know the answer for that one. So let's take a closer look at that random walk matrix. B describes a random walk on a line.
The transition probability is just one-half in each direction, all right? For any random walk, the maximum eigenvalue is always 1. And for this one, I'm just going to tell you that the second-largest eigenvalue is at most 1 minus 1 over (2 times the dimension squared); this spectral gap is a well-known fact about random walks. So what this tells me is that for the matrix I care about, the propagation one, the ground energy is 0, and the next-smallest eigenvalue is roughly 1 over T squared, which is good because I want an inverse-polynomial gap there. I want a big penalty for not being in that state. Now we need to bring the computation part back into the picture, okay? And now what I want is: for any state, if at time t minus 1 my computation is in state phi, then at time t the computation should be in the state with gate t applied to phi. So no matter what state I am in at time t minus 1, at the next step, I want to have applied the t-th gate. All right? That's the name of the game here. I know how to do this with t and t plus 1; now I just need to work the unitaries into this, okay? So the propagation term for step t looks like this. It looks a lot like the one just for the clock, but I've added in these unitaries applied to the computation register. What this says is: if I'm at time t minus 1 and transitioning to time t, I'd better be applying the t-th gate to the computation register. And I need the backward propagation as well: if I'm at time t and going back to t minus 1, I'd better be applying the inverse of gate t, okay? So if I look at my little matrix again, I'm applying it to different basis states, but it's essentially the same thing. If I have phi at time t minus 1, then at time t I'm going to have the gate u t applied to phi. So it's exactly the same term.
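Going back a step, the clock-only propagation matrix and its 1 over T squared gap can be verified numerically. This is just a sanity-check sketch; the value T = 8 and the assertion thresholds are arbitrary choices of mine.

```python
# Numerical check of the clock-only propagation Hamiltonian: the sum over t of
# 1/2(|t><t| + |t+1><t+1| - |t><t+1| - |t+1><t|) on the (T+1)-dim clock space.
import numpy as np

T = 8
H = np.zeros((T + 1, T + 1))
for t in range(T):
    H[t, t] += 0.5
    H[t + 1, t + 1] += 0.5
    H[t, t + 1] -= 0.5
    H[t + 1, t] -= 0.5

evals = np.linalg.eigvalsh(H)   # ascending order

# Ground energy is 0, achieved by the uniform superposition of clock states.
uniform = np.ones(T + 1) / np.sqrt(T + 1)
print(np.allclose(H @ uniform, 0))   # True: eigenvalue 0
print(abs(evals[0]) < 1e-12)         # True

# The next eigenvalue is 1 - cos(pi/(T+1)), which is of order 1/T^2.
gap = evals[1]
print(np.isclose(gap, 1 - np.cos(np.pi / (T + 1))))   # True
print(gap > 1.0 / (T + 1) ** 2)                       # True
```

The matrix is one-half times the graph Laplacian of a path on T + 1 vertices, which is where the closed-form eigenvalues come from.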
I'm just applying it to different states in my Hilbert space, okay? So the propagation Hamiltonian is just going to be a sum of all of these H t, where each H t applies a different gate to the computation register, okay? All right. So let's look at this propagation term a little more closely; now I'm applying it to a much larger Hilbert space. Question? Great. So the gates are unitary: I'm taking a circuit with unitary gates and translating it into a Hamiltonian, and the Hamiltonian isn't unitary. My Hamiltonian has to be Hermitian, with real eigenvalues. So this little matrix here has eigenvalues 0 and 1, okay? Yes? And that's actually a little exercise question, but basically, if you drop the backward term, you don't end up with a Hermitian matrix. Any other questions? When you ask the question that we asked in the homework, that's a good sign; it means you're getting it and asking the right questions. So now we're looking at not just the Hilbert space of the clock. We have a big Hilbert space: the Hilbert space of the clock tensored with the Hilbert space of the computation register. So it's a much bigger matrix now. Let's take a look at what H-prop looks like as that much bigger matrix. I can imagine an orthonormal basis for the input register; these are just going to be the standard basis states, okay? And I can imagine an orthonormal basis for the witness register; anything will do. Now if I were to express an orthonormal basis for the entire space, the first crack might be: okay, I've got a basis for the input, a basis for the witness, and a basis for the clock, and I can just tensor these together. The problem is that if I were to express H-prop in this basis, it doesn't have a lot of structure that I can grab hold of. So we're going to look at a slightly different basis.
And I think I'm going to have the same alignment issue with this slide; let me describe it. Basically, the basis I'm going to use is indexed by each possible starting state. So j and k are going to index the input and the witness to my circuit. So I have a basis for time 0, and you can see this part here: at time 0 I can have any possible input and any possible witness, so I have a basis for what the clock and the computation registers look like at time 0. And instead of using the more generic basis, the next state in my list will be time 1 with gate 1 applied to those starting states, and the next one will be time 2 with two gates applied, on down. So these are what the basis states look like: different times, different starting states, with the corresponding number of gates applied. If I have time t, I've applied t gates. And we want to argue that these are mutually orthogonal, that this is actually a basis. If the times are different, they're clearly orthogonal, because of the clock register. If the times are the same, I want to argue that two such states are orthogonal. Well, I know that if I have a different input state or a different witness state, they start out orthogonal, because I'm starting with a basis for the input register and a basis for the witness register. And if I apply t gates to each, they remain orthogonal, because unitaries preserve inner products. So this actually gives me a true basis. Apologies for the bad alignment. Okay, so the whole propagation term of my Hamiltonian is the sum of all of these H t's. And if I look, for one particular starting state, at the span of this set of states, I get this nice matrix. What this represents here is my Hamiltonian term H-prop restricted to the little Hilbert space H jk: the span of the states I get by starting with input a j and witness psi k and applying different numbers of gates.
So this state would correspond to the input in the register; this is after one step, after two steps, after three steps. So this represents the matrix I get if I happen to start with this particular choice of inputs. And h 1 takes me from time 0 to time 1, h 2 takes me from time 1 to time 2, on down. So if I look at my big propagation Hamiltonian in this larger Hilbert space, but in this special basis, I get a really nice structure: it's going to be block diagonal, with zeros here and zeros here. Each block corresponds to my propagation term restricted to the Hilbert space corresponding to input j and witness k. And if I take one of these blocks and blow it up, I get my nice familiar-looking propagation matrix. So just to review, each block corresponds to the space of computation states I get starting with a particular input and a particular witness. Can I say anything to help clarify? Yeah, so just to parrot what you said, just to make sure I'm understanding: each block is the time evolution of one j, k pair for all times, and then the next block is another j, k pair for all times. Yes. Each block corresponds to a particular choice of input and witness, and the block itself describes how that state evolves over time as I advance the clock. And that's why I want to express it in this basis: now I can make sense of the matrix and talk about analyzing its eigenvalues. So the null space of H-prop is spanned by the ground states of each of these individual blocks. I now have a much bigger ground space. We argued before that each of these little matrices has one ground state, and its next-smallest eigenvalue is about 1 over T squared. But now I have a lot of them, and I can take any superposition of those ground states and get a valid ground state for this big Hamiltonian.
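The zero-energy history states of H-prop can also be checked numerically once the unitaries are worked in. This is a toy sketch with my own choice of circuit (one qubit, gates Hadamard then X, so T = 2), not anything specific from the lecture.

```python
# H_prop for a tiny circuit, and a check that the history state has energy 0.
# H_t = 1/2 (|t><t| + |t-1><t-1|) (x) I
#     - 1/2 |t><t-1| (x) U_t  -  1/2 |t-1><t| (x) U_t^dagger
import numpy as np

Hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
gates = [Hadamard, X]
T = len(gates)
dim_clock = T + 1
I2 = np.eye(2)

def clock_op(a, b):
    """|a><b| on the clock register."""
    M = np.zeros((dim_clock, dim_clock))
    M[a, b] = 1
    return M

H_prop = np.zeros((dim_clock * 2, dim_clock * 2))
for t, U in enumerate(gates, start=1):
    H_prop += 0.5 * np.kron(clock_op(t, t) + clock_op(t - 1, t - 1), I2)
    H_prop -= 0.5 * np.kron(clock_op(t, t - 1), U)
    H_prop -= 0.5 * np.kron(clock_op(t - 1, t), U.conj().T)

# History state for starting state |0>: sum over t of |t> (x) U_t...U_1 |0>.
state, pieces = np.array([1.0, 0.0]), []
for t in range(T + 1):
    e_t = np.zeros(dim_clock)
    e_t[t] = 1
    pieces.append(np.kron(e_t, state))
    if t < T:
        state = gates[t] @ state
history = np.sum(pieces, axis=0) / np.sqrt(T + 1)

print(np.allclose(H_prop @ history, 0))   # True: the history state has energy 0
```

Starting from the other basis state |1> gives a second zero-energy history state, matching the claim that there is one ground state per block.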
What I've done here: this is the ground state for one of these little blocks, psi jk, the even superposition of all of the computation states for a single block. This state is the unique ground state for that little block. But I can actually take any superposition of these and get a valid eigenstate with eigenvalue 0, and there it is. And here I've just reordered the linear combination and brought the sum inward. So I can imagine any superposition of my input and witness registers, and then apply the computation to that. This is just a switching of the summations: here I'm taking a superposition over the j, k's, and now I'm bringing the j, k sum inside. So what this represents is a particular starting state, a superposition over my basis of inputs and my basis of witnesses, with the computation applied to it. That's a lot of notation; it's a lot to take in at 9 o'clock in the morning. Anyone want to ask me a question about what's up there? Question: are there restrictions on the circuit, does it need to be polynomial size or anything, or do we just have a circuit? Oh, the verifier, by the definition of QMA, is an efficient circuit. So by definition of QMA, my verifier circuit is a polynomial-size circuit; the verification has to be efficient, just as in NP. So T, the number of steps in my circuit, is some polynomial in the number of input bits. Everything is polynomial in everything else here. Okay, so now the null space is all valid computation states, and the second-smallest eigenvalue is about 1 over T squared. So now we have a huge ground space, and I want a single ground state. Right now the ground space includes all possible different inputs and whatnot. So we need two more terms. Just like in the classical reduction, we had terms in our Boolean formula that enforced that the assignment represented a sensible computation.
We had terms enforcing that the input was the correct input, and then a term enforcing that the output is 1. So we have to add those last two terms to our Hamiltonian. First, I want to enforce that the input is what I want it to be, some given n-bit string. So my H-init says: at time 0, and this only applies to time 0, it had better be the case that bit j of the input register is equal to x j. I'm going to introduce a penalty if the time is 0 and the bit is not x j. That's what this term says, all right? And remember, this is a penalty term: if my state has this component in it, that component has eigenvalue 1, which introduces an energy penalty. So this penalizes any computation state in which the clock says it's time 0 and the input is not correct, all right? Okay, so that enforces the input; the next term is going to enforce that the computation actually accepts. The accepting business happens after I've applied T gates, so I'm only interested in enforcing this if the clock is at time capital T, at the very end. And I'm going to apply a penalty if the output bit, and I'll just assume the first qubit is the one I'm measuring for the output of the circuit, is 0. So this gives me an energy penalty if at the end of the computation I haven't accepted, all right? All right, so then we just add them together, okay? Now we have a little work to do in making sure the ground state is what we want and this whole Hamiltonian is appropriately gapped, okay? I want the ground state to represent a verifying computation with the correct input, a correct computation, and an accepting output, and I want anything else to have an energy penalty, like a 1-over-poly energy, okay? So completeness says that if x is a yes instance, then the ground energy is small, okay?
I'm not going to get exactly 0, because the energy is going to be proportional to the probability of rejection, but I'll get pretty close; because it's a probabilistic circuit, I don't have perfect acceptance or perfect rejection. So: if x is a yes instance, then there exists some witness that causes the circuit to accept with high probability, exponentially close to 1. That's our definition of QMA, after amplification. And now we go to the big Hamiltonian that we've just built and make sure that there's a low-energy state in this case, okay? Well, you've guessed it: the low-energy state is going to be exactly the history state starting with the correct input x and the good witness psi, the computation state corresponding to that computation. So let's verify that this state has low energy. The H-prop term gives 0 because it's a valid computation state: I'm starting with a perfectly good input and applying the gates one by one. The H-init term gives 0 because at time 0, I have x sitting there in the input register. So all of those terms are 0. And what about H-out? Okay, so the H-out term says that at time capital T, there's going to be a penalty if the output bit of my circuit is 0. So I look at the portion of the history state corresponding to the last time step, capital T: starting with the correct start state, I've applied all T gates, and that piece appears with amplitude 1 over the square root of T plus 1. And if I take that state, starting with input x and witness psi and applying the T gates, I can separate it out according to what the first qubit of the computation register is: there's going to be some amplitude alpha 0 of having a 0 there, with psi 0 on the rest of the qubits, and some amplitude alpha 1 of having a 1 there, with psi 1 on the rest of the qubits.
And the probability of acceptance is this alpha 1 squared, by definition, because that's the output of my circuit: this is the probability that I measure 1, and this is the probability that I measure 0. So by assumption, the probability that I measure 1 is at least 1 minus 1 over 2 to the n, so alpha 1 squared is at least that value, and therefore alpha 0 squared is at most 1 over 2 to the n. The energy I pick up from H-out is that small alpha 0 squared times a 1 over T plus 1, because the time-T piece appears in the history state with amplitude 1 over the square root of T plus 1. So we'll have a moment of silence just to drink it all in. Anyone have a question they can ask? Question: we're starting with an assumption on the circuit, and we're showing a low eigenvalue for the Hamiltonian? Yes: the assumption on the circuit is that there's an accepting computation. That's what I'm starting with; this is my "if" part. If x is a yes instance, then there's a witness that causes the circuit to accept with high probability. That's my assumption, and I want to verify that in this case, there exists a low-energy state. What I'm verifying here is that in this accepting situation, in which the circuit does in fact recognize that it's a correct input, the energy is low. And by the definition of the circuit accepting with high probability, if I look at the state at the last time step and separate it out according to the value of the qubit I'm going to measure, I get two different amplitudes, and this one has to be small. All right, good. So this value of E, the energy of this state, is my low threshold: if x is a yes instance, then the lowest eigenvalue of my big Hamiltonian is at most E. So I've just picked my E value. Now we have to talk about what delta is going to be: soundness. So here's my propagation matrix.
There's one particular block of it; we know what its ground state is. And this is a little bit of a sketch of what we're going to do. If the input is wrong, we know we're going to be able to impose a penalty, because we have this H-init term. So if this little block corresponds to a wrong input, my H-init is going to impose a penalty, and the lowest eigenvalue of that block is going to be at least about 1 over T plus 1. If it's a rejecting computation, I know that my H-out is going to hit it with a penalty: at least 1 over T plus 1 times the probability that I reject. So we need to make this a little more formal and lower bound the smallest eigenvalue of this matrix, where we have a bunch of different terms being added together. So we need this little lemma, which I'll probably just walk through the proof of, that helps us lower bound the smallest eigenvalue of a sum of matrices. Here's what the lemma says. I have two Hermitian positive semi-definite matrices and their two null spaces. If each of their second-smallest eigenvalues is lower bounded by lambda, and the two null spaces are pretty far apart, where theta is telling me the angle between those null spaces, then the smallest eigenvalue of the sum is at least this quantity, 2 lambda sine squared of theta over 2. Let's understand what this is saying and why it would be true. I'm adding two Hermitian matrices, okay? Each may have a nice null space, but the two null spaces are far from each other, so I can't be in both at the same time. So no matter where I am, I'm going to get hit by the second eigenvalue of one term, or I'm going to get hit by the second eigenvalue of the other term. So let's make this a little more formal. I have some generic state.
The angle between these two null spaces is theta, so my state has to be at an angle of at least theta over 2 from one of the two null spaces; I'll arbitrarily say it's N1. What's the expectation of this sum of Hermitian operators on my state? It's the sum of the two expectations, everything's linear, and that's going to be at least the contribution from H1 alone, since H2 is positive semi-definite. And now I have this little picture here. If my angle from N1 is at least theta over 2, that must mean I'm projecting a pretty good piece onto the orthogonal complement of N1. So I'm far from the null space of H1, which is N1, and a good portion of my state is projecting onto the orthogonal complement. And we know that the second-smallest eigenvalue of H1 is at least lambda, so that portion of the state incurs a cost of lambda. So if I look at the expected value of H1 on this state, it's at least lambda times the magnitude squared of this projection. And since the angle is pretty big, the resulting projection is also pretty big, and that's what gives me the product. All right, questions about that? Okay, so we're going to apply this lemma, because we have exactly this situation: a sum of terms. We apply it with H1 the propagation term and H2 the sum of the two penalty terms. So the second-smallest eigenvalue of H-prop is this 1 over T squared. What's the second-smallest eigenvalue of H2? What does H2 look like? H2 just imposes penalties on certain states: it's actually diagonal in the standard basis, with zeros and ones along the diagonal; either I impose a penalty or I don't. So the smallest nonzero eigenvalue of H2 is 1, and I'm safe there. So I'm going to use lambda equal to this 1 over T squared.
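Before putting the pieces together, the lemma itself can be sanity-checked numerically. This is my own two-dimensional toy example, the smallest possible case, where the bound 2 lambda sin squared (theta over 2) actually turns out to be tight.

```python
# Tiny check of the geometric lemma: the null space of H1 is the x-axis,
# the null space of H2 is a line at angle theta to it, and both matrices
# have second-smallest eigenvalue 1.
import numpy as np

theta = 0.7  # arbitrary angle between the two null spaces

v = np.array([np.cos(theta), np.sin(theta)])
e = np.array([1.0, 0.0])
H1 = np.eye(2) - np.outer(e, e)   # null space span{(1,0)}, other eigenvalue 1
H2 = np.eye(2) - np.outer(v, v)   # null space span{v}, other eigenvalue 1

lam = 1.0
bound = 2 * lam * np.sin(theta / 2) ** 2

min_eig = np.linalg.eigvalsh(H1 + H2)[0]
print(min_eig >= bound - 1e-12)    # True: the lemma's bound holds
print(np.isclose(min_eig, bound))  # True: and in this 2D case it's tight
```

Any state close to one null space is forced to have a large projection onto the orthogonal complement of the other, which is exactly the picture in the proof sketch above.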
And now I have these two terms that say (this is the argument I just gave) that the sum of these two is diagonal in the standard basis with integer values 0 or 1. So now I have to argue a little bit about what the angle between the two null spaces is, OK? I'll try to make this as notation-light and friendly as possible, but ask a question if you have one. I'm going to take some generic ground state of the propagation Hamiltonian. So this is some superposition over different input and witness states, where I apply the circuit and take the superposition of the resulting states at the different points of time. So this is a generic state in the null space of h prop. And now let this pi be the projector onto the ground space of these two penalty terms, OK? So this pi is the projector onto n2, the null space of the other term. And cosine squared of the angle between them is going to be the maximum, over anything in this ground space, of the amplitude squared of its projection under pi. Correspondingly, and this is just saying the same thing in terms of the trigonometry, if I instead take the projector onto the non-null space, the non-zero space, then sine squared is going to be this value here, the minimum over the ground space. And I'm going to lower bound this. So what this operator is doing is projecting onto the space that's not the null space. And so if I can lower bound this value, then I will have lower bounded sine squared of theta over 2. So this is just a technical way of saying that I'm going to take a generic state in the ground space of h prop, project it onto the non-null space of this other matrix, and lower bound the magnitude of that projection. So if I look at this, this is kind of what we're doing here. We're taking this lambda.
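For reference, the generic null state of h prop described above is a history state; in symbols (my reconstruction of the board notation, writing T for the number of gates and U_1 through U_T for the gates):

```latex
% A generic null state of H_prop is a history state:
|\eta\rangle \;=\; \frac{1}{\sqrt{T+1}} \sum_{t=0}^{T} U_t \cdots U_1 \, |x, w\rangle \otimes |t\rangle
% With \Pi the projector onto the null space N_2 of the penalty terms,
% the angle between the null spaces satisfies
\cos^2\theta \;=\; \max_{|\eta\rangle \in \mathcal{N}(H_{\mathrm{prop}})} \big\| \Pi \, |\eta\rangle \big\|^2 ,
\qquad
\sin^2\theta \;=\; \min_{|\eta\rangle \in \mathcal{N}(H_{\mathrm{prop}})} \big\| (\mathbb{1} - \Pi) \, |\eta\rangle \big\|^2
```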
It's a generic state in the ground space of h prop, and I'm lower bounding the magnitude of this projection here. So what happens? This thing is introducing a penalty; this is essentially h-init plus h-out. So I'm going to introduce a penalty if t is 0 and the input isn't right, or if t is the last step in my computation and the output is 0; those are the relevant points of time. And the idea is that I can take my generic ground state of h prop and separate it into two terms: one in which the input is correct, and the other in which the input is not correct. And the basic idea then is that if the input is correct, this is where the rejecting part is important. If I have a correct input and a correct computation, by definition it has to be a rejecting computation, because we're assuming now that this is a no instance. So if I start with a no instance and execute the verifier correctly, I have to reject with high probability. And that's what this is saying. So this is a correct computation state, and the penalty for rejecting from my h-out is going to be pretty big. I'm going to pick up this 1 over t plus 1 because the final time step only appears with amplitude squared 1 over t plus 1, and then this is the probability of rejection. Similarly, if I start with the wrong input, then I get hit at time 0. So this is the penalty for the wrong input, which is going to be at least 1 over t plus 1. And this 1 over t plus 1 comes from the fact that the time-0 state is only represented with amplitude squared 1 over t plus 1. So this gives me that sine squared of theta over 2 is at least roughly 1 over t. I can put these together and apply my lemma: lambda is roughly 1 over t squared, sine squared of theta over 2 is roughly 1 over t. And this tells me that in a no instance, the smallest eigenvalue of my big Hamiltonian is going to be at least roughly 1 over t cubed. So this is the separation that I wanted.
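Putting the two estimates together (constants approximate, following the rough counting in the lecture):

```latex
% lambda from the propagation gap, sin^2(theta/2) from the penalty weights:
\lambda \;\approx\; \frac{1}{T^2},
\qquad
\sin^2\!\left(\tfrac{\theta}{2}\right) \;\gtrsim\; \frac{1}{T}
\quad\Longrightarrow\quad
\lambda_{\min}(H) \;\ge\; 2\lambda \sin^2\!\left(\tfrac{\theta}{2}\right)
\;=\; \Omega\!\left(\frac{1}{T^3}\right)
```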
Yes instances had really low energy; there was a 1 over exponential in there. No instances are going to have at least 1 over poly energy. So, to recap, that was a lot of technical stuff for very early in the morning. We have a k-local Hamiltonian where k is order log n. And answering the question, is the ground energy at most e, which is 1 over 2 to the n, or at least e plus delta, will tell me exactly whether x was a yes instance or a no instance. OK? Yes. It's purely from the clock. It's purely from advancing the clock. So remember, my clock term has to advance from t to t plus 1. And if my t is represented in binary, imagine going from 0 followed by all 1s to 1 followed by all 0s. So advancing the clock may have to change every bit of my clock register. And that's where the log comes from, because I'm representing my clock in binary: to advance a binary counter, I may potentially have to change log n bits in doing so. Which is a great lead-in to the next part. Yeah. Yes. Because it'll give me a basis of accepting witnesses; this goes back to when we talked about soundness. It'll give me a basis of different accepting computations. So if I just take that input x and look at different bases for the witness, that'll produce different orthogonal computation states. Yeah. Say that again? Oh, log of poly is a constant times log. Yeah, so we're good; we're still O of log n. All right, but the question about the clock... oh, yes, question. Yeah, but remember I'm only implementing gates, so any particular step in my computation is only going to touch two qubits in my computation register; so it's log n plus 2. Yeah. Good questions. OK, now let's walk through how we would improve this to 5-local. Instead of using a binary clock like this (this answers your question: a binary clock necessarily might have to change log n bits), what we're going to do instead is have a unary clock. So now the size of the clock has exploded from log n to t.
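The binary-counter point from the question above can be checked in a couple of lines (the helper name is mine, for illustration):

```python
def bits_changed(t: int, width: int) -> int:
    """Number of bit positions that differ between t and t+1
    in a binary clock register of the given width."""
    return bin((t ^ (t + 1)) & ((1 << width) - 1)).count("1")

# Going from 0111 to 1000 flips all four bits of the clock register...
assert bits_changed(0b0111, 4) == 4
# ...while going from 0110 to 0111 flips only one.
assert bits_changed(0b0110, 4) == 1
```

In the worst case an increment carries through the whole register, which is why a binary clock forces order log n locality.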
That's OK, it's still polynomial. But now I only need to touch a constant number of qubits in order to advance the clock. So advancing the clock touches these three qubits: to go from t to t plus 1, I only need to operate on qubits t minus 1, t, and t plus 1. And that's where I get the five-locality: this part is operating on two qubits, because a gate operates on two qubits, and this part is operating on three clock qubits. And now, to test whether I'm at time 0: time 0 just says that the first qubit of my clock is 0. Time t says the last qubit of my clock is 1, in this unary clock picture. So it's a much simpler operation to test. But I need one more set of terms. My Hilbert space has exploded now: I went from a log t register, which was order log n, to a t-qubit register. So I need to do something about all those other states that aren't valid clock states. In a unary clock, I only want to see these states in my clock register; having some jumble of 0s and 1s makes no sense in terms of the clock, so I need to penalize those. So I just add in a penalty term that says: if a 0 precedes a 1 in the clock, it's not a sensible clock state, and I want to outlaw it altogether. So I have these forbidden states, and this term basically says I'm going to slam it with an energy penalty if it's an invalid clock state. So if I now look at it, I can think of every standard basis state for the clock as a clock state. The valid clock states form this nice path: I have an edge between two of them if applying the propagation term advances the clock by 1. And then I have this mishmash of invalid clock states. So this is what my matrix looks like: it's block diagonal. The valid clock states are closed under the propagation term, and so are the invalid ones; the propagation term is invariant on this subspace and invariant on that subspace. And this is what the big matrix looks like. On the invalid block I have all ones on the diagonal, so there's no way that block is going to affect what the smallest eigenvalue of my matrix is.
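A tiny sketch of the unary-clock bookkeeping: valid clock states are exactly the strings 1...10...0, and the 2-local penalty term fires wherever a 0 precedes a 1 (helper names are mine):

```python
def clock_penalty(s: str) -> int:
    """Number of adjacent positions where a 0 precedes a 1,
    i.e. how many of the 2-local penalty terms fire on this clock string."""
    return sum(1 for a, b in zip(s, s[1:]) if a == "0" and b == "1")

T = 4
# Valid unary clock states for times t = 0 .. T: t ones followed by zeros.
valid = ["1" * t + "0" * (T - t) for t in range(T + 1)]

for s in valid:
    assert clock_penalty(s) == 0   # sensible clock states pay no penalty
assert clock_penalty("0101") >= 1  # a jumbled state gets slammed with a penalty
```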
And then the rest of this is our nice propagation term that we analyzed previously. Perfect timing. Why not a 2-qubit clock term? Because at this boundary I'm just going from 1 to 0, so I could potentially do that. Anyone see why that might be a problem? Yeah? Yes. So that term would say: if that qubit is 0, it goes to 1. So anyone see what's wrong with this? Yeah? Yes. I think you've got it right. Now I can apply this term, I have the forward and the reverse, and I could potentially apply the reverse somewhere in the middle, in which case I would get this state here, which is not a sensible clock state. So if I use 2-qubit terms, I don't have this nice containment, where all the valid clock states are closed under the propagation term and likewise the invalid ones. I could potentially apply the propagation term and end up outside the valid clock states. But the 3-local term, let's see if we can go back to that, the 3-local term makes sure that doesn't happen. It makes sure that it's applied only at the boundary: if I'm applying it in reverse, it can only be applied at the boundary between the 1s and the 0s. There we go. All right. Good. Thanks. Any other questions? Yes.
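To make the 2-local versus 3-local point concrete: a toy sketch (all names mine) in which the reverse of a 2-local advance, pattern 11 to 10, fires in the middle of the 1s and manufactures an invalid clock state, while the reverse of a 3-local advance, pattern 110 to 100, can only match at the 1/0 boundary:

```python
def is_valid_clock(s: str) -> bool:
    """Valid unary clock states look like 1...10...0 (no 01 substring)."""
    return "01" not in s

def apply_reverse_2local(s: str, i: int):
    """Reverse of a 2-local advance: pattern 11 at positions (i, i+1) -> 10."""
    return s[:i] + "10" + s[i + 2:] if s[i:i + 2] == "11" else None

def apply_reverse_3local(s: str, i: int):
    """Reverse of the 3-local advance: pattern 110 at (i, i+1, i+2) -> 100."""
    return s[:i] + "100" + s[i + 3:] if s[i:i + 3] == "110" else None

s = "111100"                      # valid clock state: time t = 4
bad = apply_reverse_2local(s, 1)  # fires in the middle of the 1s
assert bad == "110100" and not is_valid_clock(bad)

# The 3-local reverse only matches at the boundary, so validity is preserved.
for i in range(len(s) - 2):
    out = apply_reverse_3local(s, i)
    if out is not None:
        assert is_valid_clock(out)
```

So the extra conditioning qubit is exactly what keeps both the valid and the invalid clock subspaces closed under the propagation term.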