Okay, so Lecture 5, stochastic Hamiltonians. I'll define them, we'll motivate them, and we'll talk about some of the complexity. And like yesterday's lecture, this is a special case that has some classical features to it and maybe is a little bit easier than the full-blown quantum version of the local Hamiltonian problem. Okay, so let's start with a definition. We have a Hamiltonian and we're going to specify it in the standard basis. It doesn't really have to be the standard basis, any basis will do, but by default we typically use the standard basis. And the definition is that the Hamiltonian is stochastic if all the off-diagonal entries are less than or equal to zero. We're going to actually consider a stronger version of it in which each individual local Hamiltonian term has the stochastic property. And as I understand it, there are examples where the global Hamiltonian is stochastic, but the terms are not necessarily. Is that right, Barbara? Okay. But we will assume the stronger definition. And there was a very good question about why "stochastic." This is on a later slide, but I think it's an important thing to say right away, and we will prove a version of this: the ground states are non-negative, meaning that the amplitudes are all real and non-negative. So they look more like probability distributions, hence the stochastic nature, but they're quantum. So, stochastic.
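In matrix terms the definition is a one-liner; here is a minimal sketch (the helper name `is_stochastic` is mine, not standard terminology):

```python
import numpy as np

def is_stochastic(H, tol=1e-12):
    """Check the definition: every off-diagonal entry is <= 0."""
    off = H - np.diag(np.diag(H))
    return bool(np.all(off <= tol))

# Pauli X has off-diagonal entries +1, so +X is not stochastic, but -X is.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
assert not is_stochastic(X)
assert is_stochastic(-X)
```

Note that any diagonal (classical) Hamiltonian is trivially stochastic, which is one reason the class sits between classical and quantum.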
Just some motivating examples. We've already seen some stochastic Hamiltonians: the propagation term from the circuit-to-Hamiltonian construction is a stochastic Hamiltonian, and Laplacian matrices of general random walks are stochastic in nature. We've talked about adiabatic computation, and if I'm using adiabatic computation to solve a classical optimization problem, the final Hamiltonian is just diagonal. And if I start with a Hamiltonian whose ground state is the all-plus state and I evolve over time, then at all points of that adiabatic evolution the Hamiltonian is stochastic. So that's another important case, and we'll actually talk about adiabatic evolution in connection with stochastic Hamiltonians as we go. And then many Hamiltonians from physics have this property, sometimes after a small local change of basis, and one of the exercises you have is to actually do that local change of basis and take a Hamiltonian that appears not to be stochastic and represent it in a basis in which it is stochastic. So it is well motivated from the physics point of view. Okay, so the general setup here is we have general quantum Hamiltonians, we have the classical ones on the inside, and stochastic sits somewhere in between, much like the commuting case, which is also sort of in between classical and quantum. And we'll show a version of this: the ground states of stochastic Hamiltonians are what I will call non-negative states, states whose amplitudes are real and greater than or equal to zero. Now I'm just going to do a little bit of a survey, give you the lay of the land, and we'll take a deeper dive into a couple of these results as we go.
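The local change of basis mentioned in the exercise can be illustrated on a toy example of my own choosing: a transverse-field-type Hamiltonian with the "wrong" sign on the X terms, fixed by conjugating each qubit by Z (since Z X Z = -X while Z Z terms are untouched):

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I = np.eye(2)

# With g > 0 the off-diagonals of H are positive, so H is not stochastic as written.
J, g = 1.0, 0.5
H = J * np.kron(Z, Z) + g * (np.kron(X, I) + np.kron(I, X))

# Local change of basis: conjugate each qubit by Z.  This flips the sign of every X.
U = np.kron(Z, Z)
H2 = U @ H @ U.conj().T

off = lambda A: A - np.diag(np.diag(A))
assert np.any(off(H) > 0)           # not stochastic in the original basis
assert np.all(off(H2) <= 1e-12)     # stochastic after the local basis change
```

The two Hamiltonians are unitarily equivalent, so they have the same spectrum; only the basis changed.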
Just to do a side-by-side comparison of general quantum Hamiltonians and the stochastic version: if we have the local Hamiltonian problem, phrased as a decision problem to determine if the ground energy is at most A or at least B, and typically we want that gap to be one over a poly, as we've seen already in great detail, that problem for general Hamiltonians is QMA-complete, and for stochastic Hamiltonians it's complete for this funny class, stochastic MA, which I'll define on the next slide. And then the frustration-free case is particularly interesting here, because there seems to be sort of a difference. Frustration-free for the general Hamiltonian problem is what we call QMA-1 complete, and all the 1 means is that the verifier accepts yes instances with probability 1. So if you know the ground energy is supposed to be 0, and you're handed a ground state, you'll measure that the energy is indeed 0, and there's no probability of error there. And this is the result we'll talk about more in depth: on the stochastic side, if we're looking at frustration-free Hamiltonians, this is complete for MA, okay? And this was the first MA-complete problem, which is sort of significant: a natural complexity class whose first example of a complete problem comes from the quantum world. Okay, so we've seen Merlin-Arthur. Merlin-Arthur is our classical verifier class: if an instance is a yes instance, there's a string that causes a classical probabilistic verifier to accept with probability at least two-thirds, and if it's a no instance, for every witness the probability of acceptance is at most one-third. So MA contains NP because a deterministic verifier is a special case of a probabilistic one. And then stochastic MA is the same except that the verifier can have this sort of funny quantum-like format to it. So it's sort of a generalization of a probabilistic verifier.
So the verifier computation is a classical reversible circuit. I can have my classical input, my witness, and I can also have a bunch of plus states. Did I get this right? The witness has to be quantum. The witness has to be quantum, right, because it's going to be, in general, entangled. Then we execute a classical reversible circuit, and we can measure in the Hadamard basis. And the general stochastic local Hamiltonian problem is complete for this special class, stochastic MA. Now, I don't think it's all tied up. I think there's some question whether these two classes are perhaps equal. I don't think any big complexity assumptions would be violated if that were shown, so there's not some big complexity barrier to proving equality here. But this is generally open, okay? Still, we do have characterizations of each version and completeness for each. Okay, yes: how much does quantumness in the witness help us? I think there's also quantumness in this verifier; that's important, it's not just the fact that the input is quantum. The measurement is sort of the key part. Yeah, right, okay. So it's good we have the experts in the audience this morning. It is very funny, and it would be nice to understand it better and to have a more natural characterization for it. On the numerical side, physicists have been working with stochastic Hamiltonians for quite a while. For general quantum Hamiltonians, there are lots of tensor network techniques and so on, but in terms of provable algorithms, brute-force diagonalization can handle 20 to 30 qubits. On the other hand, there's a very nice quantum Monte Carlo algorithm that can be applied to stochastic Hamiltonians, and this has been used for a long time in numerical condensed matter physics. And the nice thing is, this is what they call sign-free: the ground state that they're looking for doesn't have signs in it.
So you can implement this with a random walk. I'm not going to cover this method, but just numerically, there's a lot more that you can do with the stochastic ones, and these methods have been very successful. There's not a lot in terms of proofs of convergence, but heuristically, much larger systems have been handled quite well. And then adiabatic: I will discuss this a little bit more. We talked about, I guess it was Tuesday or Wednesday, the fact that the adiabatic model for quantum computing is equivalent to the circuit model, okay? On the stochastic side, and we'll discuss this today, for the special case of frustration-free stochastic Hamiltonians, these can be simulated classically. So if there is an adiabatic path, you don't need a quantum computer; you can just use your classical computer to simulate that evolution, okay? We don't always understand when the adiabatic path exists. But if it does exist, then it can be simulated classically, yes? Question: if you're using a numerical method, say quantum Monte Carlo, how are you sure that you're actually in the ground state? Well, in the frustration-free case, I think you can just empirically measure. In general, I don't think you ever really know numerically whether you've hit the ground state; papers just come out saying, we got a lower energy on this Hamiltonian model. So I think there are no guarantees necessarily. I think there is sort of an interplay, though, between the numerics and the experimental physicists, so some of these things can be experimentally verified in the lab. Okay? So let's look at a simpler version of this claim. Suppose I have a stochastic H; we're just going to prove a simple version, and I'm going to quote the more general version. The claim is that there exists some state that's non-negative that is a ground state of my Hamiltonian, okay? So just stepping through the proof, we'll start with any ground state, expressed here in the standard basis.
And now I'm going to define this new state, which is the same state, but where I take the magnitudes of the complex amplitudes instead. And what we're going to show is that the energy of my new state is at most the energy of the state I started with, okay? So if you just multiply this out, you get the diagonal terms and the off-diagonal terms. And the key observation here is that this off-diagonal entry, x H y, is at most zero, okay? The product of the magnitudes is an upper bound on the real part of the product of the amplitudes: you pick up a cosine of the angle between the two complex numbers when you multiply it out. And since the multiplier, the off-diagonal entry of H, is non-positive, multiplying flips the inequality, so the off-diagonal contribution for the magnitude state is at most the contribution for the original state. Summing up, the energy of the new state is at most the energy of the original state, which is the ground energy. Any questions about this little proof? The key step is this substitution: you pick up a cosine term, which makes this value less than or equal to that value, and the non-positive multiplier flips the inequality in our favor. Questions? It's a little early in the morning to think that through. Sorry, the last day. Yes, and it's symmetric, so the imaginary part cancels out: I have a yx term as well as an xy term, so the imaginary parts cancel, and what's left is just the cosine of the angle between the two. All right. The Perron-Frobenius theorem gives a sharper picture of what's actually going on. Irreducible means the matrix is a single connected block; reducible means it is block diagonal with more than one block.
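The magnitude argument above is easy to sanity-check numerically; here is a sketch on a random stochastic matrix (the random construction is mine, chosen only so the off-diagonals come out non-positive):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# A random stochastic Hamiltonian: real symmetric with non-positive off-diagonals.
S = np.abs(rng.normal(size=(n, n)))
S = S + S.T
np.fill_diagonal(S, 0.0)
H = np.diag(rng.normal(size=n)) - S

def energy(H, v):
    return float(np.real(np.conj(v) @ H @ v))

# Take any complex unit vector and replace each amplitude by its magnitude.
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
phi = np.abs(psi)   # still a unit vector

# The magnitude state never has higher energy; applied to a ground state, this
# shows some ground state can be taken non-negative.
assert energy(H, phi) <= energy(H, psi) + 1e-9
```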
So irreducible means everything is connected to everything else, the associated walk is connected, a single block; reducible means block diagonal. It's early in the morning for me too. So if H is irreducible, then there's a unique ground state, and it's non-negative. If it's reducible, what I get is an orthonormal basis of non-negative ground states: each block corresponds to a different ground state, and each of these ground states is non-negative, okay? And we'll use this quite frequently throughout, okay? So in 2008, Sergey and Barbara came up with this very beautiful Markov chain that converges to the ground state, okay? So remember, our ground states really look like probability distributions, so we're going to look at Markov chains that eventually converge to this ground state distribution, okay? I'm going to start with my Hamiltonian, and I'm just going to adjust it a little bit so that it's more convenient to work with. We've talked about the fact that we can shift and scale the terms so that each has norm at most one and is positive semidefinite, and this gives me that the norm of my entire Hamiltonian is bounded by M, the number of terms. I'm also going to offset it so that the ground energy is equal to zero. Now, you may wonder, isn't the ground energy hard to find? This is going to be the least of our problems, right? As I will define it, the Markov chain is not easy to implement, but we'll show that in the special case of frustration-free it can be implemented, and in the frustration-free case the ground energy is zero anyway, okay? But for now, for convenience, I'm going to assume that I've shifted my Hamiltonian so that the ground energy is zero. So H sits in between zero and M, and the ground energy is zero. So here's the definition of the random walk.
I'm taking G to be I minus H over 2M, and this guarantees me that the eigenvalues of G sit between one-half and one, because H's eigenvalues are between zero and M and I'm subtracting H over 2M off from the identity. And so the random walk is defined as follows. I'm at some standard basis string x, and the probability of moving to y is the following expression, where psi is a ground state of the Hamiltonian, okay? It's the ratio of the amplitude of y to the amplitude of x, multiplied by the (y, x) entry of G. And this is guaranteed to be non-negative, okay? I don't think I put up here that I'm assuming that psi is a ground state. And again, I don't necessarily know how to implement this thing, because I don't have a ground state in my hand, but we're just abstractly defining a Markov chain at this point, okay? So we're going to step through the proof that this is indeed a well-defined Markov chain, and then we'll talk about why it's easier to implement in the frustration-free case. You have a question, I can tell from your face. Oh yeah, that's my handwriting, okay. So this is psi, a ground state, and this is psi too, but this is a string: this is the amplitude of y in psi, this is the amplitude of x in psi, and this is the (y, x) entry of G, okay? Thanks. Note to self, I have to make that clearer. Okay, so the probability of transitioning from x to y is defined to be this value, and this defines a Markov chain on the support of psi, okay? First, let's establish that the transition probabilities are non-negative. If y and x are both in the support of psi, then their amplitudes are greater than zero; we're using the fact here that psi is a non-negative state. And then there's the entry of G between y and x, and we'll cover both cases, where y is equal to x and where they're unequal.
So I have a delta(x, y) from the identity, and then I have the corresponding entry of H divided by 2M. If x is not equal to y, the off-diagonal entry of H is less than or equal to zero, so minus it is non-negative. If x is equal to y, the diagonal entry of H is at most M, so one minus it over 2M is non-negative. In either case, this is a non-negative number between 0 and 1, so it's a well-defined probability. Now let's talk about the fact that it's a well-defined Markov chain: given that I'm at x, the probabilities of the strings that I might move to have to sum to 1, for all x. So I'm summing up over the probabilities of the y that I would move to, and I'm just going to insert my definition of the transition probability. I've moved the summation around a little bit, pulled the sum over y to the inside, and that sum becomes the identity. So what I have in the numerator is just the inner product of psi with G applied to x. And now I use the fact that the ground energy of H is 0: if psi is a ground state, then psi is an eigenvector of G with eigenvalue 1. So I can replace this with the amplitude of x, and I just get 1 at the end, okay? No, I don't think so, because G applied to psi is just psi, yeah, here? So if I bring the summation to the inside, I just get the identity, okay? All right, so it turns out that the unique limiting distribution of this Markov chain is the amplitude squared, okay? And this just follows from the fact that we started by assuming H is irreducible, and it's not too hard to check that the chain satisfies detailed balance: the probability flow in one direction along an edge is the same as going backwards. And if H is not irreducible, then we can take our non-negative basis, and the walk is supported on each of these individual sets: if I have a non-negative orthonormal basis, the supports are disjoint.
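The two well-definedness checks just described can be sketched numerically (the random irreducible instance and the choice M = spectral spread are mine, chosen so that 0 <= H <= M holds):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
# Random irreducible stochastic Hamiltonian: symmetric, strictly negative off-diagonals.
S = np.abs(rng.normal(size=(n, n)))
S = S + S.T
np.fill_diagonal(S, 0.0)
H = np.diag(rng.normal(size=n)) - S

# Shift so the ground energy is 0, and pick M with 0 <= H <= M.
w, V = np.linalg.eigh(H)
H = H - w[0] * np.eye(n)
M = w[-1] - w[0]
G = np.eye(n) - H / (2 * M)

# By Perron-Frobenius the ground state can be taken entrywise positive here.
psi = np.abs(V[:, 0])
assert np.all(psi > 0)

# P(x -> y) = (psi_y / psi_x) * <y|G|x>
P = np.array([[psi[y] / psi[x] * G[y, x] for y in range(n)] for x in range(n)])
assert np.all(P >= -1e-12)              # every transition probability is non-negative
assert np.allclose(P.sum(axis=1), 1.0)  # rows sum to 1, using G psi = psi
```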
And this Markov chain is then defined on each separate connected component: each of these basis states defines a connected component, and it's a random walk on that, all right? So how fast does this Markov chain converge? We're just going to pull out some standard results from Markov chain mixing theory. If I start at string Z and I'm interested in walking until I'm within epsilon of the stationary distribution, the mixing time is the smallest T such that when I apply my random walk T times, I'm within epsilon of the true stationary distribution. And a standard mixing time bound is that this mixing time is bounded by the following expression, where pi of Z is the amplitude squared of my starting state, and lambda 2 is the second largest eigenvalue of the Markov chain. So what is lambda 2? Let's talk about that first. The claim is that 1 minus lambda 2 is just the gap of H over 2M. So if I have a 1 over poly gap in my Hamiltonian, I get a 1 over poly downstairs here, okay? And this just comes from plugging in the definition of the transition probabilities and observing that the transition matrix is G conjugated on either side by a diagonal matrix and its inverse, where the diagonal entries are the amplitudes of the ground state. Since G and P are similar, they have the same eigenvalues, and the eigenvalues of G are just 1 minus lambda over 2M for each eigenvalue lambda of H, okay? So the upshot is that the mixing time is on the order of 2M over the gap of H, times the log of this term. So what do I need for this random walk to converge in polynomial time? Well, I need a good starting place: a starting place whose overlap with the true ground state is at least 2 to the minus poly, so exponentially small is okay. And I also need the gap to be 1 over poly.
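The similarity claim and the stationary distribution can both be checked on the same kind of toy instance (random construction mine; the point is that P and G share a spectrum, pi = psi squared is stationary, and detailed balance holds):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
S = np.abs(rng.normal(size=(n, n)))
S = S + S.T
np.fill_diagonal(S, 0.0)
H = np.diag(rng.normal(size=n)) - S
w, V = np.linalg.eigh(H)
H = H - w[0] * np.eye(n)               # ground energy 0
M = w[-1] - w[0]
G = np.eye(n) - H / (2 * M)
psi = np.abs(V[:, 0])                  # positive ground state (Perron-Frobenius)

# P = D^-1 G D with D = diag(psi), so P(x -> y) = (psi_y / psi_x) G[x, y].
P = np.diag(1 / psi) @ G @ np.diag(psi)

pi = psi ** 2                          # claimed stationary distribution
assert np.allclose(pi @ P, pi)                             # stationarity
assert np.allclose(pi[:, None] * P, (pi[:, None] * P).T)   # detailed balance

# P and G are similar, so they share eigenvalues; the walk's spectral gap
# 1 - lambda_2 equals gap(H) / (2M), inherited from the Hamiltonian.
evP = np.sort(np.linalg.eigvals(P).real)
evG = np.sort(np.linalg.eigvalsh(G))
assert np.allclose(evP, evG)
assert np.isclose(1 - evG[-2], (w[1] - w[0]) / (2 * M))
```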
And in that case this Markov chain, which I can't quite implement anyway, at least converges in polynomial time, okay? Okay, caveat: this Markov chain obviously requires knowing the transition probabilities if you're going to implement it, and those involve amplitude ratios of a ground state we don't even know, okay? And the beautiful thing that Sergey and Barbara showed is that in the case of frustration-free stochastic Hamiltonians, this ratio can be efficiently computed, okay? And I'm going to walk you through why that's the case. So in general, we don't know how to implement this Markov chain; it's a well-defined Markov chain, but we can use the frustration-free property to implement it. Okay, so I'm going to start with some ground state of my Hamiltonian, and some x which has non-zero overlap with that ground state, okay? Let's say my random walk is sitting at string x, and I want to compute the transition probabilities moving out of x. How do I do that? The initial definition of the Markov chain requires knowing this amplitude ratio, which in general we might not know. But we're going to make a couple of observations. If the probability of transitioning to y is greater than zero, then this entry of G has to be greater than zero, because we have it as a multiplier in there. And just by the definition of G, that means the corresponding off-diagonal entry of H has to be strictly less than zero. So what that means is that these two strings are connected in the random walk, okay? And therefore, for at least one of the individual terms in my Hamiltonian, the off-diagonal entry has to be strictly less than zero, okay? And this is now starting to look like something that I can locally check, because it has to do with the individual terms and not the global Hamiltonian, okay?
So the claim is that if H is stochastic and frustration-free, and x is connected to y by some little local term, then this ratio that looked difficult to compute can actually be computed from that local term alone. I don't need to understand my global Hamiltonian in order to do this: I just need to identify a single term where the off-diagonal entry is strictly negative, and then for that particular term I can look at the relevant entries at y and x and compute this ratio. So this is now looking like something that is feasible to compute; I don't need to know the global properties of the ground state. And the nice thing is that for a given x there are at most polynomially many moves: I can look through all of the terms, and each term gives me a constant number of strings that I can walk to. So, modulo proving this claim, this is now a random walk that I could potentially implement. I need a term with a strictly negative off-diagonal: if I have a positive probability of moving to y, that means there has to be at least one term where the off-diagonal entry is negative. Yes. I'm not going to go through the proof in excruciating detail; I'm just going to wave my hands about it a little bit, and it's fairly intuitive to see. So here's the intuition. Pi is the projector onto the ground space of the big Hamiltonian, not a single term. And remember, I have some non-negative orthonormal basis of ground states, and I can express Pi as a sum of 1D projectors, one per basis state. Now let's look at what that matrix looks like. Since the basis states are non-negative and orthonormal, their supports are disjoint. So if I look at this projector in matrix form, it's block diagonal: I have a block for each one of these states in my orthonormal basis, and since the supports are disjoint, each block is a 1D projector.
And each block is defined on the support of the corresponding state. So far, so good. Now a ground state of my Hamiltonian can be any superposition of these psi_a's. But if I have two strings that sit inside the support of a single psi_a, the ratio of their amplitudes is fixed. I can certainly, within the ground space, vary the weight on this block versus that block versus that block, but for two given strings inside the same block, the ratio of their amplitudes is fixed once and for all by virtue of sitting inside that block, and it's dictated by psi_a. Okay? This is an important point. So say I look at just psi_a by itself: what's the ratio of the amplitude of x to the amplitude of y? It's determined by the corresponding entries of the projector. Now I can certainly have ground states that are different superpositions of these psi_a's, but the ratio between x and y doesn't change. So if I have some state sitting inside the ground space, and I know that the amplitude of x is non-zero in that state, and I know also that x and y sit inside the same block, which is what this says here, then the ratio of their amplitudes in this state is given by entries of the big projector: the (y, x) entry over the (x, x) entry. So this is fixed and independent of which ground state I happen to pick. Okay? Are we good? Does anyone have any questions about that? Okay, good. Then, since the Hamiltonian is frustration-free, I can make that exact same observation about every single individual term as well: psi has to sit inside the ground space of each individual term, so if I take the projector for an individual term and apply it to psi, I get the same state back. And this is the frustration-free property.
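This fixed-ratio claim can be checked on a toy frustration-free instance (the 3-qubit example and the vector v are my own construction): whenever a term connects x and y, the global amplitude ratio equals a purely local ratio of that term's projector entries.

```python
import numpy as np

# Each term is I - |vv><vv| on a neighboring pair, with v = (cos t, sin t).
t = 0.7
v = np.array([np.cos(t), np.sin(t)])
vv = np.kron(v, v)
h = np.eye(4) - np.outer(vv, vv)     # off-diagonals are -vv_i vv_j <= 0: stochastic
I2 = np.eye(2)
terms = [np.kron(h, I2), np.kron(I2, h)]
H = sum(terms)
psi = np.kron(vv, v)                 # product state with H psi = 0: frustration-free
assert np.allclose(H @ psi, 0.0)

# Ground-space projector of each individual term, acting on all 3 qubits.
projs = [np.kron(np.outer(vv, vv), I2), np.kron(I2, np.outer(vv, vv))]

# Whenever a term connects x and y (strictly negative off-diagonal), the global
# ratio psi_y / psi_x equals the local quantity Pi_k[y, x] / Pi_k[x, x].
for Tk, Pk in zip(terms, projs):
    for x in range(8):
        for y in range(8):
            if y != x and Tk[y, x] < -1e-12:
                assert np.isclose(psi[y] / psi[x], Pk[y, x] / Pk[x, x])
```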
And if in this individual term x and y are connected, meaning that the off-diagonal entry is strictly less than 0, then x and y lie in the same connected component for that individual term, and for that individual term the ratio of the amplitudes at x and y is fixed. Okay? So now I have all these terms dictating the ratio between x and y, and they have to be consistent with the global ratio. Okay? Yes. Okay, so maybe I misspoke: the ratio of x and y within the same block is fixed; across different blocks, yes, you can vary the weights depending on which superposition you happen to pick. But if they are inside the same block, then the ratio is fixed, and this condition says they're inside the same block. Okay? So this is just a somewhat hand-wavy argument. The beauty of the frustration-free case is that the individual terms fix the ratios of these amplitudes, so I only have to look at constant-size terms to determine the ratio. Okay? Questions? Yeah: actually, if you think about it, the blocks of such a projector are going to have no zeros inside them. Yeah, good point. You're awake? That's good to know. All right, great. So this is something that can be computed by a classical computer. We now have that this random walk, which converges to the stationary distribution corresponding to the ground state, can be implemented by a classical computer, and it converges if I have a good starting point and my Hamiltonian has a one over poly gap. That's where we are, okay? And they used this to show that if there's an adiabatic path, then it can be classically simulated. I'm not going to go through all the details; I'm just going to state the result. So it has to be easy to find the ground state of the initial Hamiltonian; that's always a condition of adiabatic computation.
And as I'm going from the initial Hamiltonian to the final Hamiltonian, the Hamiltonian has to remain stochastic and frustration-free the entire time, that's this condition here; the gap is always one over poly n; and the path has to be relatively smooth, which is also a usual condition in the adiabatic setting. Then a classical algorithm can approximately sample from the ground state distribution of the final Hamiltonian, as the path parameter goes from 0 to 1. And the idea is that you can discretize your path, and at each point you can use your current ground state as a warm start for the next one. So there's some work to do in showing how finely you have to discretize this path so that the ground state of your current Hamiltonian is a decent start for the next one. But just notice what this means: without the stochastic frustration-free condition, this was universal for quantum computing. If we could do this for general Hamiltonians, then we would have a classical simulation of a quantum computer. So this is a stark difference between the two settings. And an interesting question you might think about: adiabatic evolution with general local H is equivalent to the quantum circuit model; is there a natural circuit model corresponding to adiabatic evolution for stochastic, not necessarily frustration-free, Hamiltonians? Adiabatic evolution in general corresponds very cleanly to the quantum circuit model, and the frustration-free stochastic case is classically simulatable. But what about the in-between, where I have stochastic Hamiltonians that aren't necessarily frustration-free? Is there some kind of natural circuit model that's equivalent to that adiabatic model? Something to think about. OK, so, the complexity of the local Hamiltonian problem in the stochastic frustration-free case. Let me just define the problem: I'm given a stochastic local Hamiltonian.
And I want to determine: is the ground energy zero, or is it at least A, for some A that is at least one over poly? So this is the local Hamiltonian problem in the stochastic frustration-free case. It's going to be at least NP-hard, because Boolean satisfiability is a special case, so we're not going to escape that. But a prover could send you a nice starting position; that's something a prover could do, and you could potentially simulate this random walk, which we now know how to simulate. And the question is, how would I verify in that case? So let's imagine an MA protocol in which the prover sends you a good place to start and you execute this random walk. One idea that doesn't work: start with an x sent by the prover, implement the random walk long enough until you converge, since it's gapped we're going to assume we can do that, measure the energy, and then repeat for accuracy. You could do this potentially to verify yes instances, although it occurs to me now that the promise gap is one over poly, but I don't necessarily know that the spectral gap is one over poly. Yeah, it doesn't necessarily follow. Okay, so that was one issue, and there's also an issue with verifying no instances, because we're using these local terms to define the random walk: if the Hamiltonian is frustration-free, that corresponds to the sensible random walk that we know converges, but if it's not frustration-free, it's not clear what the walk does. So we need a better verification procedure. Here's the idea: we can define the set of good strings as those that have, for each individual term, a positive overlap with that term's ground-space projector. This is certainly a polynomial-time checkable condition: I can check it term by term. And any string in the support of the ground state is certainly going to satisfy this property, because the ground state is in the ground space of every term.
And we define the bad strings as the complement of the good strings. So this is our picture: here are all the strings, the bad strings are out here, we have a subset of strings that are good, and inside that is the actual support of the ground space. And in one of the exercises, I've given you a particular Hamiltonian and you'll work out the good strings and the support, where the ground strings are the strings in the support of the true ground state, and the good set is a superset given by a locally checkable condition, okay? One observation is that this random walk is closed on the ground strings, and we sort of saw that before. So what the prover is going to do is send you a string. In the yes instances, the prover sends you something in the ground support, you walk for a while, the walk is closed, you never hit a bad string, and you accept, okay? In the no instances, there's actually nothing in here: the ground set is empty, since the ground energy is nonzero. You're sitting in the good set, and we want to show that if you execute that random walk for a polynomial number of steps, with high probability you end up in the bad set, which you can then check, okay? Again, this is going to be proved by hand-waving in the end. So here's the verification procedure. The prover sends a starting string x to the verifier, supposedly in the ground support. The verifier runs the random walk for T steps, and in each step we check: did we hit a bad string or not? If I hit a bad string, I reject outright; if after T steps I haven't hit a bad string, I accept. There's a little technical caveat, which I'll touch on in the last slide: there's an additional check that we need, and the good thing is that it's always satisfied for yes instances. So I'll hint at what that is.
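The verifier loop just described can be sketched on a yes instance (the 3-qubit toy Hamiltonian, the helper names, and the thresholds are all mine; in this particular instance every string happens to be good, so the walk trivially accepts, but the loop structure is the point):

```python
import numpy as np

rng = np.random.default_rng(2)
t = 0.7
v = np.array([np.cos(t), np.sin(t)])
vv = np.kron(v, v)
h = np.eye(4) - np.outer(vv, vv)
I2 = np.eye(2)
terms = [np.kron(h, I2), np.kron(I2, h)]           # frustration-free, stochastic
projs = [np.kron(np.outer(vv, vv), I2), np.kron(I2, np.outer(vv, vv))]
H = sum(terms)
M = len(terms)
G = np.eye(8) - H / (2 * M)

def is_good(x):
    # Locally checkable condition: positive overlap with every term's projector.
    return all(Pk[x, x] > 1e-12 for Pk in projs)

def step(x):
    # One step of the walk, using only local ratios Pi_k[y, x] / Pi_k[x, x].
    probs = np.zeros(8)
    for y in range(8):
        if y == x:
            probs[y] = G[x, x]
        elif G[y, x] > 1e-12:
            for Tk, Pk in zip(terms, projs):
                if Tk[y, x] < -1e-12:              # a term connecting x and y
                    probs[y] = (Pk[y, x] / Pk[x, x]) * G[y, x]
                    break
    return int(rng.choice(8, p=probs / probs.sum()))

x, accepted = 0, True
for _ in range(100):
    if not is_good(x):
        accepted = False                            # would reject outright
        break
    x = step(x)
assert accepted    # yes instance: the walk never leaves the good set
```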
It's not the crux of the matter here. Related to that technical point, the prover doesn't send just any x; it sends the x that maximizes the overlap with the ground projector, which supposedly the prover can do. So how big does this T have to be? Remember, I think I used a for this before, but now we're using epsilon as our promise gap. In a no instance, the ground energy of my Hamiltonian is at least epsilon. What does that translate to in terms of G? Here's G, it's G = I - H/(2M), so the maximum eigenvalue of G is at most 1 - epsilon/(2M). The upshot is that this essentially means the Markov chain is rapidly mixing: I'm going to hit all of the states with reasonably high probability within a polynomial number of steps, and the argument uses this upper bound on the maximum eigenvalue. So it's essentially an expander graph: I walk for polynomially many steps and I hit everything, including a bad string if I'm connected to one. That's the short version of the argument. Now, if I start with something in the good set, what's the likelihood that I stay within that set the entire time? I consider all possible sequences of good strings and sum up the probability that I follow each sequence. If we unfold this probability, I get this ratio, which is exactly how we've defined the random walk. The glitch that requires the technicality is that I'm going to drop this ratio term in the next step, so let me justify a little why I can do that. Notice I'm taking a product: if it's a yes instance, the ratio terms telescope, and at the end I get the overlap of the final string divided by the overlap of the initial string. If the prover sent me the string that maximizes this overlap, that telescoping product is at most one. And that's a checkable condition.
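As a numeric sanity check of the spectral relation, here is a small Python example. The particular 4 x 4 matrix is a made-up toy, not one from the lecture: it just has nonpositive off-diagonal entries, so G = I - H/(2M) is entrywise nonnegative, lambda_max(G) = 1 - lambda_min(H)/(2M), and the Perron-Frobenius theorem gives a nonnegative top eigenvector, which is the nonnegative ground state.

```python
import numpy as np

# Toy 4-dimensional stochastic Hamiltonian (made up for illustration):
# off-diagonal entries are <= 0, so G = I - H/(2M) is entrywise nonnegative.
M = 2                                   # stand-in for the number of local terms
H = np.array([[ 1.0, -0.5,  0.0,  0.0],
              [-0.5,  1.0, -0.5,  0.0],
              [ 0.0, -0.5,  1.0, -0.5],
              [ 0.0,  0.0, -0.5,  1.0]])
G = np.eye(4) - H / (2 * M)

eps = np.linalg.eigvalsh(H).min()       # ground energy of H (here > 0)
lam_max = np.linalg.eigvalsh(G).max()

# Spectral relation used in the argument: lambda_max(G) = 1 - lambda_min(H)/(2M),
# so ground energy >= eps forces lambda_max(G) <= 1 - eps/(2M).
assert np.isclose(lam_max, 1 - eps / (2 * M))

# Perron-Frobenius: the top eigenvector of the nonnegative matrix G (i.e. the
# ground state of H) can be chosen with all-nonnegative amplitudes.
v = np.linalg.eigh(G)[1][:, -1]
assert np.all(v >= -1e-9) or np.all(v <= 1e-9)
```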
So we always check that the product of these ratios is at most one, which then allows me to drop those terms altogether. I get this big product of terms, and at the end I drop all of the ratio terms, but I verify that their product is at most one. That's a checkable condition: it's calculable, because I know the transition probabilities, so I can compute each ratio. It's a technical point, but modulo that check, the probability of executing a given sequence of good strings is upper bounded by this product of inner products. Now I just use the fact that the maximum eigenvalue of G is at most 1 - epsilon/(2M). I'm summing over all sequences of strings, but that's okay: as long as T is polynomial and the gap is one over poly, there's some polynomial T for which I can drive this bound down exponentially small. So that basically says that in the no instances, with high probability, I'm going to hit a bad string. Yes: if epsilon is one over poly, this will be exponentially small for T polynomial in the size of the input. Let me just mention some follow-on work, a very nice paper by Dorit and Alex. What we've shown here is containment; I didn't show the hardness. As an aside, the hardness is actually achieved in situations where the ground state is a uniform superposition over a set of strings. And Dorit and Alex showed that they can de-randomize this process when the Hamiltonian has a constant-size gap, in the case where the ground state is a uniform superposition. So for this special case of Hamiltonians where the ground state is a uniform superposition, if I have a big gap, then I can completely de-randomize the verification.
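To see the arithmetic of "polynomial T drives the bound exponentially small", here is a quick check; the particular numbers eps = 0.01 and M = 10 are arbitrary placeholders. Since 1 - x <= exp(-x), taking T = (2M/eps) * n steps, which is still polynomial in n whenever eps is one over a polynomial, makes (1 - eps/(2M))^T at most e^(-n).

```python
import math

def failure_bound(eps, M, T):
    """In a no instance, the probability of never leaving the good set in
    T steps is at most (1 - eps/(2M))**T, up to the dropped ratio terms."""
    return (1 - eps / (2 * M)) ** T

# Placeholder numbers: promise gap eps = 1/poly(n), M local terms, n qubits.
n, eps, M = 50, 0.01, 10
T = math.ceil((2 * M / eps) * n)        # polynomial in n when eps = 1/poly(n)

# Since 1 - x <= exp(-x), these T steps push the bound below e^{-n}.
assert failure_bound(eps, M, T) <= math.exp(-n)
```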
And the upshot is that the path from the good strings to the bad strings is not that long, so you can exhaustively search the entire neighborhood classically instead of following a random walk. It has a nice complexity implication: if you have gap amplification for this special class of uniform, frustration-free, stochastic local Hamiltonians (that's a mouthful), then MA collapses into NP, because the verification can be de-randomized. So if I can take a generic uniform frustration-free stochastic local Hamiltonian and boost the gap, then all of a sudden I have a classical verification procedure, which puts MA into NP. All right, that's it. Any other questions? Feel free to chime in if you've got interesting open problems related to the stochastic world. Yes, questions?

The promise gap or the spectral gap? I think the promise gap, sorry, yes, the promise gap. So it's not a gapped Hamiltonian; it's if you have an instance which is hard and gapped, where the promise gap is constant, say. And I think you don't have to, because we know that the special case of uniform ground states is still hard for MA. So the two are not necessarily equivalent, but in terms of complexity that special case is still hard, yes.

Question? Whether it can be highly entangled? The question is whether these ground states of stochastic Hamiltonians can be highly entangled or not. Do you know? Yeah, the toric code ground state; you get global entanglement there. All right, you had a question? About the area-law results, or using a fancier ground state projector, would you get there faster? Observe that the G operator looks like an approximate ground state projector, essentially.
And the question is, if you used a more sophisticated ground state projector, would it speed up the convergence of the random walk in some way? It's not going to help you compute the Markov chain in the frustrated case, you still have this ratio term that's problematic, but maybe different terms there could speed up the convergence, because the convergence was related to that term. So yeah, you might be able to get a better eigenvalue there and improve the polynomial in some way.

Question? About preparing Gibbs states in general? Which is related to preparing Gibbs states, in AM, yeah. For stochastic? And for general stochastic? Okay. I didn't actually include that in my slides, but stoquastic MA is contained in AM, which is contained in QMA, yeah. Right, that was actually the goal. Yeah, and that was the D-Wave model, right? Initially, they were finding ground states of stochastic Hamiltonians. Yeah, but not frustration free, right, the frustrated stochastic case. Any other questions? All right, great, thank you.