So, one correction about a comment I made last hour. I said the reverse measurement was only a tool to understand the unitary transformation that would disentangle some complicated underlying code states. Actually, it's not just a conceptual tool. It is used in lattice surgery techniques: whenever you want to expand the code state to a larger patch to do some other logical operation, these reversible measurements are used. Although the reversal measurement itself was, I think, only introduced in a recent paper of mine, the technique had existed for longer. Right, so this hour we're going to talk about the circuit complexity of particular states. What do we want to say eventually? We want to say that some particular state is hard to generate. That's probably what a large fraction of complexity theory is about. Well, not exactly, because in the usual sense you want to solve a class of problems. Here I'm asking a more specific question: there is a target state, there is a specific starting point, and you want to count the number of gates that leads you to that target state. By a counting argument, this is usually hard. If you have to implement a generic Boolean function, then the number of gates needed to express that function must be very large. And similarly, if you consider generic random states, almost all of them must have very large circuit complexity, just by counting, because a small number of gates can only express so many states. Here we want to answer a more difficult question. The target is fixed. You are not allowed to consider any ensemble, and I have to show that it is still hard. And for some scenarios it is possible to show that such a transition is difficult.
To that end, I'm going to talk about the basic light cone and the corresponding Lieb-Robinson bounds, and then I will apply those to correlation functions and the code distance. In the last 20 minutes or so, I want to introduce a new technique for a case where the previous arguments just fail but this one succeeds in showing hardness. Okay, let's start with the light cone. I'm pretty sure you already know this, but for the sake of completeness of these lectures, let me just mention it. Imagine a one-dimensional line. As before, everything I'm talking about is in the geometrically local setting. So you have some qubits, and my quantum circuit has a diagram that looks like this: a depth-two quantum circuit consisting of two-qubit gates. And I think of the evolution of an observable, say supported here, in the Heisenberg picture: you don't evolve the state, you evolve the operator by conjugation. So I consider this as my overall circuit U, and then my U dagger is drawn like this in the second layer. Since they are all unitary, this unitary gate and that unitary gate are precisely inverses of each other, so I can erase them. This one I cannot, because there is the observable I want to probe, and I want to see its evolution. Those ones, they are gone. And once I erase them, I can erase further: this one I cannot erase, because something is going on due to this gate, but those over there are all gone. What you're left with is the light cone for an observable emanating from its spatial location, whose final width is proportional to the depth of the quantum circuit. This part is called the light cone, and any argument based on this kind of picture will be called a light cone argument. It appears here and there. Why is it physically meaningful? The world doesn't seem to be implementing the discrete unitary gates that computer scientists think of.
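As a toy illustration of the picture above, here is a small bookkeeping sketch of mine (the brickwork gate layout is an assumption for concreteness; it tracks supports only and simulates no actual gates): an operator starting on one site can, after a depth-d circuit of nearest-neighbor two-qubit gates, spread to a window whose width grows in proportion to d.

```python
# A toy bookkeeping sketch (assumed brickwork layout): track the set of sites
# an operator starting at `start` can spread to under a depth-`depth` circuit
# of nearest-neighbor two-qubit gates. No gates are simulated, only supports.
def light_cone(start, depth, n):
    support = {start}
    for layer in range(depth):
        offset = layer % 2          # layers alternate: (0,1)(2,3)... then (1,2)(3,4)...
        grown = set()
        for q in support:
            left = q - ((q - offset) % 2)   # left qubit of the gate containing q
            grown.update({left, left + 1})
        support |= grown
        support = {q for q in support if 0 <= q < n}
    return sorted(support)

print(light_cone(start=10, depth=4, n=21))  # → [7, 8, 9, 10, 11, 12, 13, 14]
```

The width of the returned window is twice the depth, which is the linear light cone the argument relies on.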
Unitary evolution comes from the solution of a Schrödinger equation, whose solution looks like e^{-iHt}: there's the unitary i, the time t, and the Hamiltonian H in the exponent. Here the Hamiltonian is nothing but a sum of small Hermitian operators. I say small, but I really mean that each term is supported on a small number of qubits: the diameter of the support of each term is some small number, O(1), independent of the system size. You take the sum of those and call it the Hamiltonian. In the Heisenberg picture this is not applied as the time evolution of the state; the proper thing is to think of X conjugated by these operators, e^{iHt} X e^{-iHt}. And at the problem session you will show that this has an expansion in t, via what I call the nested commutator, sometimes called the iterated commutator. The definition: you take one more commutator with H as you increase the subscript k, and if k equals zero it's nothing but X itself. So let's look at this expression a little more carefully. Here I give you a lemma; time permitting, I'm going to explain some elements of the proof. Oh, let me explain the assumptions first. I always consider a Euclidean lattice, where the metric is what you would expect on a Euclidean lattice. And I assume that the support size of each term is not too big. In addition, I also assume that the norm, the maximum eigenvalue in magnitude of each Hermitian local term, is bounded by one. Physically, that number is the energy scale of the problem, and I just normalize it. Then the lemma says: there exist constants c and zeta, positive real numbers, such that as long as the Hamiltonian satisfies this norm condition and the support condition, the nested commutator has norm bounded by an exponential function in k. There are some extra factors that might look interesting.
There's a factor of the support size of X, which counts the number of sites in the support of X. And of course the bound is linear in X, so there must be the norm of X too. So this is the lemma. But before we talk about any proof, let's make a connection: the goal of the lemma is to connect this discrete unitary circuit with Heisenberg Hamiltonian evolution. [To a question:] No, zeta is just a positive constant; it may even be greater than one, it doesn't matter — as long as it's a constant, it's fine for our purposes. So let's look at this expansion. The lemma says this factor is bounded by an exponential function in k, everything else being fixed. The t^k over k! factor is also at most exponential in k in magnitude. So everything is exponential; it's a geometric series, and the convergence is guaranteed at least if t is smaller than, say, the zeta there in magnitude. But it's an interesting convergence, because formally the norm of H is enormous. It's unbounded: it grows with the system size — proportional to the system size, actually, for any meaningful Hamiltonian. Yet this series converges for sufficiently small t; the region of convergence is nonzero. But what does that mean? If it is converging, let's spell it out term by term. The first term, k equals zero: the ad is absent, the commutator at the zeroth level is just X. So at zeroth order in t, X comes out. The next term is it times the commutator of H with X. But what is it? Suppose I have some two-dimensional grid as my model of d-dimensional Euclidean space, and I have my X here. Hamiltonian terms are distributed around, here and there, right? And I take a commutator. The commutator between X and a far-distant Hamiltonian term is zero, so I don't have to worry about those. All that matters is supported around X. How large is that region? Right, this number — let me call it r. So this distance is r.
The next term will be (it) squared over two — I don't really care about the constant — times the second commutator. The second commutator means I take the first commutator and then take commutators with all the Hamiltonian terms again, so the second term will be supported here. Well, now you see the pattern: as you take the next and next term, 3r and so on. But what does the lemma say? Well, because of the constant c we can't really be sure about the norm for small k, but at some finite k the terms must start decreasing exponentially. So in the picture it's like this: you have a very weak term out here, a slightly stronger term here, an even stronger term here, and the strongest term at the center. So you have expressed the time-evolved operator as a sum of strictly local, finitely supported operators of increasing diameter, and as the diameter increases, the strength is weakening. So if you were to probe that time-evolved operator with some other observable — formally, let Y be the probe of how this operator is behaving — the most straightforward thing is to take a commutator and look at the magnitude of that commutator. Well, from this picture, if Y is supported far away, then the only things that can affect it are not the first few terms but the later terms. And what's their strength? It's going to be some constant to the power k, where k is the coefficient appearing here — in other words, the distance between the supports of X and Y divided by the range of interaction. And this form is one of the Lieb-Robinson bounds for local Hamiltonians. Any questions? [What is the diameter of a set?] Well, it's a metric space, so the diameter is an optimization problem: you give me two points in the subset, I measure their distance, you make a list of those values and take the largest — the distance between the furthest-apart pair of points.
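Collecting the pieces in symbols (my own reconstruction of the board; the exact placement of the constants c and ζ is one consistent convention, with R the interaction range):

```latex
X(t) = e^{iHt} X e^{-iHt}
     = \sum_{k \ge 0} \frac{(it)^k}{k!}\,\mathrm{ad}_H^k(X),
\qquad
\mathrm{ad}_H^k(X) = \bigl[H,\ \mathrm{ad}_H^{k-1}(X)\bigr],
\quad \mathrm{ad}_H^0(X) = X .

\textbf{Lemma:}\qquad
\bigl\|\mathrm{ad}_H^k(X)\bigr\|
  \;\le\; c\,\bigl|\operatorname{supp} X\bigr|\,\|X\|\;k!\;\zeta^{-k},

% so the series converges absolutely for |t| < \zeta.  Since \mathrm{ad}_H^k(X)
% is supported within distance kR of supp X, a probe Y at distance
% d = \mathrm{dist}(\operatorname{supp} X, \operatorname{supp} Y)
% only sees the terms with k \ge d/R:

\bigl\|\,[\,X(t),\,Y\,]\,\bigr\|
  \;\le\; 2\,c\,\bigl|\operatorname{supp} X\bigr|\,\|X\|\,\|Y\|
      \sum_{k \ge d/R} \left(\frac{|t|}{\zeta}\right)^{k}
  \;=\; O\!\left(\bigl(|t|/\zeta\bigr)^{d/R}\right),
```

which is exponentially small in the distance d for fixed |t| < ζ — one form of the Lieb-Robinson bound.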
Let's sketch the proof of this lemma — and once you understand this, you understand the proof of the Lieb-Robinson bound. Take, say, the third commutator; maybe the second is enough, but let me do the third. It's a sum over dummy variables A, B, C of the nested commutator of H_A, H_B, H_C with X, by linearity of the commutator — obvious. And when is a summand non-vanishing? By the argument we've done before, C must be very close to X; then B must be close to the support of the inner commutator; then A must be close to the support of the next one, and so on. So this motivates us to think of a graph based on the intersection of supports. Let's define the intersection graph: one node corresponds to one term in the Hamiltonian — not a lattice site, one term. And I declare that there's an edge if the supports intersect. It's not about commutation, just support intersection. Okay, so if I have, say, a one-dimensional Ising model with terms Z_i Z_{i+1}, then each term corresponds to one node; adjacent terms overlap, so I have an edge, and non-adjacent terms don't overlap, so no edge, and so on. The intersection graph for this Ising Hamiltonian is just a one-dimensional path, and it's up to you to figure out exactly what the graph would be in general. But the important point is that the intersection graph on a Euclidean lattice has bounded degree, simply because there is not much room for too many terms to intersect. Now, we want the commutator not to vanish. For the sake of completeness, let me introduce an auxiliary vertex in this intersection graph corresponding to the operator X. The rule is the same: if the support of X intersects the support of some Hamiltonian term, then there's an edge; otherwise there's none. So H_C should be connected to the X node, and H_B must be connected to this subgraph right here.
And H_A must be connected to this subgraph, and so on. So if this commutator is non-vanishing, then these terms — these four points in my intersection graph, counting X — must define a connected subgraph. That's one direction of the mapping; I'm doing combinatorics. So that's a map from a tuple of three Hamiltonian terms to a subgraph of the intersection graph. What about the converse? If you give me a connected subgraph of this intersection graph, including the base node X, then I should be able to write down some commutator that is potentially non-vanishing. But the mapping from tuples to subgraphs is not one-to-one, simply because the subgraph completely forgets about the ordering. Right? So this map is at most k-factorial-to-one. That's not too many. If there are collisions — I mean, if two terms in the tuple are the same — then the number of nodes is reduced, and you have to weight by some integer. So in general, you are really defining a connected multiset in this intersection graph, and the multiplicity is absorbed by this k! factor, so I don't have to worry too much. All I have to do is count the number of connected multisets, including X, in this bounded-degree graph. And that is a purely combinatorial problem. The solution is known: the number of such things is exponential in the number of nodes — or more precisely, in the sum of multiplicities. And the base of that exponential is precisely this zeta, okay? Now you know the Lieb-Robinson bound on a Euclidean lattice. You can generalize a little bit: all we needed was that this intersection graph has bounded degree. So I didn't really need Euclidean space — as long as your Hamiltonian has a bounded-degree intersection graph, these arguments go through. And the intersection graph is defined by the Hamiltonian itself. Yeah, so, okay, let me just repeat: I want to count the number of summands that are potentially non-zero.
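The counting can be made concrete in a brute-force sketch (my own illustration, not from the lecture): for the one-dimensional Ising-type Hamiltonian with terms on {i, i+1}, enumerate the sequences of terms that can appear in a potentially non-vanishing summand of the k-th nested commutator — each term, from the innermost commutator outward, must intersect the support accumulated so far.

```python
# Brute-force sketch: count the summands of the k-th nested commutator that
# are potentially nonzero, i.e. ordered sequences of Hamiltonian terms in
# which each term intersects the support accumulated so far (starting from
# supp X and growing by unions).
def count_nonzero_sequences(k, base, terms):
    def rec(depth, support):
        if depth == 0:
            return 1
        total = 0
        for t in terms:
            if t & support:                 # must intersect to survive
                total += rec(depth - 1, support | t)
        return total
    return rec(k, base)

terms = [frozenset({i, i + 1}) for i in range(8)]  # 1D chain of 9 sites
base = frozenset({4})                              # supp(X) = middle site

for k in range(1, 4):
    print(k, count_nonzero_sequences(k, base, terms))
# prints: 1 2 / 2 6 / 3 22
```

The sequence count already includes the orderings; the number of underlying connected multisets is smaller by up to k!, and it is that multiset count which is exponential with the base ζ from the lemma.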
One summand has norm bounded in terms of the norm of X, and each time you take a commutator there's a factor of two, so there's a 2^k. Yeah, so the norm of each summand is at most 2^k times the norm of X, which is exponential, so we're good. Our goal is very generous: we only have to bound things by an exponential function in k, so one term being exponentially large is fine. But the number of terms worries me, so I want to count that. And I don't have to count the zero terms, because they're zero. So for every potentially non-zero term, I associate a subgraph of the intersection graph; I look at the correspondence and account for the potential non-injectivity; and the final step is to count the multisets over this intersection graph. Okay, let's now apply this to complexity. The first thing we are looking at is correlation. You have seen this before: the correlation between two observables, between two regions. They don't have to be the same type of operator; they can be arbitrary. The correlation is defined to be the expectation of O_A O_B in a given underlying state rho, minus the expectation of O_A times the expectation of O_B. That's my correlation function. Let's calculate this for a product state. My A is somewhere here, my B is somewhere there, and the underlying state is a product state, completely disentangled. The second piece is just the expectation of O_A times that of O_B. And for the first piece — well, it's a tensor product state, so the trace factorizes into the same product, and the correlation function vanishes for any product state and separated regions. Now, let's evolve the state under a unitary. We had our quantum circuit, and I'm going to evolve my rho under this quantum circuit. Well, instead of evolving rho, let me just use the Heisenberg picture again. Then the evolution takes my correlation function into this.
But if A and B are far separated — here's my A and here's my B — then their light cones cannot intersect. So this is still a product of two factors: the underlying state is still a product state, my O_A conjugated by U and my O_B conjugated by U are still on separated regions, and by the calculation before, the correlation function still vanishes. So if your state has non-trivial correlation between two distant observables, then immediately, by this argument, we know that reaching such a state from a product state — which had no correlation — takes a deep circuit. How deep? If you found two observables that are a distance, say, L apart, and your gates are geometrically local two-qubit gates, then for the light cones to overlap, the depth must be proportional to this L, up to a constant factor. So knowing that a correlation is non-zero immediately tells you something about complexity, and the distance between the two observables gives you a lower bound on the depth. It does not, however, give you a lower bound on the number of gates; it only bounds the depth. It does give you some bound on the number of gates, of course, but I'm just saying that the gates don't have to fill the circuit everywhere — the argument only tells you that the depth must be somewhat large. And as an exercise, you are asked to find two observables with a non-trivial correlation for the cat state. Now let's apply this to code states, because the theme is local codes. This morning, Sandy was saying that the toric code state is very highly entangled, and it is highly entangled in the precise sense of this circuit complexity. Let's calculate the correlation function for the toric code. Remember, the toric code is defined via the ground states of a Hamiltonian whose terms are local and commuting. I actually only need the condition that this parent Hamiltonian is commuting — nothing else.
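The product-state calculation above is easy to check numerically. Here is a small sketch of mine (4 qubits, with single-site Z observables at the two ends; the regions and observables are arbitrary choices for illustration):

```python
import numpy as np

# Toy check: for a product state, <O_A O_B> - <O_A><O_B> vanishes whenever
# A and B are disjoint regions. Here: 4 random single-qubit states tensored
# together, with A = qubit 0 and B = qubit 3.
rng = np.random.default_rng(0)

def random_qubit_state():
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

psi = random_qubit_state()
for _ in range(3):
    psi = np.kron(psi, random_qubit_state())   # product state on 4 qubits

Z, I2 = np.diag([1.0, -1.0]), np.eye(2)
OA = np.kron(Z, np.kron(I2, np.kron(I2, I2)))  # Z on qubit 0 (region A)
OB = np.kron(I2, np.kron(I2, np.kron(I2, Z)))  # Z on qubit 3 (region B)

corr = (psi.conj() @ OA @ OB @ psi
        - (psi.conj() @ OA @ psi) * (psi.conj() @ OB @ psi))
print(abs(corr))  # zero up to floating-point error
```

Conjugating OA and OB by a circuit whose light cones do not reach each other leaves them on disjoint regions, so the same factorization applies to any low-depth evolution of a product state.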
Oh, and I'm going to use one other thing that we have checked: a disk-like region supports no logical operator. So this is my region A, and suppose I have another region B; the lattice is everywhere, I just didn't draw it. [To a question:] Capital R was the diameter of a single Hamiltonian term, and it's uniformly bounded across all terms. So, our interest is in this expression, with the condition that my underlying state absorbs the code space projector. That's what it means for rho to be in the ground space. Since my Hamiltonian is frustration-free and commuting, the code space projector is nothing but the big Pi, the product of the Hamiltonian terms expressed as projectors: specifically, each factor is identity plus the star term over two, or identity plus the plaquette term over two. Interpreted differently, rho is invariant under multiplying by any of these projectors on the left or on the right. So I can insert whatever local projector I want here without changing anything. If the projector for h_j is supported near here, far from the B side, then it commutes past O_B, so I can move it over here next to O_A. If it was supported near B, then I can send it the other way and absorb it into rho. So whatever term you give me, I can always deal with it, and the full projector can be brought next to O_A. So this piece becomes the trace of Pi O_A Pi times O_B rho, where Pi is the code space projector, right? But we have seen this before: A is a small disk-like region, so Pi O_A Pi is proportional to Pi, with some number as the constant of proportionality, depending on the observable itself and Pi. And Pi, by cyclicity of the trace, will be absorbed into rho, so this is that number times the trace of O_B rho. But what is that number? I can calculate it: take the trace of the defining relation on both sides. By linearity, the rho is gone, the Pi is normalized away, and the number is nothing but the expectation value of O_A on the ground space.
So this number is precisely the second term from when I was computing the correlation function. So if A is a small region separated from B, in such a way that the distance between A and B is so large that no single Hamiltonian term can overlap both, then the correlation function is exactly zero. Okay — and I just told you that to show complexity, it suffices to find a non-trivial correlation function. [Sorry, where did the separation matter?] At the step where I was sending every local projector to A. Okay, so let's bring a term h here. Its projector commutes with rho by this property, so it can start on either side. If A and B are too close, then h could be sitting in between, overlapping both, and I'm not allowed to move it out of the way. But if they're far apart, then either h avoids B, so it trivially commutes with O_B and can be brought to A, or it avoids A, in which case it can be sent over and absorbed on the other side. Okay, so the correlation function in this case doesn't help. We heard that the toric code is non-trivial, but this correlation is exactly zero. Why is it non-trivial, then? Well, the correlation function is a sufficient condition — it suffices to find some correlation function — and the proper correlation functions to look at are those of logical operators. On the toric code there are encoded qubits, so I can measure the expectation of a logical operator on this strip: concretely, it is the product of Pauli Z's passing along this particular line, wrapping around the torus once. And O_B is another such operator, but passing along some distant line, like here, okay? But take this string operator and multiply in all the plaquettes in between: the product is left with tensor factors living only on those two lines, and I can push the line as far as I like. So the product of O_A and O_B, we know, has expectation value plus one on the ground state — that's the definition of the ground state. But individually, each factor may not.
You could have chosen an underlying ground state in which the logical operator along this line has expectation value zero. In that case, this product is one and both individual expectations are zero, so the correlation function is non-zero. And how far apart can these two lines be? Of order the linear system size. So creating that particular ground state, the one whose expectation value of that string operator is zero, takes depth linear in the system size. But that's not so satisfactory. I mean, I want to show the complexity of any ground state, not just this one. Can the complexity go down depending on which ground state you choose? Well, the answer is no. One reasoning: you could have considered, at the same time, the correlation of the conjugate pair of logical operators, and after some manipulation you can show that at least one of the two correlations must be large. But there's a better argument. Let psi be an arbitrary ground state. Then I know there is some other state orthogonal to it which is also in the code space. And by the definition of code distance, in order to distinguish the two — put differently, if I wanted to transform this state into that state — I have to enact a logical operator whose support covers at least the code distance many qubits. Now, if psi were generated by some small-depth quantum circuit from a product state, consider rolling both states back through that circuit. The rolled-back psi is a product state — say every qubit is zero — and any state orthogonal to it must have a non-zero component somewhere. It may be in superposition, but somewhere. So on the product-state side, there is an observable supported on a single site that distinguishes any orthogonal state from the product state. Put differently: hypothetically, if I roll back these two states to the product-state side using my hypothetical generating circuit, then the two states are distinguished by a strictly local operator.
By the same light cone argument, if I evolve that operator forward, the observable that distinguished the two states grows, but only in proportion to the depth. So to reach a support of order the linear system size, the depth must be at least linear, too. So I didn't have to specify a particular correlation function. It's essentially the same idea, but slightly more streamlined. Now, let me give you a puzzle. Oh, I only have 15 minutes. In the first hour, we showed that on the sphere you cannot have any logical qubit encoded. The toric code is definable on the sphere, and there the first homology vanishes, so there is a unique ground state. All the arguments I used here require some encoded logical qubit — they required an orthogonal state that is not distinguishable unless the observable is large — so these arguments do not work. The correlation function, for arbitrary regions, is exactly zero, as we have calculated. So what do we do? Would you conclude that the toric code state on the sphere is easy to create? Any guess? Yeah, it's hard. But why is it hard? How do we show that? That's a different matter. For that, let me introduce another technique, starting with linear algebra. There will be a bipartite system — I drew two lines, but think of it as a bipartite system, say A and B — and I bring in two operators, just some generic ones. Since A and B have a tensor product structure, I can decompose P as a sum over i of P_i^A tensor P_i^B. I know you can do a Schmidt decomposition to reduce the sum and so on, but I don't need any of that — just a very generic expression. The same thing can be done for Q, with index j. And now I define — this symbol looks like an infinity sign, but I read it as the twist product: P twist Q is the sum over i and j of P_i^A Q_j^A tensor Q_j^B P_i^B. Let me compare this with ordinary multiplication; below is the ordinary multiplication that you would do in the usual sense.
In the ordinary product, the P factor appears before the Q factor on both sides, because P appeared before Q. The twist product follows the same rule in the first component but reverses the multiplication order in the second. It's left to you as an exercise that it is well defined: there is an apparent ambiguity, because the decomposition of an operator into a sum of tensor products is not unique. Nonetheless, the result is well defined, and the logic is not that different from ordinary multiplication — if you believe that one makes sense, then this one makes sense too. Okay, what can I do with this? I'm going to build a correlation-function-looking formula out of it. But before doing that, let me remark on another linear algebra fact. Let P act on a tripartite system A, M, B, and Q as well — but suppose Q does not act on M; it acts on M by identity. Then I can form the twist product of P and Q according to two different bipartitions: I could place the cut as A versus MB, or as AM versus B. And the claim is that whichever you choose, the result of the twist product is the same. If you draw the tensor product description, it's very easy to see: this is my operator P, and — let me use a different color — Q is sitting here, trivial on M. Ordinary multiplication is this diagram; the twist product is that diagram; and the two choices of cut amount to the same diagram. [To a question:] Yes, the expansion is highly non-unique, but that's the essence of the tensor product. So let's now apply these linear algebra constructions to the geometrically local setting, as we always do. I'll consider the toric code specifically, but you can apply this elsewhere. I consider two operators that lie here — maybe I can use different colors — two string operators, for specificity. Let the white operator be the product of sigma-z's along a loop in the real lattice.
And let the red one be a loop along the dual lattice, consisting of a tensor product of X's. So they anticommute at each crossing point, but since there are two crossings, they commute overall. Each of them is a stabilizer — no problem. Now I want to consider their twist product. If you think of the time direction — the multiplication order of the operators — as coming out of the plane, then P times Q looks like this: the white line stays on top of Q everywhere. The twist product reverses that. Now, I said the twist product reverses it, but which decomposition of the system into two parties have I used? I could cut like this, but I could also cut like that. Are the two the same — do they lead to the same twist product? That can be answered by iterating the lemma. How? Let this region be M. Here, P acts on M by identity while Q is potentially non-trivial there, so interchange the roles of P and Q in the lemma. That means I can reroute my partition like that. At the next step, you apply the lemma here, where now P is non-trivial on the middle region M but Q acts by identity. So you can go around like this, and inductively you can push the division line downward. In fact, as long as it avoids the crossing points, it doesn't matter where the cut is placed. Any questions at this point? Okay. Now I want to exploit this fact — that I can choose the division line anywhere — to show the following. Let U be a small-depth quantum circuit. I claim that the twist product conjugated by U is obtained by taking the twist product of the conjugated operators. Okay, the nice feature of a quantum circuit is that you can work inductively, layer by layer, or even gate by gate. So let's show this proposition for a depth-one circuit U — even simpler, let's prove it for U consisting of just a single local gate. That will do the job, right?
Since there are only finitely many gates that can be relevant to this diagram: if U is over here, nothing happens. If U is here — okay, fine, I can reroute my division line like that, and then the conjugation by U acts on only one party. So I've taken care of that one gate. If it is anywhere else, you can push the boundary here and there to ensure that the gate acts on a single party. So this equality is true — as long as the crossing points do not interfere with my division line. Okay, so I'm going to calculate this quantity. It looks like a correlation function, but where the usual definition has ordinary multiplication, now it's the twist product, okay? I want to evolve the state, or equivalently conjugate the observables in the Heisenberg picture. Because of this identity, the conjugation just passes through to the individual factors. And there's another point: it's not difficult to show that for product states, whatever you choose for P and Q, and whichever twist product you choose, this correlation must vanish — the product structure is rigid enough that you can actually show this. So it remains only to find some P and Q whose twist-product correlation is non-zero. And for the toric code on the sphere, you can find such operators. Well, I have drawn them already: just embed everything in a two-dimensional sphere and let these two loops reside in some portion of that sphere. It does not require anything topological about the manifold; there are simply two stabilizers anticommuting at two points, and the points are separated. So they commute overall, but under the twist product it picks up a minus sign. That gives you a non-zero twist correlation, which is different from any product state. Therefore, the toric code state on the sphere takes a large depth — proportional to the linear system size — to generate.
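Here is a small numerical sketch of mine of the twist-product mechanics (the partial-transpose implementation and the two-qubit caricature are my own choices, not from the lecture): the twist product can be realized by partially transposing the second factor, multiplying, and transposing back; the cut-moving lemma is checked on a three-qubit A-M-B example; and the sign is exhibited on P = Z⊗Z, Q = X⊗X, which anticommute at one point on each side of the cut and both stabilize the Bell state — a minimal caricature of the two crossing loops.

```python
import numpy as np

def pt2(M, d1, d2):
    # partial transpose on the second tensor factor of a (d1*d2)-dim system
    return M.reshape(d1, d2, d1, d2).transpose(0, 3, 2, 1).reshape(d1 * d2, d1 * d2)

def twist(P, Q, d1, d2):
    # twist product w.r.t. the cut d1 | d2: ordinary order on factor 1,
    # reversed multiplication order on factor 2
    return pt2(pt2(P, d1, d2) @ pt2(Q, d1, d2), d1, d2)

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Cut-moving lemma: Q acts on the middle region M by identity, so the cuts
# A|MB and AM|B give the same twist product.
P3 = np.kron(np.kron(Z, Z), Z)       # acts on A, M, B
Q3 = np.kron(np.kron(X, I2), X)      # identity on M
print(np.allclose(twist(P3, Q3, 2, 4), twist(P3, Q3, 4, 2)))  # True

# The sign: P and Q anticommute at one point on each side of the cut, so
# they commute overall, but the twist product undoes one of the two
# anticommutations and picks up a minus sign: P twist Q = -PQ.
P = np.kron(Z, Z)
Q = np.kron(X, X)
print(np.allclose(twist(P, Q, 2, 2), -(P @ Q)))  # True

# Both stabilize the Bell state (|00> + |11>)/sqrt(2), so the twist
# correlation is <P twist Q> - <P><Q> = -1 - 1 = -2, nonzero.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
corr = bell @ twist(P, Q, 2, 2) @ bell - (bell @ P @ bell) * (bell @ Q @ bell)
print(corr)  # ≈ -2.0
```

The two-qubit example only illustrates the sign mechanism; in the actual argument the two crossings are macroscopically separated on the sphere, which, combined with the conjugation identity and the product-state claim from the lecture, is what yields the linear depth lower bound.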
Yeah, so just to summarize: we wanted to show the complexity of states by looking at various correlations, basically. Oh yeah, there's a question. Can you repeat the question? You want to know the reason why — oh, locally invisible. So yes, right. He's asking about the locally invisible operators, which appear in the paper where the twist product was introduced. Just to give you the definition: locally invisible means that if you look at any local reduced density matrix before and after the operator has acted, you cannot tell the difference. One example is this string operator. Even if you truncate it, there will be some violation of the stabilizers at the ends of the string, but you cannot tell whether the operator has acted by looking only at this middle region. So this is locally invisible. The intuition is that locally invisible operators are teleporting some topologically charged particles, and the twist product is supposed to capture their mutual braiding statistics. But the point of this construction is that you don't have to talk about anyons or anything else — it's just linear algebra. Yeah, so in that sense, this correlation function is in the nature of a witness: it doesn't tell you how to find it, but once you find it, you know the state is non-trivial. So the overall flow toward showing the complexity of certain states boils down to finding certain correlations. The same applies up through, well, the recent NLTS result — they found a particular correlation hidden in the code states, and that is one more instance. We have discussed three tools: the usual correlation function, the code distance argument, and finally the twist product. I think we're about at time. Thanks.