Okay, let's continue. I hope you enjoyed the Dirac lectures yesterday; today we have a busy session. In the afternoon there will be a discussion session, so if you have questions about any of the talks from yesterday or today, there will be an opportunity to discuss them. We are very happy to have Tom Hartman now giving his second talk. I've written up on the board the references I mentioned yesterday but didn't give numbers for, and also the main result we got yesterday. To remind you: we looked at the OPE of two scalar operators in a double limit, where we first take them to be null separated and then take them far apart. We argued that in the null limit the operators of minimal twist are the most important ones, and those are precisely T_uu and its u-derivatives. That was then repackaged into the null energy operator. Let me emphasize that this is an operator statement. That's the great thing about the OPE: it's an operator statement that you can then use inside correlation functions. To derive it, the only inputs we needed were the psi-psi-T three-point function and the TT two-point function. But now that we have it, we can use this equation inside any other correlation function. That's the purpose of thinking about it in terms of the OPE. So now we're going to change gears for a bit: I want to spend a while discussing what causality means in quantum field theory. Then we'll make our way back to the ANEC, but first I want to talk about some fundamental things about correlation functions. So what does causality mean? In my diagrams, time t runs upward. Usually, if you think about what causality means in field theory, you think about commutators.
If you look in a textbook on quantum field theory, causality is the statement that operators commute at space-like separation. The reason is that a commutator is what allows us to send a signal. An ordinary correlation function is just the statement that two things are correlated at space-like separation; that doesn't mean you can send a signal. Whereas if we actually act with an operator here, in the sense of adding that operator to the interaction Hamiltonian with some source at this point, then evolve the quantum state under ordinary time evolution and measure an operator over there, the outcome of that measurement involves a commutator. That's why the commutator has to vanish. And when I write that the commutator vanishes, this is an operator equation, which means it's true inside correlation functions: we can insert any number of other operators anywhere we please and it will still be zero. So this definition is correct, but we need something a bit more intricate and complete, and that's the first thing I want to describe. I'm going to take the point of view that a quantum field theory is defined in Euclidean signature. You can think of it either way: quantum field theories can be defined in Lorentzian signature, say by the Wightman axioms, and then continued to Euclidean, or they can be defined in Euclidean and continued to Lorentzian. The reason I want to start in Euclidean is that these kinds of things, the OPE pictures and the radial quantization that goes into the OPE, started out as Euclidean constructions. We did it in Lorentzian yesterday, but it's more natural to think about it in Euclidean and then continue to Lorentzian.
So I want to think of a quantum field theory as starting out in Euclidean, and I want to understand what this statement about commutators means in terms of Euclidean correlation functions. You can imagine this is going to be somewhat complicated, because it's clearly a Lorentzian statement: if your field theory is defined in Euclidean, you have to figure out how this is encoded. In Euclidean there's no such thing as an ordering of local operators; there can be subtleties at coincident points, but at separated points all commutators just vanish. If you calculate a correlation function like ⟨O(x1) O(x2) ...⟩, it is a single-valued, permutation-invariant function of n points on the Euclidean plane R^d. The point I want to get at is that in Euclidean signature there is only one correlator: I don't even have to tell you how the operators are ordered. You just do the path integral with the points inserted on the plane; that defines some function, and it's unambiguous. The fact that it's unambiguous is the statement that it's single-valued. There are no branch cuts, in other words: if we take one operator and move it around another operator, the function just comes back to itself. I'll give an example in a minute that will make this clearer, but let me say in words that "no branch cuts" is the Euclidean counterpart of vanishing commutators. In Euclidean, when we talk about the correlator, we just mean the correlator; there's only one, with no orderings. When we go to Lorentzian, that's going to change: we take the Euclidean time, call this direction tau, and set it equal to an imaginary number.
Doing that involves analytic continuation: the function was defined for real tau, and we want to analytically continue in tau off the real axis. When you do that, you find branch cuts, and those branch cuts mean there's an ambiguity in the analytic continuation. It's like the log function: it's defined for positive numbers, and if you decide you'd like to define it for negative numbers, you have an ambiguity and must decide how to go past the cut. That ambiguity is exactly the reason, from this Euclidean point of view, that operators have to be ordered when you go to the Lorentzian theory. Let me give an example now to make these words concrete. The useful example is the two-point function of a CFT. I'll work in two dimensions, although it's basically the same in higher dimensions. The Euclidean two-point function is ⟨O(tau, x) O(0)⟩ = (tau^2 + x^2)^(-Delta), where Delta is the dimension of O, tau is Euclidean time, and x is space. This is just the usual scale-invariant two-point function, in Euclidean. You might worry about branch cuts in this expression, because there's a non-integer power, but in Euclidean signature the quantity being raised to the power is always positive, so there are no branch cuts. That's the point; that's what makes this a single-valued function in Euclidean signature. If we take one of the points and send it around the other, we don't pick up any phases, we just get back the same function, because we never hit the branch cut. But now, to go to Lorentzian, we should set tau = it, and the Lorentzian correlator is ⟨O(t, x) O(0)⟩ = (-t^2 + x^2)^(-Delta). Now you see that we're going to have to worry about branch cuts when we define this function.
Let's draw what this looks like in the complex tau plane: the real tau axis is the Euclidean answer, and the imaginary tau axis is the Lorentzian answer. Imagine we fix x to some number, say 3, and analytically continue in tau. According to this formula, we hit branch points at tau = ix and tau = -ix. We can draw the cuts wherever we want, but it's conventional to put them along the imaginary axis. So that's what this function looks like as a function of complex tau. If I hand you this function and ask for the Lorentzian answer, you should analytically continue from the real axis to the imaginary axis. But once you're past the branch point, there are clearly two choices: you can go up around to the right, or up around to the left, and those are exactly the two operator orderings of the Lorentzian theory. Going up around to the right gives what we call ⟨O(t,x) O(0)⟩; going up around to the left gives ⟨O(0) O(t,x)⟩. Since in this simple case we know the actual function, we can just write them down by extracting the phase. At time-like separation, the first ordering is (t^2 - x^2)^(-Delta) times (-1)^(-Delta) with the appropriate choice of path, which gives a phase e^(-i pi Delta); the other ordering is (t^2 - x^2)^(-Delta) with the phase e^(+i pi Delta). It's also clear from this example that the commutator turns on exactly where you'd expect from causality. The singularity at coincident points in Euclidean signature becomes a singularity on the light cone in Lorentzian signature, and the commutator is given by the discontinuity across the cut.
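Since this two-point function is explicit, the continuation can be checked numerically. Here is a small sketch (the value of Delta and the sample point are illustrative choices, not from the lecture) that carries g_E(tau, x) = (tau^2 + x^2)^(-Delta) to timelike Lorentzian separation along the two paths around the cut and reads off the two phases:

```python
import numpy as np

# Sketch: continue the Euclidean two-point function g_E = (tau^2 + x^2)^(-Delta)
# to Lorentzian time via tau -> i(t -+ i*eps). The sign of eps selects which
# side of the branch cut the path passes, i.e. the operator ordering.
# Delta, t, x are illustrative values (timelike separation: t > |x|).
Delta = 0.3
t, x, eps = 2.0, 1.0, 1e-8

def g_euclid(tau, x):
    # single-valued for real tau; for complex tau, numpy's principal branch
    return (tau**2 + x**2 + 0j)**(-Delta)

g_right = g_euclid(1j * (t - 1j * eps), x)  # <O(t,x) O(0)>: pass cut on the right
g_left  = g_euclid(1j * (t + 1j * eps), x)  # <O(0) O(t,x)>: pass cut on the left

print(np.angle(g_right) / (np.pi * Delta))  # ~ -1: phase e^(-i pi Delta)
print(np.angle(g_left) / (np.pi * Delta))   # ~ +1: phase e^(+i pi Delta)
```

Both orderings have the same magnitude (t^2 - x^2)^(-Delta); only the phase differs, which is exactly the ordering ambiguity of the cut.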
And clearly the commutator turns on exactly when you hit the light-cone singularity. If you imagine we're in Lorentzian signature and you move one of these points upward, then when it hits the light cone of the other point, that's where the branch cut is and you have to pick. I like to picture it in Lorentzian signature like this: when you hit one of these light cones, you have to decide which way to go around it. Do you come out of the board a little and go around, or go into the board and around the back? Those define two different functions, which are the two different orderings. Now, people get tired of drawing these pictures with contours all over the place, which get confusing, so instead of drawing a picture every time you want to specify an analytic continuation, all of this is captured by an i-epsilon prescription. Let me write that down. The prescription says ⟨O(t,x) O(0)⟩ = g_E(tau -> i(t - i epsilon), x) and ⟨O(0) O(t,x)⟩ = g_E(tau -> i(t + i epsilon), x), where g_E is the Euclidean correlator. As I said, the commutator is the discontinuity of g_E across the cut. Why is this the same as the contour picture? We have two choices of how to say it. We could draw the cut on the axis and always tell you which way we went around it. Or, and this is what the i-epsilon prescription does, we could say: I'm always going to approach straight up the imaginary axis, and instead I'll move the cut a little to the left or a little to the right. One i-epsilon pushes the cut one way, the other pushes it the other way. So it's just a way to capture the choice of path. The two-point function is somewhat trivial, though.
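The statement that the commutator is the discontinuity across the cut can also be checked concretely in this example. Below is a numerical sketch, with the same illustrative Delta as before: the difference of the two i-epsilon orderings vanishes at spacelike separation and has magnitude 2 sin(pi Delta) (t^2 - x^2)^(-Delta) at timelike separation.

```python
import numpy as np

# Sketch: the commutator as the discontinuity of the Euclidean correlator
# g_E = (tau^2 + x^2)^(-Delta) across its cut. Delta is an illustrative value.
Delta = 0.3
eps = 1e-8

def wightman(t, x, sign):
    # sign = +1: <O(t,x) O(0)> via tau -> i(t - i*eps); sign = -1: other ordering
    tau = 1j * (t - sign * 1j * eps)
    return (tau**2 + x**2 + 0j)**(-Delta)

def commutator(t, x):
    return wightman(t, x, +1) - wightman(t, x, -1)

print(abs(commutator(0.5, 2.0)))  # spacelike: ~ 0, no cut is crossed
print(abs(commutator(2.0, 0.5)))  # timelike: 2 sin(pi Delta) (t^2-x^2)^(-Delta)
```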
In the two-point function it was guaranteed that you'd find the branch cut exactly at the light cone. That is no longer true for higher-point functions. So let me look at an example with four points. Here we won't be able to calculate in detail, but I can draw some pictures and explain why it's different. Say you have point 1 here, point 2 here, point 3 here, and point 4 over here somewhere; I'll draw some light cones. Point 4 won't really matter, so just leave it there. Imagine we fix everything except point 2, and think of the correlator as an analytically continued function of the spacetime point x2, just like we did before. But now we have all these other insertions, so as a complex function of the coordinates of point 2, this is going to have some very intricate set of singularities and cuts. If you start moving point 2 up this way, it looks pretty similar to what we did with the two-point function: when you get to this light cone, you encounter a branch point, and you have to choose whether to go above or below it. Those give the two ways of ordering these two operators in the Lorentzian correlator. That part is guaranteed to work; it's just symmetries. If the theory is Lorentz invariant, you will always find this branch point at exactly the place you'd expect from the picture I drew. But when you do it again, that's when things get interesting. Now take this point and head up toward the next light cone. When you do, if I don't give you any other input, you have no idea where you're going to find the cut. If you go directly, you know where you'll find it: at just the right spot. But if you first go around one cut and then approach another branch point, you don't know where you're going to find it.
And that's where the non-trivial statement of causality comes in. It's the statement that when you start doing these loopy analytic continuations of the correlators, going around some of the cuts and then approaching other singularities, you'd better not find any singularities in the wrong place. If you head up this way and find a singularity along this dotted line, a singularity too soon, then causality is violated. A little more precisely, the statement in this picture is that as we approach the other light cone, we need the correlator to be analytic as v2 approaches v3 from below, where v2 - v3 is this null separation. Sometimes people refer to this as the second sheet. The reason for that lingo is that in the four-point function you first go around one cut, so people say you're now on the second sheet, and then you approach another singularity. When people talk about correlation functions on the second sheet, this is usually what they're talking about. Questions so far? Yes; the question is whether this is because of the presence of the third operator. In this case, it's the presence of operator 1 that makes it a subtle question whether the 2-3 singularity is in the right place. When we had only two operators, everything was guaranteed by symmetry, but the presence of other insertions, like operator 1, might mess up the 2-3 light cone. The way to think about this physically is that if you imagine sending a signal from point 3, then as long as it doesn't interact with anything, it just propagates on the light cone; but at this point it interacts with whatever you created with operator 1, and something funny might happen.
You don't want the signal to jump back in time when it interacts with what operator 1 created; that's what's at stake. You should be able to do either ordering; it should work for both cases. Actually, it turns out that for one of them it's trivial that the singularity is in the right place, and for the other it's non-trivial. The upshot is that causality, which we usually think of as a statement about commutators at space-like separation, is more generally thought of as analyticity. Often we talk about analyticity of the S-matrix; this is a different analyticity. It's analyticity in complexified position space, and causality is a statement about where the correlation functions are analytic. Osterwalder and Schrader proved a set of theorems, the reconstruction theorems, in the 1970s, which tell you when this works and when it doesn't. The answer is actually very simple and something we're familiar with: if you start with rotational invariance and what's called reflection positivity (I'll come back to that, but reflection positivity is related to unitarity), then when you go through this procedure of analytically continuing all the n-point functions of the theory, you're guaranteed to get something that is both Lorentz invariant and causal. A key point is that symmetries alone are not enough. It's very easy to come up with examples of functions that have the correct symmetries in Euclidean but the wrong branch points when you try to go to Lorentzian. So the second condition, the one related to unitarity, really plays an essential role in making sure the Lorentzian theory is causal. What is reflection positivity?
When I first started working on this subject, I thought reflection positivity was something complicated and obscure. Actually, it's just fancy words for something fairly obvious that you're already comfortable with. The basic picture is this: in Euclidean signature, with tau the vertical direction, take a plane of reflection symmetry, say the tau = 0 plane (this is all higher-dimensional, so it really is a plane). Then insert operators in a way that's reflection-symmetric across the plane. I can insert as many as I want; I can even insert a line operator up here if I can figure out how to reflect it. I can insert anything I want on the upper half plane together with the exact reflection of that stuff on the lower half plane. The resulting correlator has to be, first of all, a real number and, second of all, positive. The simplest example is a two-point function where we insert something here and its reflection there. (And yes, the plane can be chosen arbitrarily.) Why is this something you already knew? There are two ways to see it as obvious. One way is to think of it as a quantum state: we can think of the path integral on the lower half plane as preparing a quantum state at time zero, and then what this picture computes, in the language of bras and kets, is the norm of that quantum state. One half plane is the bra and the other is the ket, I can never remember which is which, and the inner product is obtained by doing the path integral on the whole plane to glue the two together. Another way to see it is to just write down the path integral: if you do that, you'll see that the path integral over the upper half plane is the complex conjugate of the path integral over the lower half plane.
So this is just a path integral mod squared (well, with another integral to glue the halves together). Reflection positivity is just a fancy way of saying that the Lagrangian is real; if the Lagrangian is real, this is guaranteed. Now, people do talk about theories that aren't reflection positive. In stat mech there are some famous examples, but that's because in stat mech you never care about doing things like this: you never go to Lorentzian signature. Stat mech is really a Euclidean subject, and if you're never going to Lorentzian, it doesn't matter; you can put i's in your Lagrangian, and that's what happens in those stat mech examples. But if we're talking about quantum field theory, we always want theories that are reflection positive. Yes, that's right: it's a little more complicated if you have parity violation, but to first order it's guaranteed if the Lagrangian is real. And in the path integral, L is just a number, the Lagrangian density; it's not an operator. As for the question about Chern-Simons theories: when you have parity violation, the statement of reflection positivity is more subtle and involves factors of i. In Chern-Simons, I don't quite remember, but I think you need an i and you get an i, so it's still reflection positive. Now, you can choose any plane. For example, if I have a four-point function with four operators inserted like this, it does me no good to reflect around the tau = 0 plane, so I learn nothing from that; but I can reflect across this other plane, and in Euclidean signature nobody cares which direction is which, so that correlator is also positive. This case is quite interesting from the point of view of the Lorentzian theory, because it is not the norm of anything: when I think of it in terms of the quantum states of the Lorentzian theory, this is not the norm of anything.
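The claim that a real Lagrangian guarantees reflection positivity can be sanity-checked in a toy Gaussian model. Below is a minimal sketch (lattice size and mass are illustrative choices): for a free Euclidean lattice action, the two-point function of a point and its tau -> -tau mirror image comes out real and positive, matching the path-integral-mod-squared picture.

```python
import numpy as np

# Toy sketch: a 1d Euclidean lattice "path integral" with a real quadratic
# action S = sum_i [ (phi_{i+1} - phi_i)^2 / 2 + m2 * phi_i^2 / 2 ].
# For a Gaussian theory <phi_i phi_j> = (K^{-1})_{ij}, K the kinetic matrix.
# Reflection positivity predicts <phi(-tau) phi(tau)> > 0 across the midpoint.
# N (lattice sites) and m2 (mass squared) are illustrative choices.
N, m2 = 21, 0.5
K = np.zeros((N, N))
for i in range(N):
    K[i, i] = 2.0 + m2
    if i + 1 < N:
        K[i, i + 1] = K[i + 1, i] = -1.0
G = np.linalg.inv(K)  # the lattice two-point function
mid = N // 2          # the tau = 0 reflection plane
reflected = [G[mid - d, mid + d] for d in range(1, mid + 1)]
print(all(g > 0 for g in reflected))  # True: every reflected pair is positive
```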
It's just some correlation function. If I draw this in real time, Lorentzian, then since everything was happening at tau = 0, I can directly think of these as points in real Minkowski time. So this correlation function in the Lorentzian theory has to be positive, and that's not obvious when we look at this picture; it doesn't look like a norm. You really have to go to Euclidean, think of it as the reflection across the funny plane, and then come back. You can actually push this a bit further by taking this picture and time evolving. I'm not going to go through this; it gets very confusing, and you have to wrap your mind around it in private. But if you time evolve some of these points forward in time, then go back to Euclidean signature, think of that as operators acting at those points, reflect to the other side, and return to this picture to see what happened to the other points, what happens is that the reflected points evolve backward in time. The positivity equation still holds as long as things are reflection symmetric in the sense of Rindler space, so I'll call this Rindler positivity. For example, if we have O_1, with its Rindler reflection O_1-bar, and O_2 with reflection O_2-bar, then the correlator ⟨O_1 O_2 O_1-bar O_2-bar⟩ is positive. And no, you can't pick arbitrary planes here, because this is Lorentzian; in Euclidean you can pick whatever plane you want. The reflection here is the one I already drew, the straight vertical plane. You have to think through what time evolution does when you hit these operators with e^(iHt), and this is what happens. Now, something interesting has happened here, which is that these points can be time-like separated. Everything is stuck in the Rindler wedges.
We can put whatever we want in the right wedge, including things that are time-like separated from each other, and what we put in the left wedge has to be the reflection of what's on the right. We're not allowed to put anything in the so-called Milne patches, the future and past wedges. Since things in a wedge can be time-like separated, we have to worry about ordering, and the ordering I wrote here is the correct one. For future reference, I'll call this the positive-ordered correlator: it's the ordering that gives you positive numbers. If we were to do a different ordering, it wouldn't be positive anymore; the other orderings can be complex, and they are. They don't need to be positive, but they're bounded. For instance, the time-ordered correlator in this case would be ⟨O_2-bar O_1-bar O_1 O_2⟩, and the inequality says that its magnitude is less than or equal to the positive-ordered correlator ⟨O_1 O_2 O_1-bar O_2-bar⟩. I won't give you the full derivation of this statement, but let me give you the flavor of where it comes from. There are a couple of ways to think about it. The most direct is to apply the positivity statement to linear combinations like O_1 O_2 plus a number times O_2 O_1. Since we're doing quantum mechanics, everything is linear: these positivity statements are true not only for individual operator insertions but also for linear combinations of operators, and by choosing a clever linear combination of O_1's and O_2's, you can derive bounds like this. Another way to think of it is as the Cauchy-Schwarz inequality: the Rindler reflection defines a positive inner product, and when you have a positive inner product, you get Cauchy-Schwarz inequalities; you can think of this bound as a statement about the dot product of two vectors in that language. Okay, so this was all completely general.
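The Cauchy-Schwarz step can be illustrated in a toy finite-dimensional model. In the sketch below, a positive-definite Hermitian form stands in for the inner product that the Rindler reflection defines; the matrix G and the vectors are random illustrative data, not anything from the lecture. The point is just that any positive inner product bounds the mixed pairing by the two "positive-ordered" norms.

```python
import numpy as np

# Toy sketch of the Cauchy-Schwarz step: a positive-definite Hermitian form
# <a, b> = a^dagger G b plays the role of the Rindler-reflection inner product.
# Random data, seeded for reproducibility; purely illustrative.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
G = M.conj().T @ M + np.eye(4)  # Hermitian and positive definite by construction

def inner(a, b):
    return a.conj() @ G @ b

a = rng.normal(size=4) + 1j * rng.normal(size=4)
b = rng.normal(size=4) + 1j * rng.normal(size=4)

lhs = abs(inner(a, b))**2
rhs = inner(a, a).real * inner(b, b).real  # the norms are real and positive
print(lhs <= rhs)  # True: the mixed pairing is bounded by the positive ones
```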
In fact, I don't think I've even assumed conformal invariance so far today. Other than the example where I talked about the two-point function, everything I've said applies to completely general unitary quantum field theories. And no, none of these constraints applies to an odd number of insertions. By the way, this Rindler positivity, as far as I can tell, was first discussed by Casini, at least in the recent literature, in the context of entanglement and Rényi entropies, and it was then revived in other literature a couple of years later. Okay, so that was all general, and now I want to go back to the averaged null energy condition and give the first proof: the ANEC from causality. Now that we understand what causality is, hopefully we can do this fairly quickly, since we only have ten minutes. There was also a question about whether this is related to cluster decomposition. I don't think so; cluster decomposition is about locality, and I don't think there's any relation between the two. Cluster decomposition does show up as an additional assumption in the Osterwalder-Schrader theorem; I just didn't mention it. So, back to the ANEC. Let me start with a very brief toy version of the argument. It's going to be rough, but it gives the right intuition. I'm going to forget about most of the complicated dependence of correlation functions on position; this isn't a conformal theory anymore, which is why it's a toy version. Suppose you had some theory where the correlator contains a factor 1/v^Delta, and we know from the pictures we've been drawing that this v = 0 singularity encodes the causal structure.
Here v is a null direction, and the fact that branch cuts start at v = 0 is the statement that you can send signals along a null line. Now, suppose you do something to this theory that shifts the singularity, so the factor becomes 1/(v - a)^Delta. (I won't call the shift epsilon, to avoid confusion; a is meant to look like a null energy.) This is a toy version, so I won't give a specific mechanism, but suppose inserting some other operators shifts the singularity from 1/v^Delta to 1/(v - a)^Delta. Then clearly causality requires a to be positive: if you shift the singularity, you'd better shift it in the positive direction, so that the signal you're trying to send gets a time delay rather than a time advance. But suppose we're not powerful enough to calculate the whole correlator and see the singularity shift; we can only compute in a series expansion in a. In that expansion, 1/(v - a)^Delta = (1/v^Delta)(1 + Delta a/v + ...). The point is that if we're doing perturbation theory, or OPEs, or anything that's an expansion in a small parameter, we're never going to see shifts in the singularity. A shift in a singularity is a big thing; you can't see it in a small expansion. But we might see corrections like this term Delta a/v, and the intuition behind the ANEC is that when you see corrections like this, they come with sign constraints, because they're the first sign that the singularities of the Green's function are trying to shift. So if you calculate some correlation function and see a term like this, you should be suspicious of it and think that it probably comes with a sign constraint.
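The toy expansion can be checked symbolically; here is a minimal sketch with sympy (the symbol names are mine):

```python
import sympy as sp

# Check the toy expansion from the argument: shifting the light-cone singularity,
#   1/(v - a)^Delta = v^(-Delta) * (1 + Delta*a/v + O(a^2)),
# so the leading correction is small in a but grows like 1/v near the cone.
v, a, Delta = sp.symbols('v a Delta', positive=True)
shifted = (v - a)**(-Delta)
expansion = sp.series(shifted, a, 0, 2).removeO()
claim = v**(-Delta) * (1 + Delta * a / v)
print(sp.simplify(expansion - claim))  # 0
```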
This is exactly where the ANEC is going to come from. What's special about this term, since it's trying to shift the singularity, is that although it's small (we're expanding in a, so it's a small correction), it grows as we approach the light cone. So the words to have in mind are: small but growing corrections can't come with just any sign; they have to come with a sign constraint dictated by causality. That's the intuition that tells you some theorem along these lines should be possible. To actually get the ANEC, here's the real version of the argument. I'm going to look at a four-point function. O is just some operator; it could even have spin. You can think of it as a probe operator, but these are just words; think of it however you like. I think of O as creating some state and psi as probing that state. By moving the psi's very far away, you can imagine trying to send a signal from one psi to the other. So we define the correlator ⟨O(1) psi(u, v) psi(-u, -v) O(-1)⟩, normalized by the two-point functions. Is this like an expectation value in the state created by O? Sort of; I think of it that way, but it's not quite literally true, because this is not a normalizable state. I don't have time to get into that, but I can explain afterward how to think about it. Now I need to relabel coordinates slightly: I'll call the invariant separation eta and set u = 1/sigma. You can see that I've set this up so that we can apply all the technology I've been talking about. First, we're in a limit where we can use yesterday's OPE formula on the psi insertions. Second, things are reflection symmetric about the Rindler horizon, so the positivity conditions come into play.
And third, I'm going to be talking about analyticity in the complexified u and v, because we said that causality is a statement about where that function is analytic. I'm not going to give all the coordinate changes and everything; I just want to give you the flavor of the rest of the argument. The statement of causality can be used to show that this function is an analytic function of sigma, where sigma = 1/u, in the upper half sigma plane. It's hard to picture this in spacetime, because all the coordinates have been complexified, but there is some complexified coordinate in which causality tells you the correlator is analytic in the upper half plane. And what do you do when you see an analytic function? You integrate it around closed contours, because that seems like a good thing to do with an analytic function, and when you integrate an analytic function over a closed contour, you have to get zero. Rather than trying to rush the last five minutes, I'll spend the first five minutes on this tomorrow, and then we'll move on to the other arguments. So we'll stop here for today.
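The final step, integrating an analytic function around a closed contour and getting zero, can be sketched numerically. The function f below is an illustrative stand-in whose only singularity sits in the lower half sigma plane, mimicking the analyticity that causality guarantees in the upper half plane; nothing about it comes from the actual four-point function.

```python
import numpy as np

# Sketch of Cauchy's theorem: a function analytic in the upper half sigma-plane
# integrates to zero around any closed contour there. f is an illustrative
# stand-in with a single simple pole at sigma = -i (lower half plane).
def f(sigma):
    return 1.0 / (sigma + 1j)

def contour_integral(center, radius, n=4000):
    # periodic trapezoid rule around a circle: spectrally accurate here
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta) * (2 * np.pi / n)
    return np.sum(f(z) * dz)

print(abs(contour_integral(1j, 0.5)))  # contour in the upper half plane: ~ 0
print(contour_integral(-1j, 0.5))      # contour around the pole: ~ 2*pi*i
```

The contrast between the two contours is the whole game: causality puts the singularities out of the way, so the contour integral vanishes, which is what turns into a sum rule for the ANEC.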