So you remember, in the last class we derived the Schwinger–Dyson equations, the equations of motion under the path integral, and as a simple application we computed the propagator, the two-point function of φ(x) with φ(y), in a free scalar theory. In momentum space the propagator came out to be −i/(k² + m²). And we ended the last class by saying this would be very good, except that it is ambiguous: k² + m² goes to zero on shell, so this propagator needs a better definition. Part of the purpose of what we're doing now is to work towards giving it that better definition. Fine, let's do that. So the aim of the next 15 or 20 minutes is to supply this definition, at least in the free theory. Okay. Let's go back to the path integral. Recall that we are basically discussing the path-integral representation of the transition amplitude ⟨x_f| e^{−iHT} |x_i⟩ = ∫ Dx e^{iS}. We will use this very often, so perhaps it's worth taking a minute to step back and ask: is this formula well defined? Is the path integral on the right-hand side well defined? We might be a little nervous because, you know, if a mathematician wants a well-defined integral, he wants the integrand to die off at infinity. That's a basic requirement for convergence. Now, a path integral is a lot of integrals — one integral for every x(t) — and you could ask, does the integrand die off at infinity? The answer is no, because the modulus of the integrand is 1, taken at face value. Now, as all of you know, this need not, in itself, make the path integral ill defined.
Because even integrands of constant modulus can integrate to something perfectly finite, because of very rapid cancellation of phases. And that's the idea: this path integral makes sense, even though the integrand doesn't vanish at large values, because of rapid cancellation of phases. However, this is a very delicate thing. It's a bit delicate, for instance, to use — as we did in the last class — the statement that the integral of a total derivative is zero. Is that really true when the integrand doesn't die off at infinity, but just oscillates? It's a bit unclear. Question: does the rapid cancellation of phases only occur away from the classical path? Answer: no — it happens in any situation where different parts of the integration region have a rapid change in phase without significant change in magnitude. So, in particular, it will also happen at very large values of field space. Questions like this suggest it would be nicer to have a cleaner definition of the path integral — one in which we can have greater confidence in the manipulations we're used to, and in the convergence. So we will now find a better definition of this object; what in some sense is the real, mathematically better-defined version of the path integral. And the motivation goes as follows. Look, what we did to get this path integral was to take the evolution operator and slice it up into little bits, and after each slice we inserted a complete set of states. Now, we worked with position eigenstates, but we could have worked with any basis. For instance, we could have worked with an energy eigenbasis, summing over all the energies from the lower bound on up.
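As a concrete aside — this is my own illustration, not from the lecture — the Fresnel-type integral ∫₀^∞ e^{ix²} dx is the simplest model of the phenomenon: the integrand has modulus 1 everywhere, yet the integral converges, to √π/2 · e^{iπ/4}, purely by phase cancellation. A tiny damping ε > 0 makes the convergence absolute, and the ε → 0 limit recovers the oscillatory answer — a toy version of the regularization we are about to set up. A minimal numerical sketch (the function name and cutoffs are my choices):

```python
import numpy as np

# Toy model of "convergence by phase cancellation": the damped Fresnel
# integral of exp(i x^2 - eps x^2) over [0, x_max]. For small eps this
# approaches the exact value sqrt(pi)/2 * exp(i pi/4) of the eps -> 0
# oscillatory integral.
def damped_fresnel(eps, x_max=60.0, n=400_000):
    x = np.linspace(0.0, x_max, n)
    f = np.exp(1j * x**2 - eps * x**2)      # |integrand| = exp(-eps x^2)
    return np.sum((f[1:] + f[:-1]) / 2) * (x[1] - x[0])  # trapezoid rule

exact = np.sqrt(np.pi) / 2 * np.exp(1j * np.pi / 4)
print(abs(damped_fresnel(0.01) - exact))   # small, and smaller as eps -> 0
```

The error shrinks as ε decreases, which is the pattern of the construction below: define a manifestly convergent object and reach the physical one as a limit.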
All our systems will be such that there's a lower bound on the energy, but no upper bound: the spectrum runs up to infinity. And the expectation is that very rapid phase cancellation among the very high energy states makes the contribution from infinity well defined — but this is just another way of saying what we said before, and just as delicate. However, suppose we instead consider the following object: e^{−TH}. In this object, the fact that very high energy intermediate states don't contribute is manifest, because such states are exponentially suppressed. Now, more generally, what is important is that the coefficient of H has a negative real part. So consider e^{−zH} with Re z > 0: this is a perfectly convergent, analytic function of z in that half-plane. The object of real physical interest, e^{−iHT}, corresponds to z = iT, which sits on the boundary of that region of the complex plane. So what we can do is take e^{−zH}, define it where everything is manifestly convergent, and then define the object of real physical interest by analytically continuing from Euclidean time. As we said, if we go exactly onto the imaginary axis, we don't have the same immediate definition, so we approach it as a limit.
So here is the way in which we define our actual object: t_Euclidean = e^{i(π/2 − ε)} × t_Minkowski, where we take the limit ε → 0 at the end, but we may need to keep ε in various intermediate expressions. This is our definition. We are going to define an auxiliary object, which is a nice analytic function of t_Euclidean, and then obtain the object we are actually interested in by analytic continuation. A pretty simple, pretty clear procedure. The first thing we are going to do is go back to quantum mechanics, compute this object, and see whether it pans out — whether it gives us a better definition. So we just repeat the kind of calculation we performed before. As you know, once we compute the infinitesimal evolution, the rest just follows. So let us compute ⟨x_{n+1}| e^{−δt H} |x_n⟩; this is the important part of the calculation. We do it exactly like we did before. Let us work with the simple case H = p²/2m + V(x), which is easy to generalize. To leading order in δt we can split the exponential into a product: e^{−δt H} ≈ e^{−δt V(x)} e^{−δt p²/2m}. The first factor, acting on the bra, becomes e^{−δt V(x_{n+1})}; then we insert a complete set of momentum states, which gives e^{−δt p²/2m} together with the plane-wave factor e^{ip(x_{n+1} − x_n)}. Now the integral over p is trivial to perform: it's a Gaussian, so we complete the square. So what do we get? Well, let us see. Writing δx = x_{n+1} − x_n, the exponent is −(δt/2m)(p − i m δx/δt)² minus the compensating term.
Just check that I got the cross term right: −(δt/2m) · 2p · (i m δx/δt) with the overall minus gives +i p δx. So that works. And then, of course, we must subtract what we added in completing the square: minus (i m δx/δt)² times δt/2m, which works out to −m δx²/(2δt). The remaining Gaussian integral over p is just a number, the measure factor. So what is this quantity? The matrix element is (measure) × e^{−δt V(x_{n+1})} × e^{−m δx²/(2δt)}. Now δx/δt is ẋ, and δx²/δt = ẋ² δt, so there is an overall δt: the exponent is −δt [V + m ẋ²/2]. And so, in the end, what is left behind is the path integral ∫ Dx e^{−S_Euclidean} — just a definition — where S_Euclidean = ∫ dt [m ẋ²/2 + V(x)]. It is absolutely clear that this is a much better defined path integral than the other one. Why is that? Because S_Euclidean is positive — or, since we don't know whether V is positive or not, what I mean is that the contribution from infinity in S_Euclidean is highly suppressed. If x goes to infinity, we assume V is either bounded or goes to infinity, and m ẋ²/2 is manifestly positive. So we are getting exponential suppression of crazy configurations: x going to infinity is highly suppressed — not by some delicate phase cancellation, but just by the fact that the integrand vanishes. So let's take a look at this Euclidean action once again, and see whether there is a simple way of obtaining it from the Minkowski one. What we had before in the path integral was, of course, e^{i S_Minkowski}, with S_Minkowski = ∫ dt [m ẋ²/2 − V(x)]. This action is real, and that is the key property.
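The claim that e^{−TH} suppresses high-energy intermediate states can be seen numerically. Here is a sketch of my own (not from the lecture — grid sizes and names are my choices): applying the short-time Euclidean kernel e^{−δt V/2} e^{−δt p²/2m} e^{−δt V/2} over and over to an arbitrary state projects it onto the ground state, and the rate at which the norm decays gives the ground-state energy. For the harmonic oscillator with m = ω = ħ = 1, that energy is exactly 1/2:

```python
import numpy as np

# Imaginary-time (Euclidean) projection for the harmonic oscillator.
# Repeated application of the sliced kernel e^{-dt H} kills excited
# states exponentially; the surviving norm decay rate is E_0 = 0.5.
n, L = 512, 20.0
x = (np.arange(n) - n // 2) * (L / n)
p = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
dt = 0.01
half_v = np.exp(-0.5 * dt * 0.5 * x**2)   # e^{-dt V/2}, V = x^2/2
kin = np.exp(-dt * 0.5 * p**2)            # e^{-dt p^2/2m}, m = 1

psi = np.exp(-((x - 1.0) ** 2))           # arbitrary starting state
for step in range(2000):
    psi = half_v * np.fft.ifft(kin * np.fft.fft(half_v * psi))
    norm = np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n))
    E0 = -np.log(norm) / dt               # decay rate -> ground energy
    psi = psi / norm
print(E0)  # converges to ~0.5
```

Notice there is no delicate phase cancellation anywhere: every factor in the iteration is a damped exponential, which is exactly the point of the Euclidean formulation.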
You remember we have this relationship: t_Euclidean = i × t_Minkowski (suppressing the ε). t_Euclidean was the t that appeared in the definition of our strange function e^{−TH}, and t_Minkowski is real time. And the claim is that i S_Minkowski equals −S_Euclidean, where S_Euclidean = ∫ dt_E [m ẋ²/2 + V(x)] is what appeared in the Euclidean path integral. Let's check: I take the Minkowski expression i S_Minkowski and insert t_Minkowski = −i t_Euclidean. What do I get? Firstly, I get a −i from the measure dt, which combines with the explicit i to give 1. But the kinetic term also picks up an extra minus sign, because each d/dt picks up a factor of i, and it appears squared. So i S_Minkowski becomes −∫ dt_E [m ẋ²/2 + V(x)], which is −S_Euclidean, just by making the substitution t = −i t_E. It's quite satisfying: the expression for the Euclidean path integral is just the naive substitution applied to the Minkowski one. Excellent. So, any questions or comments about this? Question: is the point basically that just doing this mathematical trick makes the action positive and the path integral well defined? Answer: well, it's physically motivated, because we know what that object computes. What we're doing in a path integral is breaking the evolution up into small bits and summing over the contributions of all intermediate states to these matrix elements. And we've arranged it — we've written it — so that these matrix elements will not receive significant contributions from intermediate states of high energy. Is this clear? It's also worth noting what would have happened with the opposite continuation.
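The substitution step just described can be checked symbolically. A one-liner sketch of my own (treating the Euclidean velocity as an ordinary symbol): under t_M = −i t_E the velocity picks up a factor of i and the measure a factor of −i, and i L_M dt_M collapses to −L_E dt_E.

```python
import sympy as sp

# Symbolic check of i S_Minkowski = -S_Euclidean under t_M = -i t_E.
m, v_E, V = sp.symbols('m v_E V', real=True)
L_M = sp.Rational(1, 2) * m * (sp.I * v_E) ** 2 - V  # L_M with v_M = i v_E
integrand_M = sp.I * L_M * (-sp.I)                   # i L_M dt_M, dt_M = -i dt_E
L_E = sp.Rational(1, 2) * m * v_E**2 + V             # Euclidean Lagrangian
assert sp.simplify(integrand_M + L_E) == 0           # i L_M dt_M = -L_E dt_E
print("i S_M = -S_E confirmed")
```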
If we had instead tried to compute e^{+TH}, the continuation would have given us this path integral without the minus sign, and that would have been terrible: the contribution of configurations at infinity would be manifestly divergent. So if you try to find a path integral expression for e^{+TH}, you will not succeed — it's too divergent an object to admit a path integral. Okay? Is this clear? Question: is this similar to the iε problem that we had — is it used for avoiding the pole on the axis? Answer: exactly; we'll see this in just a minute. Question: off-shell, you mean k² = −m²? Answer: for the last ten minutes we have not been talking about the propagator at all, but we're going to get back to it, and the point of this is going to be precisely avoiding the pole: it will put in the iε for us. We'll do that shortly. More questions? Question: what we saw is an analytic continuation of the amplitude — but can't you just say you analytically continue the action itself? Answer: we can word it that way, and it's often useful. But sometimes it's useful to go back to first principles and ask: what exactly is meant by "analytic continuation of the action"? You analytically continue a function — which function is being continued? Continuing the amplitude ⟨x_f| e^{−zH} |x_i⟩ in z is a well-defined mathematical statement. "Continuing the action" is a bit less clear; it's a trick, an observation, for obtaining the Euclidean action. I prefer to word it as: what we're doing is analytically continuing this object, the amplitude, and then you get what you need. That's a clean statement.
Question: here's a sort of mathematical question — is it always possible to analytically continue? Going from the Minkowski axis to the Euclidean axis, couldn't there be branch cuts separating the two? Answer: ah, that's a good question. In what we do here, I think we are essentially guaranteed — at least in any finite diagram of perturbation theory — that the continuation never encounters such singularities. This will become much more precise when we come to it. The point is this: you've got some integral in perturbation theory, and the integral is manifestly an analytic function of all external variables, except when the integration contour hits a singularity of the integrand. And this analytic continuation is arranged — I will explain this in much more detail — such that, with the propagators carrying the right iε's, the continuation can proceed without ever encountering a singularity. For any finite diagram of perturbation theory, at least, we'll prove this. I believe it's a more general statement, but at least that much is precise. Question: could you say a little more — is it enough that the potential has no non-analytic dependence? Answer: it's more robust than that. Look, e^{−zH} is manifestly an analytic function of z. Why? Take the matrix elements of this between two energy eigenstates: ⟨E| e^{−zH} |E⟩ = e^{−zE}, which is clearly an analytic function of z.
And then the matrix element between any two other states is a sum over energy eigenstates: the matrix element between energy eigenstates times the overlaps of those with your chosen states. Now, the only possible way this could lose analyticity is if that sum over exponentials failed to converge. But it can't do that, because of the exponential suppression e^{−zE} — unless you deal with truly crazy states, where the overlap with higher and higher energy states grows exponentially. And that would not be a normalizable state. Right? So I think it's quite clear, from this point of view, that the continuation goes through. Question: but if there are singularities in V(x) — couldn't you have branch cuts from that? Answer: no, no — wait. I didn't say anything about V, or about derivatives. All we required was that this system have a reasonable, bounded-below spectrum. That was all that was needed for my argument. We didn't talk about branch cuts, poles, anything like that. This is a slightly abstract argument; we'll flesh it out in perturbation theory. At this level, it's just clear that the amplitude is analytic. Okay. Now, since there are many questions about the propagator, let's get back to the propagator before we move on to whatever we need to say next. So now, applying the same technique to a scalar field theory — just identifying which is the kinetic term and which is the potential term — you easily see that if S_Minkowski = ∫ d^d x [−½ ∂_μφ ∂^μφ − ½ m² φ² − V(φ)], then the Euclidean action is S_Euclidean = ∫ d^d x [½ (∂φ)² + ½ m² φ² + V(φ)], and again i S_Minkowski = −S_Euclidean. And S_Euclidean is bounded below provided V is.
If V were not bounded from below, the theory wouldn't make sense anyway — just as in the quantum mechanics case, this is a basic requirement. Okay, let us repeat the computation of the propagator for this path integral. You know the answer, of course; I'm doing it again to give you practice with manipulating path integrals — and because this is now a legitimate operation: with the damped integrand, the integral of a total derivative really is zero. So: 0 = ∫ Dφ (δ/δφ(x)) [e^{−S_E} φ(y)]. The functional derivative acts on two things. Acting on φ(y) it gives δ^d(x − y); that's one term. Acting on e^{−S_E} it brings down −δS_E/δφ(x), which for the free theory (set V = 0) is −(−∂² + m²)φ(x). So, taking expectation values, and calling ⟨φ(x) φ(y)⟩ ≡ G(x, y), we obviously get the equation (−∂²_x + m²) G(x, y) = δ^d(x − y). This is the standard Green's function equation. Watch the signs: in the Euclidean action the gradient term and the m² term come with the same sign, so the operator is −∂² + m², with the ∂² carrying the minus. Now we go to momentum space, and this gives us G(k) = 1/(k² + m²). So the Euclidean propagator is simply 1/(k² + m²), and the question is how to obtain the Minkowski propagator from this expression. Notice, by the way, that the Euclidean action has invariance under SO(4) — rotations, not the Lorentz group SO(3,1). Notice also, by the way, that the trick of computing with a Euclidean action makes the whole thing look very much like statistical physics.
Because what you're doing is summing over all the configurations of what looks like a spin system, weighted by a measure e^{−S_E} which you can think of as a Boltzmann weight, with S_E playing the role of the Hamiltonian. We'll come back to this analogy as we proceed. So now we've got this propagator, and we want to understand how, given the propagator in Euclidean space, we can compute the propagator in Minkowski space — we want to actually do the analytic continuation. Something I didn't say, but let me say now: it is natural to define the analytic continuation not just of the transition amplitude, but of amplitudes with operator insertions. So suppose we define the following object — back in quantum mechanics for a moment: ⟨x_f| e^{−(t_f − t_2)H} x̂ e^{−(t_2 − t_1)H} x̂ e^{−(t_1 − t_i)H} |x_i⟩, and so on — exactly like we did when defining time-ordered operator insertions, except using e^{−tH} instead of e^{−itH}. Let's define this object and give it a name: G(t_1, t_2, …), some object with insertions. It also depends on t_i and t_f, and it depends on the initial and final states. Without any insertions, the corresponding Minkowski object was obtained by just setting t_i = i t_{Minkowski,i} and t_f = i t_{Minkowski,f}. Now we do the same analytic continuation on all the time arguments simultaneously. If we do that, you see, we're then computing what we actually want: these factors become the actual unitary evolution operators sandwiched between these states, and this gives us the actual time-ordered operator product insertions we want. Okay.
Question: what sense does time-ordering make when the theory treats time and space on an equal footing? In Euclidean signature time is equivalent to space, so couldn't I slice the path integral along any direction? Answer: firstly, what we're doing is defining an object — the object which will analytically continue to the time-ordered operator product in Minkowski space. Do you agree that if I define this, and then on this object I perform the analytic continuation — where every time goes to e^{i(π/2 − ε)} times the Minkowski time — I get back the quantity I want? Yes. Okay. Now let's get to your question. In quantum mechanics it was completely unambiguous: there is a line, and I time-order along that line. Your question is that in a Euclidean quantum field theory there are different ways of slicing the path integral. That's true, and it does all work out. One example you may be familiar with is the difference between radial quantization and equal-time-slice quantization. In radial quantization, the ordering of operators depends on their radius; in slice quantization, it depends on which slice you are on. The different slicings use different Hamiltonians and different notions of ordering, but since they're both governed by the same path integral expression, the two computations will, in the end, agree. I need to think a little more about your question — there may be something sharper to say than what I've said — but let me not try to think on my feet; I'll come back to it. But let's get back to our problem here. Okay.
Let's go back to quantum mechanics. Do you see that if I take this object, defined by these Euclidean insertions, and I continue the times in this way, then I get back the object I want to compute? Okay. So there is a natural object to define within the Euclidean framework, which is this object with insertions. And with that operator insertion in place, you can now ask: what is the path integral expression that computes this object? The answer is suggestive. This object is Euclidean, and we know what path integral computes it: the path integral with the Euclidean action. The slicing procedure goes through as before, and each insertion of x̂ just inserts a factor of x at the corresponding time. So, for two points: this object is computed by the Euclidean path integral ∫ Dx e^{−S_E} x(t_1) x(t_2), and its analytic continuation — simultaneous analytic continuation in all the times — gives us the object we want. Is it clear? But now we see something quite nice: the object whose simultaneous analytic continuation in all times gives us what we want is itself computed by a very nice, well-defined expression. So if we want to compute operator insertions inside the Minkowski path integral, it is often best to first do the Euclidean calculation — which is actually often easier, because you don't have to keep track of all the factors of i and the integrals converge — and then do the analytic continuation at the end. Is this clear?
At the moment this is a mathematical trick, but a very useful one. Now let us employ this trick in the computation of the propagator of the scalar field. Instead of directly computing ⟨φ(x) φ(y)⟩ in the Minkowski theory, the prescription tells us to compute ⟨φ(x) φ(y)⟩ in the Euclidean theory and then analytically continue. We already know ⟨φ(x) φ(y)⟩ in the Euclidean theory, because we just derived it: in momentum space it is 1/(p² + m²). So let's write the position-space version down. I'll go through this once very slowly; after this, whenever we do an analytic continuation, we'll just do it — but once, to get the logic, because the logic is a bit slippery. Okay. So what have we got? G_E(x, y) = ⟨φ(x) φ(y)⟩_E = ∫ d^d k/(2π)^d e^{i k·(x−y)} / (k² + m²). Do you understand the formula? The momentum-space correlator was ⟨φ(p_1) φ(−p_2)⟩ = (2π)^d δ^d(p_1 − p_2) / (p_1² + m²), and I've plugged that into the definition of the Fourier transform; the delta function eats one momentum integral, leaving a function of x − y. Is this clear? These are things you know; we won't dwell on them unless there's a question. Okay, so this is the Euclidean answer. Now, what are we supposed to do to get the object we're really interested in? Our rule acts on the times; the spatial dependence just goes along for the ride. Note that k·x here is the Euclidean dot product: this is just a Fourier transform.
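Before continuing the times, it's worth checking that this Fourier representation behaves as claimed. A quick numerical sanity check of my own (not from the lecture), in one Euclidean dimension, where ∫ dk/2π · e^{ikx}/(k² + m²) can be done in closed form: it equals e^{−m|x|}/(2m), the Green's function of (−d²/dx² + m²).

```python
import numpy as np

# Fourier-transform the 1d Euclidean propagator 1/(k^2 + m^2) by FFT and
# compare with the exact Green's function exp(-m|x|)/(2m).
m = 1.0
n, L = 4096, 200.0
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
G_k = 1.0 / (k**2 + m**2)
# ifft sums over modes with weight 1/n; multiplying by n/L converts the
# mode sum into the continuum integral dk/(2*pi).
G_x = np.fft.ifft(G_k).real * (n / L)
x = np.arange(n) * (L / n)
for i in [10, 50, 200]:                   # points away from the wrap-around
    print(G_x[i], np.exp(-m * x[i]) / (2 * m))
```

The printed pairs agree to the discretization accuracy, confirming that the momentum-space expression really does encode exponentially decaying Euclidean correlations.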
That is, k·x = k⁰x⁰ + k¹x¹ + k²x² and so on — a formal Fourier transform, nothing more. I've got a function of x and y, expressed through its Fourier transform; there is no confusion about that. Now, what we want to do is analytically continue this function. That's a clear procedure: the function is defined by the integral, and we continue it. So now I'm going to separate the space dependence from the time dependence. All the action happens in time, so I'm going to focus on time: I'll just write G(x⁰_Euclidean, y⁰_Euclidean) and ignore the other dependence, which just goes along — just so that you focus on the form. What are we supposed to do? We're supposed to take x⁰_Euclidean and rotate: x⁰_E = e^{i(π/2 − ε)} x⁰_M, and similarly substitute for y⁰ in the same expression. So we can compute G_M(x⁰, y⁰), the genuine Minkowski two-point function, where x⁰ and y⁰ are real Minkowski times. Just mathematics: G_M(x⁰, y⁰) = ∫ dk⁰/(2π) … e^{i k⁰ (x⁰ − y⁰) e^{i(π/2 − ε)}} / (k⁰² + k_i² + m²), where I'm not writing the spatial dependence or the spatial momentum integrals, just to avoid clutter — we're not very interested in them. I'll only write the dk⁰, which is where all the action happens.
As it stands, this dk⁰ integral goes along the real k⁰ axis, but it is not the usual Fourier-transform kind of expression as a function of x⁰ − y⁰, because of that additional phase factor e^{i(π/2 − ε)} multiplying the time. But this is what I get; I'm stuck with it. So if I want to turn it into something more familiar, I have to do some valid mathematics to get there. And what is that valid mathematics? You see, what I'm going to try to do is take this contour and use Cauchy's theorem. What I'm going to claim is that Cauchy's theorem relates the original contour integral to a rotated contour integral, plus the integrals over the arcs at infinity that connect them — provided there are no singularities inside the region swept out. First let's check: are there singularities inside here? This function, 1/(k⁰² + k_i² + m²), has singularities only on the imaginary k⁰ axis, at k⁰ = ±i√(k_i² + m²) — nothing off that axis. Since in my manipulation I rotate the contour by π/2 − ε, stopping just short of the imaginary axis — that's the point of the ε — I will never hit a singularity. Let me say this again, because it's important. I want to do an integral along the real axis. What I'm going to do is change the integration contour, using Cauchy's theorem. I can do that provided I don't hit a singularity along the way. So this integration contour can be continuously deformed into the rotated contour, plus these arcs at infinity, provided I don't encounter a singularity.
All I'm using is that around a closed loop enclosing no singularities, the integral is zero by Cauchy's theorem. Is this clear? Question: those arcs — since the rotation is by π/2 minus ε, is that why we took a circular arc, or could we have taken something else? Answer: we could have taken something else; the arcs are at infinity, and what we need to check is that their contribution vanishes. Okay, so first let's work out the algebra, and then we can come back to that. Let's see. Assume first that x⁰ − y⁰ is positive, and look at the exponent: the factor e^{i(π/2 − ε)} is basically i, so for real positive k⁰ the exponent i k⁰ (x⁰ − y⁰) · i has a negative real part — suppressed. As we rotate k⁰, the real part changes, and the claim is that along the right rotation it stays negative. Actually — let me do the following. Let me come back next class to the question of why the arc contributions can be neglected, because I've realized there's an element of the argument I was about to make that I don't believe; I fooled myself. I'll give you a proper argument next class. But granting that the arcs can be dropped: by rotating the k⁰ contour by e^{−i(π/2 − ε)} — the opposite rotation to the one on time — we exactly kill the phase e^{i(π/2 − ε)} in the exponent, because the two rotations cancel. And now we have the following expression: after the rotation we substitute k⁰_E = e^{−i(π/2 − ε)} k⁰_M, the inverse continuation. The measure gives a factor of −i, so G_M = −i ∫ dk⁰/(2π) e^{i k⁰(x⁰ − y⁰)} e^{i k·x} divided by — and what have we got in the denominator?
What was k₀² has now picked up the phase of the rotation: it becomes e^{-i(π − 2ε)} k₀², plus kᵢ² plus m². The minus i comes from the measure factor, and the phase comes from the fact that we rotated to this contour. (I'll get back to you about the arcs; we don't want to skip that.) Now, what happens here? e^{-iπ} is minus one, so this is now an honest Fourier transform: we get one over minus k₀² e^{2iε} plus kᵢ² plus m². Epsilon is just a small positive number. Expanding the e^{2iε}, this becomes one over minus k₀² plus kᵢ² plus m², minus i ε′, where ε′ is 2ε times k₀². The only important thing about ε′ is its sign: it is positive. The term became minus from plus because I expanded e^{2iε} = 1 + 2iε and the whole term carried a minus sign. Is this clear? So this analytic continuation gives you the rule. Once you can justify neglecting the arcs at infinity, it tells you what to do in the Fourier transform. The net result of this manipulation is: if you've got a Green's function in Euclidean space, as a function of the Euclidean momentum k_E — this is just the Fourier mode of the Euclidean correlator — then what you do to get the Minkowski function is the reverse rotation: k⁰_E = e^{-i(π/2 − ε)} k⁰. That is, k⁰ picks up a phase e^{-iθ} with θ just less than π/2.
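Collecting the steps just described into one line (my summary of the board computation; ε′ ≡ 2ε k₀² is just a relabelled positive infinitesimal):

```latex
\frac{1}{e^{-i(\pi - 2\epsilon)}\,k_0^2 + \vec k^{\,2} + m^2}
\;=\; \frac{1}{-\,e^{2i\epsilon}\,k_0^2 + \vec k^{\,2} + m^2}
\;\approx\; \frac{1}{-\,k_0^2 + \vec k^{\,2} + m^2 - i\,\epsilon'}\,,
\qquad \epsilon' = 2\,\epsilon\,k_0^2 > 0\,,
```

so only the sign of the infinitesimal imaginary part survives the ε → 0 limit, which is the whole content of the iε prescription.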
So the rotated momentum carries a phase e^{-iθ} with θ in the first quadrant, that is, cos θ minus i sin θ with sin θ positive; that is what displaces the poles off the real axis, one into each half plane. That is what is going on. However, let me not endorse this part completely; I'll get back to you next class about why this contour — hold on to that, I owe you an element of the argument. So, after this machinery — modulo the arc-neglect argument which I owe you — we come to the following conclusion. Including the factor of minus i from the measure of the continuation, the momentum-space Green's function in the real, Minkowski case is minus i divided by k² + m² − iε. This is exactly what we got by direct computation, except that there we didn't have the minus iε. So the minus iε is the result of working with a well-defined path integral: one that you define in Euclidean space and then analytically continue. [Student:] Here we are rotating, closing in this direction. If we closed it in the other direction, wouldn't the rotation instead take k_E to e^{+iπ/2} times k? [Lecturer:] Yes, absolutely. It is very important in which direction we do the analytic continuation. So which one did we have to do? We had to do the rotation that cancels the analytic continuation of time. And what chose the analytic continuation of time? It was chosen so that e^{iHt} becomes e^{-Hτ}. If we had done the opposite one, we would have had plus iε here. It's all guided by the basic principle that the well-defined path integral is the one for e^{-τH}.
This is also, by the way, the resolution of the contour question, because this object e^{-τH} is only well defined if τ is positive. I was getting confused about what chose the sign of x⁰ − y⁰; this positivity is the heart of it, and it is what picks that contour. But let me think it through again: this object is well defined for positive τ and ill-defined for negative τ, and that will translate into some dependence on the sign of x⁰ − y⁰. Let me say it properly next time. [Student:] So in this particular analytic continuation, you're assuming t is positive? [Lecturer:] That's basically the heart of it. That's what chooses the contour for the analytic continuation, and that is what will give you the neglect of those arc terms. But let me say it carefully. [Student:] Every evolution step goes forward; you never go backwards. [Lecturer:] Exactly. But let me say it properly. Okay, great. Any other questions or comments about this? So far in this class, we've understood the analytic continuation of the path integral, and we've understood how this analytic continuation also gives us the iε of the propagator — the usual iε prescription. Okay, an exercise. This is something you've already done in your first course, just to remind yourself: show that the canonical way of computing the time-ordered correlator ⟨T φ(x) φ(y)⟩ — just using canonical quantization, as in your first course — gives the same answer, with the same iε. As you all know, it's useful for this purpose to write down an integral representation of the theta function. Do that. By tomorrow I'll send you a list of exercises; this will be one of them, and it's something I don't mind coming to you and asking about. Okay, great. Fine. Any other questions or comments about this? [Student:] I have a question.
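Since the exercise asks you to compare the iε prescription against the canonical answer, here is a minimal numerical sketch of that check (my construction, not from the lecture; ω, t and ε are arbitrary toy values). It verifies that Fourier-transforming −i/(−k₀² + ω² − iε) in k₀ reproduces the standard time-ordered propagator at fixed spatial momentum, e^{−iω|t|}/(2ω), as ε → 0⁺.

```python
import numpy as np
from scipy.integrate import quad

# Check: ∫ dk0/(2π) e^{-i k0 t} (-i)/(-k0^2 + w^2 - i eps)
# -> e^{-i w |t|}/(2 w) as eps -> 0+  (Feynman propagator at fixed spatial momentum).
w, t, eps = 1.0, 1.5, 0.05   # toy values; eps kept moderate so the quadrature is easy

def integrand(k0, part):
    val = np.exp(-1j * k0 * t) * (-1j) / (-k0**2 + w**2 - 1j * eps) / (2 * np.pi)
    return val.real if part == "re" else val.imag

# quad handles real integrands only, so integrate real and imaginary parts separately.
re_part, _ = quad(integrand, -np.inf, np.inf, args=("re",), limit=500)
im_part, _ = quad(integrand, -np.inf, np.inf, args=("im",), limit=500)
numeric = re_part + 1j * im_part

# Exact residue answer at finite eps: the pole picked up sits at sqrt(w^2 - i eps),
# which has negative imaginary part (principal branch), i.e. below the real axis.
w_tilde = np.sqrt(w**2 - 1j * eps)
exact_eps = np.exp(-1j * w_tilde * abs(t)) / (2 * w_tilde)

print(abs(numeric - exact_eps))                            # quadrature accuracy
print(abs(numeric - np.exp(-1j * w * abs(t)) / (2 * w)))   # -> 0 as eps -> 0
```

The sign of the iε is what pushes the positive-frequency pole below the real axis, so closing the contour for t > 0 picks up e^{−iωt} rather than e^{+iωt}; flipping the sign of ε flips that choice, which is the point made above about the direction of the rotation.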
[Student:] How does the continuation choose between them? You can have different relations in Minkowski space as well, and the continuation must choose among them. [Lecturer:] Yes — your analytic continuation does choose a particular sign, that's true. Nonetheless, it's also true that different expressions with different Minkowski interpretations can have the same Euclidean expression, and therefore must be related. So the question is how to understand a priori whether these things should or shouldn't be the same. Again, I'll think about this and see if I can figure it out; we have to formulate the question properly and then there will be something to say. I feel there's something more to say about this. Okay. Other questions or comments? Let us move on now with analyzing path integrals. So we've understood how to analytically continue. By the way, this analytic continuation has some twists when we deal with fermions and some twists when we deal with gauge fields. But let me get to those twists after I finish saying everything I want to say; before we go to the complications, let me say the basic things. We'll talk about these twists at the end of this class. The key point to flag is that to get a well-defined path integral, you will have to analytically continue not just the time coordinate but also the contour of integration over fields like A⁰: everything that carries a zero index will have to be continued. We'll see how that works, but let's postpone that for a moment. Okay. Now I want to continue to analyze formal properties of the path integral. So, in the last class, we analyzed the Schwinger-Dyson equations. In this class, I want to try to understand how Noether's theorem works inside the path integral. We've got some field theory; for concreteness, we'll say it's a field theory of scalar fields, but it doesn't matter. We've got some field theory defined by the path integral ∫ Dφ e^{-S}. Suppose this field theory has a continuous symmetry.
What does that mean? It means that both the action and the measure — really, the whole integrand, the product Dφ e^{-S} — are invariant, in the following sense. I'm going to change variables: φ = φ̃ + ε F(φ̃). I'm using very compact notation: there may be many scalar fields, and F could depend on position; it's some functional of all these variables. I'm considering an infinitesimal symmetry: it's a continuous symmetry, and I keep only the infinitesimal part of it. Suppose that under this change of variables, when I plug this back into the path integral, the functional form of the path integral remains unchanged — which means that under this change of variables, this object becomes Dφ̃ e^{-S[φ̃]}, the same form again. Let me emphasize something. In any integral you can always make a change of variables, and a change of variables never changes the value of the integral as a number; that's just a property of integrals. So for any change of variables, this quantity will not change. But in general, if you make a change of variables, it will change the functional form. If you're doing ∫ dx x² and you substitute x = y², that becomes ∫ dy 2y · y⁴, with the measure factor included: the value is unchanged, but the functional form changes. So we're never forbidden from making changes of variables in an integral; symmetries are those special changes of variables that leave the functional form of the integrand invariant — you have the same integrand in the new variables as in the old. When that happens, the change of variables is said to be a symmetry of the integral. That's the definition of a symmetry. Any questions or comments? Suppose we have a symmetry.
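Here is a zero-dimensional cartoon of this definition (my toy example, with an assumed action S = φ² + φ⁴, not anything from the lecture): every change of variables preserves the value of Z, but only the reflection φ → −φ preserves the functional form of the integrand.

```python
import numpy as np
from scipy.integrate import quad

S = lambda phi: phi**2 + phi**4   # toy action with the Z2 symmetry phi -> -phi

# The value of the "partition function" is blind to any change of variables:
Z, _ = quad(lambda p: np.exp(-S(p)), -np.inf, np.inf)

# phi = -q: the integrand in the new variable is exp(-S(q)) again -- same
# functional form, so this change of variables is a symmetry.
Z_reflect, _ = quad(lambda q: np.exp(-S(-q)), -np.inf, np.inf)

# phi = 2q: the value is still Z (Jacobian 2 included), but the integrand is now
# 2*exp(-4 q^2 - 16 q^4) -- a different functional form, so not a symmetry.
Z_rescale, _ = quad(lambda q: 2.0 * np.exp(-S(2.0 * q)), -np.inf, np.inf)

print(Z, Z_reflect, Z_rescale)   # three equal numbers, two different functional forms
```

The point of the toy is exactly the lecturer's: equality of the three numbers is automatic; it is the preserved functional form that makes the first substitution special.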
Suppose we have a symmetry of our integral. I want to deduce the consequence of this for operator insertions and so on. In order to deduce this consequence, what I'm going to do is take this and make the change of variables φ = φ̃ + ε(x) F(φ̃), where ε now depends on position. This change of variables is no longer a symmetry. For instance, if ε generated a rephasing of φ̃, I've now made a local rephasing of φ̃, which is not a symmetry of the action, as you very well know, unless you also do something with a gauge field. There's no gauge field here, so this is no longer a symmetry of the action. So suppose I call this Z. Now, because a change of variables never changes the number, Z continues to be given by some path integral ∫ Dφ̃ e^{-S̃[φ̃]}. So far I've said nothing, because S̃ is a different functional that I'm not specifying. But now let's see what we can say about S̃. What do we know about S̃? We know that S̃ reduces to S when ε is constant. Therefore it must always be possible to write S̃[φ̃] = S[φ̃] + ∫ dᵈx ∂_μ ε(x) J^μ(x), where J^μ is some functional of the fields. You could ask many things about this. You could ask why the index is upstairs or downstairs. You could ask: why aren't there more terms, with more derivatives of ε? The answer is, there could be; but if there were, you could integrate by parts and move those extra derivatives onto J. So the change can always be written in this form, to first order in ε. Now, Z was equal to this, but Z was also equal to the original object; one has φ̃, the other has φ, but that's just the variable of integration. So by expanding to first order in ε, you see, we conclude: ∫ Dφ̃ e^{-S[φ̃]} (1 − ∫ dᵈx ∂_μ ε J^μ) = ∫ Dφ e^{-S[φ]}, because these are both Z.
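Writing out the comparison just made, to first order in ε (my transcription of the board steps into symbols):

```latex
Z \;=\; \int \mathcal D\tilde\phi\; e^{-\tilde S[\tilde\phi]}
  \;=\; \int \mathcal D\tilde\phi\; e^{-S[\tilde\phi]}
        \left(1 \;-\; \int d^d x\;\partial_\mu \epsilon(x)\, J^\mu(x) \;+\; O(\epsilon^2)\right)
  \;=\; \int \mathcal D\phi\; e^{-S[\phi]}\,.
```

Equating the first-order pieces yields ∫ dᵈx ∂_μ ε(x) ⟨J^μ(x)⟩ = 0 for arbitrary ε(x).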
But the first term here is equal to the right-hand side, so we conclude that the remaining piece vanishes: ∫ Dφ e^{-S} ∫ dᵈx ∂_μ ε(x) J^μ(x) = 0. What we do next, of course, is integrate by parts, so that what multiplies each ε(x) is ∂_μ J^μ. And therefore it's true at each point, because we can take ε(x) to be a bump localized wherever we like. The only way this can work — the same kind of argument we use when we derive the equations of motion in the calculus of variations — is if the quantity multiplying the arbitrary function vanishes. So we conclude that, within the path integral, the expectation value ⟨∂_μ J^μ(x)⟩ = 0. This is nice, but it only computes a one-point function: the object has just one insertion. However, we often want more, so let's redo our variations in the presence of many insertions. Suppose we had this object: O₁(φ(x₁)) ⋯ Oₙ(φ(xₙ)), some local operator insertions in the path integral. What does local operator mean? It means some function of φ that depends only on φ at xᵢ and its derivatives there. So suppose I've got this, and I do my change of variables. This object will, in general, also change: when I write Oᵢ(φ) and put φ = φ̃ + δφ, it changes to a new expression in terms of φ̃, differing at first order by δOᵢ. So when I redo the comparison of the two expressions for Z, do you see what we're getting? We get ∫ dᵈx ∂_μ ε(x) times the insertion of J^μ(x) together with O₁ ⋯ Oₙ, plus the variation terms.
The second kind of term has no derivative of ε: it comes with δO₁ at x₁ times the rest, and so on. So we conclude from this — I'm going a little fast; you should try to fill in the steps — that ⟨∂_μ J^μ(x) O₁(x₁) ⋯ Oₙ(xₙ)⟩ + δᵈ(x − x₁) ⟨δO₁ O₂ ⋯ Oₙ⟩ + δᵈ(x − x₂) ⟨O₁ δO₂ ⋯ Oₙ⟩ + ⋯ = 0. The statement that the insertion of the divergence of the current vanishes by itself — in fact ⟨m|∂_μ J^μ|n⟩ = 0 between any states — is the quantum Noether theorem: the existence of a symmetry implies a conserved current. The more interesting fact is the second one: in the quantum theory, the insertion of the divergence of the current with various operators is equal to sums of delta functions times the changes of those operators under the symmetry. This is the Ward identity — an extremely, extremely useful identity in studying all kinds of quantum field theories. Notice that both Noether's theorem and this Ward identity took about five minutes and two strips of the blackboard. If you go and look at Goldstein's derivation of Noether's theorem, even in classical physics — it's such a simple theorem, and yet its quantum generalizations, the derivation of the Ward identity in canonical field theory: good god, you have to complicate your life. So I have a new exercise for you: derive this using canonical quantization. You've probably done this before, but I want you to remember it and see how these two derivations work together. As you will see, it is much easier to derive these identities, and to get them right, in the path-integral representation. I also want to explain the connection between this way of thinking and the way of thinking of currents as what you get by differentiating with respect to a background gauge field coupled as a source, and also to explain conservation of the stress tensor from this point of view. We'll go through that in the next class. Okay, next Monday I will not be able to
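In zero dimensions there is no spacetime and hence no current, but the "sum of variations" part of the Ward identity survives as a selection rule: for a complex variable with U(1)-invariant weight, δOᵢ = i qᵢ ε Oᵢ, and the identity forces correlators of net nonzero charge to vanish. A toy numerical check (entirely my construction; the weight e^{−|φ|²−|φ|⁴} is an assumed example):

```python
import numpy as np

# Zero-dimensional "path integral" over one complex variable phi, with a
# U(1)-symmetric weight: phi -> e^{i a} phi leaves S(|phi|) invariant.
x = np.linspace(-4.0, 4.0, 801)
X, Y = np.meshgrid(x, x)
phi = X + 1j * Y
weight = np.exp(-(np.abs(phi)**2 + np.abs(phi)**4))
Z = weight.sum()   # the grid measure cancels in the ratios below

def corr(op):
    """Expectation value <op> in the toy ensemble."""
    return (op * weight).sum() / Z

charged = corr(phi**2 * np.conj(phi))  # net U(1) charge +1: forced to vanish
neutral = corr(phi * np.conj(phi))     # net charge 0: unconstrained, nonzero

print(abs(charged), neutral.real)      # ~0 versus an order-one number
```

The charged correlator vanishes (numerically, to rounding) precisely because rotating φ is a symmetry of the weight but multiplies the insertion by a phase, which is the zero-dimensional shadow of the variation terms in the identity above.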
do this class, so the next class, I suggest, is next Wednesday — Friday is a holiday. Okay, so next Wednesday. Yes. See you next Wednesday.