Okay, you've got it. So, in the last class we discussed three things. We first discussed the evolution operator of quantum mechanics in terms of path integrals. We then discussed how the evolution operator of the theory of a scalar field can be written in terms of a path integral. And then, in the last part of the class, we discussed how, if we wrote down the evolution operator for a putative theory of electromagnetism in terms of a path integral over the Maxwell action, we could find an interpretation of the Hilbert space over which it computes a transition amplitude. Any questions or comments about this? Any questions or comments or discussion?

Student: In the very last part of the lecture, at the stage where you took the further projection onto gauge-invariant states, there was the expression e to the minus i H t, which we had discussed earlier in the lecture.

Yes, let's go through that again. What we need is to study this path integral the way we studied it in particle mechanics. Just to remind you of the logic: the field strength F mu nu has components F zero i and F i j. What we do is take this path integral and slice it up in time, and then we try to find a Hilbert space interpretation of it. We noticed that the action could be written in the form F zero i F zero i over two minus F i j F i j over four. Then we decided that we would do the path integral as the integral over D a zero times the integral over D a i of e to the i times this action. We had a hard time giving a Hilbert space interpretation to the full path integral, because a zero did not come with a kinetic term, so we didn't know how to give a Hilbert space interpretation of the path integral over a zero. In the one case that we had understood, namely quantum mechanics, every variable in the path integral came with a kinetic term.
So, since that's all we knew, we didn't know what to do with a zero. We decided to be pragmatic. What we decided to do is: let's do the path integral over the a i first. As far as that path integral goes, it's a standard action, because F zero i has a kinetic term — it contains del zero a i. And then we would do the path integral over a zero later. The first observation was that the path integral over the a i could by itself be given a simple Hilbert space interpretation, just because the a i part is completely standard: we get the same interpretation as we did for the scalar field. In particular, this is a transition amplitude on a Hilbert space, where the Hilbert space is that of wave functionals of three scalar fields — in d spacetime dimensions it would be d minus one fields; we are working in three plus one, so three. So there was a Hilbert space, and this path integral was computing something in that Hilbert space. What was it computing? What it was computing, as always, is e to the power minus i H t: an evolution operator in the Hilbert space of three fields, built out of a 1, a 2, a 3 — square-integrable functionals of a 1, a 2, a 3. But what was H? In order to obtain that H, we just follow the usual rules. The usual rules of the path integral tell us that H is the Hamiltonian we obtain from the Lagrangian by the classical Legendre-transform procedure. So H has a term that is minus the Lagrangian, that is, F i j F i j over 4 minus F zero i F zero i over 2. The other term is p q dot: the q dot is a i dot, and the p is pi i equal to F zero i — which is not the same thing as del zero a i.
This p q dot term is almost of the form we want. To write it entirely in terms of F zero i, I added and subtracted del i a zero, using del zero a i equals F zero i plus del i a zero. The point is that the conjugate momentum is F zero i itself, and F zero i is pi i. Since I want to write down an operator, I write it in operator language: acting on functionals of a i, pi i is minus i delta by delta a i. So I've identified my H. H is equal to F i j F i j over 4 plus pi i squared over 2 plus pi i del i a zero. Is this clear? Okay.

So what's happening is that we are computing a transition amplitude in a Hilbert space of three fields, with a Hamiltonian that, of course, still depends on a zero. So we've still got to do the path integral over a zero. As far as the auxiliary problem of the a i path integral was concerned, a zero was just a fixed background. Now, it's nice that we solved the auxiliary problem, but that's not what we started out trying to do. We started out trying to find the Hilbert space interpretation of the full path integral, including the path integral over a zero. So now what we're going to do is, in addition, the path integral over a zero. Notice that a zero lives at every value of space as well as every value of time. What I'm going to do is take the path integral over a zero at one particular time and try to understand what that is. Firstly, we argued that, to leading order in delta t, the exponential of the a zero-independent part of H and the exponential of the a zero-dependent part can be separated: you can write e to the minus i H delta t as the product of the two exponentials. And then we argued that the path integral over a zero was a projection operator. So now I'll define the operator P, which is some normalization times the integral over a zero of e to the power i integral pi i del i a zero — one such integral at any given time.
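Let me collect the Hamiltonian computation of the last few minutes in one place (my transcription of the blackboard steps, in mostly-plus signature, with the coupling set to one):

```latex
\mathcal{L} = \tfrac12 F_{0i}F_{0i} - \tfrac14 F_{ij}F_{ij},
\qquad
\pi_i = \frac{\partial\mathcal{L}}{\partial\dot a_i} = F_{0i} = \dot a_i - \partial_i a_0 ,
```

so that, using $\dot a_i = \pi_i + \partial_i a_0$,

```latex
H = \int d^3x\,\big[\pi_i\,\dot a_i - \mathcal{L}\big]
  = \int d^3x\,\Big[\tfrac12\,\pi_i\pi_i + \tfrac14\,F_{ij}F_{ij}
     + \pi_i\,\partial_i a_0\Big].
```

After an integration by parts, the last term is $-a_0\,\partial_i\pi_i$: linear in $a_0$. That linearity is what makes the $a_0$ integral at each time a projector rather than a Gaussian, which is the next step.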
How does this operator act on a given state? It took psi of a i and gave you instead psi of a i plus del i a zero, integrated over all a zero — because pi i generates translations of a i, so e to the i integral pi i del i a zero shifts a i by del i a zero. Let me define this action, a i goes to a i plus del i of lambda, where lambda is an arbitrary function of space, as the action of what I'm going to call a gauge transformation. Okay? It's clear that this integral projects you onto gauge-invariant states: those psi's that have the property that psi of a i is equal to psi of a i plus del i lambda. How do you prove that? Well, suppose I act with a further shift by del i lambda, so I look at the integral over a zero of psi of a i plus del i of a zero plus lambda. Then I do a change of variables, a zero prime equals a zero plus lambda, and I've got back the same state. Okay? Now, of course, when we do path integrals there's always some overall normalization, which, as you remember, we don't care about. So, up to normalization, what is this operator? It's a projection operator. What's a projection operator? In any Hilbert space, it's an operator P such that P squared is equal to P. Why is this a projection operator? That's very simple. What does this operator do? It projects every wave functional onto the gauge-invariant sector. Now suppose you acted once with it on any wave functional: you've got a gauge-invariant functional. If you act again with the same operator, it doesn't do anything, because the wave functional is already gauge invariant — apart from the normalization, which we ignore. So it's a projection operator. You see? So what this tells you is the following: it tells you what we actually get when we do the path integral.
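Since a concrete model makes the group-averaging argument vivid, here is a minimal numerical sketch (entirely my own construction, not from the lecture): gauge fields valued in Z_k on the links of a small periodic lattice, with the projector built by averaging the unitary action of every gauge transformation. The Z_3 discretization and all names are illustrative assumptions.

```python
# Toy check that averaging over a (finite) gauge group gives a projector:
# P^2 = P exactly, once we divide by the "volume of the gauge group".
# Lattice: N = 3 links on a ring, each link variable in Z_k (k = 3);
# a gauge transformation acts as a_n -> a_n + lam_{n+1} - lam_n (mod k).
import itertools
import numpy as np

k, N = 3, 3
configs = list(itertools.product(range(k), repeat=N))
index = {c: i for i, c in enumerate(configs)}
dim = len(configs)                    # wavefunctions psi(a_1, ..., a_N)

def gauge_shift(a, lam):
    """Shift a by the lattice gradient of lam (periodic, mod k)."""
    return tuple((a[n] + lam[(n + 1) % N] - lam[n]) % k for n in range(N))

# P = (1/|G|) * sum over gauge transformations of their permutation action
group = list(itertools.product(range(k), repeat=N))
P = np.zeros((dim, dim))
for lam in group:
    for a in configs:
        P[index[gauge_shift(a, lam)], index[a]] += 1.0
P /= len(group)                       # divide by the gauge-group volume

assert np.allclose(P @ P, P)          # a projector: it squares to itself
```

The trace of P counts the gauge-invariant states, i.e. the number of gauge orbits — here 3, labelled by the Z_3 Wilson loop around the ring.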
At each time slice, that is what we actually get. We break up time into little bits, as we always do, and then what we get is e to the power minus i H delta t times a projector, times e to the power minus i H delta t times a projector, and so on — where H now is just the a zero-independent part — because at each time slice there is an a zero integral. Now, this looks like a lot of projectors. But notice two things about these projectors. Firstly, this projector commutes with the Hamiltonian. Remember, the projector is built by integrating over a zero, and for each fixed value of a zero the corresponding shift operator commutes with H; therefore the projector commutes with H. So you can commute the projectors either all to the left or all to the right, past all the evolution factors. Then we use this magnificent property that the projector is a projector: though we have an infinite number of copies of the projector, they all collapse into a single one, up to a normalization. That normalization is sometimes called the volume of the gauge group, which is of course infinite, and it will trouble us as we go on. But if we ignore the normalization, it's just one projector.

So now what do we see? We see that our path integral computes a transition amplitude in a very simple Hilbert space, with a very simple Hamiltonian. The Hilbert space is the one we discussed: square-integrable wave functionals of three scalar fields. The Hamiltonian is the one we discussed. However, the path integral comes equipped, automatically, with a projector. What does the projector do? It projects onto the subspace of the Hilbert space that is gauge invariant. So the Hilbert space interpretation of our path integral is now clear. It's a path integral over the three a i's, so the Hilbert space is the Hilbert space of three scalar fields — subject to one constraint.
But subject to the constraint that we only look at the sector of the Hilbert space that obeys the gauge-invariance condition. Now you can ask: is this consistent? You know, if you impose any old projection on a Hilbert space, it's not automatically consistent with the dynamics. Suppose I'm doing quantum mechanics, and I say: look, there's this region in space, and I project onto that subset of the Hilbert space such that the particle is in this region. That's a perfectly good thing to do at an initial time — it's a consistent projector to impose on states. However, it's not consistent to do dynamics with such a projection. Because if you're dealing with a free particle, and you start with a state that obeys this projection, very soon — in fact, in an infinitesimal time — the state will no longer obey the projection, and you would have to keep re-imposing it by hand. There is a reason for that: this projector does not commute with the Hamiltonian. Because if it did commute, of course, e to the minus i H t would pass straight through the projector. In our case, we have not imposed any old projector: we've projected with a projector that does commute, precisely because the Hamiltonian is itself gauge invariant. And so what we've done is consistent. It's consistent to study not just kinematics but also dynamics, subject to this projection. And what I mean by consistency is that if we impose this projection at one time, we've automatically imposed it at all times. We've already seen this in our derivation: this projector here, at each time step, was projecting you back onto the same subspace, constantly projecting you back — but then we saw we didn't need all of those. It collapsed to just one projector. Okay?
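The free-particle example can be checked in a few lines (a toy discretization of my own; the lattice size and names are illustrative): a projector onto a region fails to commute with the free Hamiltonian, while a projector built from a symmetry of H — here parity — commutes exactly.

```python
# Toy illustration: projecting onto a region is NOT preserved by free
# dynamics (the region projector fails to commute with H), while a projector
# built from a symmetry of H (parity) does commute.
import numpy as np

N = 8
# free-particle kinetic energy: discretized second derivative, open ends
H = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)

# projector onto "the particle is in the left half of the lattice"
P_region = np.diag([1.0] * (N // 2) + [0.0] * (N // 2))

# projector onto parity-even states; H is reflection symmetric, so this
# projector commutes with H
R = np.eye(N)[::-1]                   # reflection n -> N-1-n
P_parity = (np.eye(N) + R) / 2

assert not np.allclose(P_region @ H, H @ P_region)  # inconsistent projection
assert np.allclose(P_parity @ H, H @ P_parity)      # consistent projection
```

This is the consistency criterion in miniature: projecting onto the invariant subspace of a symmetry of H survives the dynamics; projecting onto a region does not.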
So now we've come up with a beautiful Hilbert space interpretation of Maxwell theory. Okay? The Hilbert space interpretation is what I said: a simple Hilbert space, a simple Hamiltonian, but subject to a projection. If the projector had not commuted with the Hamiltonian, then we would have had to constantly project, and we would have had to understand what the effective dynamics of such a thing was. It would be something of that sort, but, you know, it would likely never work. Path integrals that give you that kind of projector always come with a problem. The problem would be — what do you call it — a failure to preserve the norm. Suppose you take a state of some norm; you've got some wave function, and then you project: the norm of your wave function is not preserved. Then you would find it very hard to make things consistent with the probability interpretation of quantum mechanics. I'm not saying it's impossible — I'm not saying it's a theorem — but you'd find it very hard. Okay? This does not happen in our case.

Suppose we were doing this in quantum mechanics itself, projecting onto some region. Suppose our rule was that we evolve over an infinitesimal time, then project out that part of the wave function that does not live in our region, then evolve and project again. Such a rule by itself does not preserve the norm. Because the norm is the integral of mod psi squared over all space; if you throw away mod psi squared over some part of space, and then try to make this consistent by rescaling psi, you're in great danger of losing the linearity of quantum mechanics. This is a very slippery slope. One could try to do it, but it's difficult to make it work.

Student: It seems to me that a gauge condition is implicit in the Hamiltonian. Aren't we free to choose a gauge condition? Don't we have different gauge conditions?

Yes.
So, as we will see in evaluating this path integral as we go along, we will find ourselves free to insert delta functions that impose different gauge conditions. But there are two separate issues that I want to keep apart. The first question is this. Suppose we just have the path integral — that's our starting point — based on this action, with no gauge fixing: I am not choosing any gauge condition, I am just doing this path integral. Then you can ask: what is the quantum Hilbert space interpretation of this path integral? I have not mentioned the word gauge in the statement at all. And by following the rules — by quantizing on constant time slices — you are led to this interpretation. Now, there are various manipulations of the path integral that one can do, which are invariances of the path integral, and we will study these as we go along. But each of those needs to be justified by relating it back to the same path integral. This path integral is computing this thing. There are manipulations you can do to compute the path integral in different ways, as we will come to understand. Actually, there is one other interesting Hilbert space interpretation; when we come to it, we will discuss it, and the justification of it will be to show that it is equal to this one. This is what the path integral gives you; all the rest is mathematics. This is my view.

Now, I have to say that I have not seen a field theory textbook that discusses the quantization of gauge theories in this way. Perhaps this is because I am missing something, but I think it's because they are all missing something. This is just by far the easiest way — the cleanest way, in my opinion — to understand the quantization of gauge theories. The textbooks talk about fixing the gauge, about all kinds of rather artificial things.
It all seems, you know, unmotivated — whereas this, in my opinion, is completely canonical: you start with the path integral and follow the rules.

Student: A technical question. You read the Hamiltonian off from the action, but the Hamiltonian depends on the momentum. How do you incorporate the path integral over the momentum into the action?

Okay, there are two answers to this question. Firstly, you can. If you remember our derivation of the path integral in quantum mechanics, at an intermediate stage what we had was the phase-space path integral of e to the i integral of p q dot minus H of p and q — this step is exactly of that form. So, firstly, you could do it that way. But secondly, it's not needed. You see, what is the logic involved in going from here to here? The logic is: I give you a path integral, and I want to know the Hilbert space interpretation of it. The first line is a path integral; the second line is not — it is an operator statement. You ask: what operator in Hilbert space does the path integral compute? And it's that operator. You understand? Having said that, you might find it more convincing to go through the derivation via the phase-space path integral. You can do it, but it's not needed. The upper line is a path integral; the second line is an evolution operator in a Hilbert space; the claim is that the path integral computes the matrix elements of that operator. Is this clear? Any other questions about that?

Student: Why don't we get a gauge condition on a zero?

Why don't we get a gauge condition on a zero? You see, because our Hilbert space does not contain a zero. What was the Hilbert space? It was functionals of the three a i's — square-integrable wave functionals of the a i's. There cannot be a condition on a zero there; a gauge condition would have to be a condition on the wave functionals.

Student: I would rather say that we're integrating over all configurations of a zero, and then we split the configuration space into the various configurations of a i, and on each configuration of a i you're getting the projection out of the integral over a zero.
Student: So you actually do have a condition on a zero.

What do you mean, a condition on a zero? Okay, maybe there's a way of thinking of it that way, but I've not done that. All I did was do the path integral over a zero. Yeah, it's probably true that you can establish an equivalence with the a zero equals zero gauge, or something like that. But that's trying to match it onto someone else's way of thinking. From the logic that we are presenting here, this stands on its own. In that logic, our Hilbert space consists of functionals of the three a i's, and the gauge invariance of that Hilbert space involves only the spatial gauge transformations. Yeah, it's certainly true that what we're doing is very close to what you will find in textbooks on quantization in the a zero equals zero gauge — temporal gauge — but that's not needed for our presentation. Okay? We'll come back to this circle of ideas several times over the next two or three lectures, as we understand the language better and better.

Student: Actually, I don't know if you want to discuss it later, but I'm worried about the volume of the gauge group, because the way to regularize it would be to put the theory on a lattice, which will also have an effect on it. So, how do you regularize it?

Well, you know, you could work in a box.

Student: But that regulates the infrared. Suppose you were interested in a transition from one state to another — how do you regularize it then?

Sorry, what are you asking about? You're talking about regulating the volume of the gauge group at each point of the lattice? Okay — the volume of the a zero integral at each point of the lattice.
Okay, so now, as you know, one way of doing this is to use a compact gauge group when regulating on the lattice. Taking the group compact makes its volume at each site finite. You could ask: is there some way of writing a regulated version of the theory that agrees with this one at the continuum level? That is exactly what lattice gauge theory does. In fact, there's a big lattice gauge theory group here; when they compute things on their lattices, they never impose a gauge. They just regularize the gauge volume in this way and run it on the computer. It's routinely done. However, in our class we will not, in general, be particularly interested in running things on computers. We will generally be trying to compute the path integral in some formal way — building a perturbation series — as best we can. And in that process, the volume of the gauge group is going to hit us, so we will have to deal with it. And we will.

Let me say more. What is really going to hit us is that there are flat directions; it's not so much the volume at each point in space. As all of you know from your previous courses — and I should say I'm not trying to give a systematic introduction to gauge field theory, because you know it; I'm just giving you a perspective based on path integrals — as all of you know, just at the classical level, this action has an invariance. The invariance is that a mu goes to a mu plus d mu lambda, where lambda is a function on spacetime. Now, if you actually start trying to evaluate this path integral, then without some proper regularization you're going to get infinity. Because let's break up the path integral into two parts. Let's break it up into a path integral over all gauge orbits.
That is: any two a mu's that are related by a mu goes to a mu plus d mu lambda, I say, belong to the same gauge orbit. So let's integrate over all gauge orbits, and over the lambdas within each gauge orbit; we do the integral over gauge orbits last. The integral over lambda is a path integral — but it's a path integral that is just equal to the volume of the function space of lambdas. Because, from the fact that the action does not depend on lambda, we see that lambda enters the path integral nowhere, and so its integral diverges without some regularization. Okay. Now, as we've discussed, we can regulate this in some way. But if we actually try to do a calculation — perturbation theory, for instance — we will find that this flat direction exists even in the quadratic term. And we will find that it means the action as stated does not allow you to invert the quadratic term, because of the flat direction. So we cannot even set up perturbation theory as stated. Because of that, in order to actually do analytic evaluations of this path integral, we are going to have to kill this volume factor — in a clever, consistent way, which we'll talk about, perhaps in the next lecture. So we will come back to this. This volume of the gauge group will be a headache for us. But there is nothing terrible about it in principle.

Great. So now we know how to do the path integral quantization of scalar fields and of gauge fields. Let me spend a few minutes on gauge fields interacting with a scalar field. So suppose we had a complex scalar field — and let me be consistent with our mostly-plus signature conventions, so the action is minus the integral of del mu phi star del mu phi plus m squared phi star phi. Now notice that this action has a symmetry.
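Before moving on to the matter coupling, here is a short numerical illustration of the flat direction just discussed (my own sketch; the specific momentum vector is an arbitrary choice): the momentum-space kernel of the Maxwell quadratic term annihilates pure-gauge modes, so it has vanishing determinant and no inverse — no propagator until the gauge volume is dealt with.

```python
# The quadratic term of the Maxwell action in momentum space has the kernel
#   K_{mu nu}(k) = k^2 eta_{mu nu} - k_mu k_nu .
# The gauge flat direction a_mu -> a_mu + d_mu lambda shows up as a null
# eigenvector proportional to k, so K is not invertible.
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # mostly-plus signature
k = np.array([0.7, 1.3, -0.4, 2.1])      # an arbitrary (off-shell) momentum
k2 = k @ np.linalg.inv(eta) @ k          # k^2 = eta^{mu nu} k_mu k_nu

K = k2 * eta - np.outer(k, k)            # K_{mu nu}

assert np.allclose(K @ np.linalg.inv(eta) @ k, 0)  # pure-gauge null direction
assert abs(np.linalg.det(K)) < 1e-6                # hence K has no inverse
```

With that aside done, back to the scalar field and its phase symmetry.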
It has a symmetry under phases: phi goes to e to the i alpha phi, and phi star goes to e to the minus i alpha phi star. And this is a symmetry of the action only if alpha is constant — as all of you know, and as was discussed in great detail in previous courses. Once alpha is a function of position, it is no longer a symmetry of the action. But it can be promoted to a symmetry of the action; in fact, it can be incorporated into the gauge symmetry of the theory, if we do the following. We write S equals minus the integral of mod d mu minus i a mu acting on phi squared, plus m squared mod phi squared — I may have put a sign wrong; we'll check in a minute. What I've tried to do is make an action that involves a scalar field and a gauge field and interactions between them. However, I've tried to do it in a way that preserves the gauge invariance of the gauge field. That is, the action is gauge invariant once we assign appropriate gauge transformations to our scalar fields. So what are the gauge transformation laws under which this is invariant? You see, this will be invariant if d mu minus i a mu acting on phi transforms homogeneously — if it just picks up a phase. Under the gauge transformation, a mu goes to a mu plus d mu lambda. So if, at the same time, phi goes to e to the i lambda phi, then this whole thing is clearly invariant. Because, look at the two terms: when d mu acts on e to the i lambda of x times phi, we pick up an extra i d mu lambda of x; and the shift of a mu contributes a minus i d mu lambda of x; they cancel — that's the relative minus sign at work. Okay. Now, we could have a number in front of a mu here; that number is called a coupling constant. So the Maxwell part of the action was gauge invariant, and this matter part of the action is now also gauge invariant. So now suppose we want to write down a theory of a scalar field interacting with the gauge field.
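The cancellation just described becomes exact, and easy to check by computer, on a lattice (my own discretization, with illustrative names): put the gauge field on links, define the covariant difference (D phi)_n = e^{i a_n} phi_{n+1} − phi_n, and transform phi_n → e^{i lam_n} phi_n, a_n → a_n + lam_n − lam_{n+1}.

```python
# Lattice-flavored check that |D phi|^2 is exactly gauge invariant once the
# U(1) gauge field lives on links (periodic ring of N sites).
import numpy as np

rng = np.random.default_rng(0)
N = 10
phi = rng.normal(size=N) + 1j * rng.normal(size=N)  # complex scalar on sites
a = rng.normal(size=N)                              # gauge field on links
lam = rng.normal(size=N)                            # gauge parameter on sites

def Dphi(phi, a):
    """Lattice covariant derivative: e^{i a_n} phi_{n+1} - phi_n."""
    return np.exp(1j * a) * np.roll(phi, -1) - phi

phi_g = np.exp(1j * lam) * phi                      # phi -> e^{i lam} phi
a_g = a + lam - np.roll(lam, -1)                    # link transformation

# D phi transforms covariantly: it picks up only the local phase ...
assert np.allclose(Dphi(phi_g, a_g), np.exp(1j * lam) * Dphi(phi, a))
# ... so the action density |D phi|^2 is exactly invariant
assert np.allclose(np.abs(Dphi(phi_g, a_g))**2, np.abs(Dphi(phi, a))**2)
```

D phi picks up only the local phase e^{i lam_n}, which is the lattice version of the continuum cancellation between the derivative of e^{i lambda} and the shift of a mu. Now, back to building the interacting theory.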
Firstly, it's impossible to do that, at least in this way, if the scalar field is real. We need it to be complex. We need the scalar field, before we introduce the interaction, to have a global symmetry; this global symmetry is the U(1) phase. And then, tying the phase transformation of phi to the gauge transformation of the gauge field gives us an action that is gauge invariant.

Now, let me do a little thought experiment. Why are we so hung up on this gauge invariance? You know, in the last class, I started trying to discuss the Hilbert space interpretation of the theory we knew. Somebody who is not familiar with the last hundred years of physics might say: why don't we start with the action S equals minus one half the integral of del mu a nu del mu a nu? Isn't that the natural generalization of the scalar action to a field with an index? Why are you doing this crazy thing with F mu nu? Shouldn't we start with this action? Okay. What is this theory? In some sense, I know the Hilbert space of this theory. What's the Hilbert space? Four scalar fields. Okay? But are all four scalar fields identical? In what sense? You know, if we got four regular scalar fields, this is a fine theory, because each scalar field gives a fine theory. Is any one of these scalar fields special? A zero is special. Let's try to understand why. Exactly. Firstly, we should note that S is equal to minus this expression — I've chosen the overall sign carefully. And now, secondly, let's write it out carefully in terms of lowered indices: every del zero comes with a factor of eta zero zero. You see, the action is arranged so that, for the spatial components, the del zero term comes out positive. So what we get is one half a i dot squared minus one half grad a i squared. The action for the a i is going to be this. Nice.
So, reasonably enough, the action for the a i is fine. Okay? But what about the action for a zero? You see, here, the two factors of eta zero zero make the time-derivative term come out with the opposite sign, while the gradient term keeps its sign. So the action for a zero is minus one half a zero dot squared plus one half grad a zero squared. That's the problem with giving this theory a straightforward Hilbert space interpretation. The most straightforward thing to do now is to compute the Hamiltonian, following the usual procedure. For the a i we get H equals the sum over i from 1 to 3 of one half of pi i squared plus grad a i squared. What do we get for a zero? Exactly the same expression as before, except with a minus: minus one half of pi zero squared plus grad a zero squared. And now we have this strange space where the Hamiltonian is unbounded below. As a free field theory, you could live with it. But, you know, we're not interested in just a free field. Once we introduce interactions, the unboundedness from below of this Hamiltonian will mean that excitations of the other fields can constantly gain energy while the a zero sector keeps rolling down. Okay? This theory would be sick. There's another option: we can try to cure this problem with the Hamiltonian by exchanging the roles of A and A dagger for the a zero oscillators, but that leads to negative-norm states in the Hilbert space. Okay? So, basically, this theory is hard to make sense of if we preserve Lorentz invariance — and it is Lorentz invariance that forced the relative sign between the a i terms and the a zero terms to be minus. Okay? So we want to make a Lorentz-invariant theory of a massless vector field, and the straightforward way of doing it does not work. What does work is very interesting. A zero is giving us trouble. So what we need is to make an action in which a zero appears, at the Hamiltonian level, only linearly. In the Maxwell action, a zero did not look like a Lagrange multiplier, because it appeared quadratically — but once we went to the Hamiltonian for the other three variables, it appeared purely linearly in a zero.
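The sign bookkeeping for the "four scalars" action, as I reconstruct it (mostly-plus signature $\eta = \mathrm{diag}(-1,+1,+1,+1)$):

```latex
S = -\frac12\int d^4x\,\partial_\mu a_\nu\,\partial^\mu a^\nu
  = \int d^4x\,\Big[\tfrac12\dot a_i^2 - \tfrac12(\partial_j a_i)^2
     - \tfrac12\dot a_0^2 + \tfrac12(\partial_j a_0)^2\Big],
```

so that $\pi_i = \dot a_i$ but $\pi_0 = -\dot a_0$, and

```latex
H = \int d^3x\,\Big[\sum_{i=1}^{3}\tfrac12\big(\pi_i^2+(\partial_j a_i)^2\big)
   \;-\;\tfrac12\big(\pi_0^2+(\partial_j a_0)^2\big)\Big].
```

The $a_0$ oscillators contribute with an overall minus sign: a Hamiltonian unbounded from below, which is the sickness being described.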
This is something I should have emphasized: it was a very important part of our derivation that the Hamiltonian is linear in a zero. It is because of that linearity that the integral over a zero gives you a projector. Okay? So what we need, first, is an action of that kind. Now, you see, this projector commuted with the Hamiltonian because it is a projector onto the invariant subspace of an operation that commutes with the Hamiltonian — an invariance of the Hamiltonian, an invariance of the Lagrangian. So this whole procedure was consistent because we had this gauge invariance.

Student: Here, if you look at the problem, if you fix a zero to zero...

What do you mean, fix a zero? I've got a path integral. I've got a path integral, and a zero gives me a term. You can't just drop it.

Student: So we don't have a gauge choice over here, unlike the previous case.

You don't have a gauge choice — but let me emphasize: so far I've never used the words "gauge choice". I don't think that's the way to think of these theories. Yeah, there's no gauge invariance over here, unlike the previous case. But even in the previous case, we did not fix any gauge. We looked at the path integral, and this is what we got. This is life; you just have to come to terms with it.

Okay, so let me give you a summary — and let me also say that there are no no-go theorems in this respect. People who prove no-go theorems always make assumptions, and those assumptions may or may not be correct. Okay? No-go theorems are very dangerous, because they often shackle your imagination. So I don't want to claim that there's a no-go theorem. But nobody has ever managed to make sense of this wrong-sign theory, and it doesn't look likely that anyone will. Okay? However, once we modify the action so as to have this gauge invariance, two things happen simultaneously.
You know: a zero became a Lagrange multiplier, and it projected onto the subspace of gauge-invariant states, which existed precisely because the theory had a gauge invariance. So, if we now want to look at this gauge theory interacting with something — as we do, for instance, if we want to discuss the non-abelian theory — then we want to build the interaction in a way that does not destroy the gauge invariance. Because if you destroy the gauge invariance — unless something happens that nobody has seen for seventy years — you're going to land in the sick category. That's why we were so eager to make a theory of the gauge field interacting with the scalar field in a way that preserves gauge invariance. Is this clear? Okay?

So now we've got this theory, and — I'm spending too much time on this, but — I just want to give you one minute on the Hilbert space interpretation of this theory. Let's play our same game. What do we get? Suppose we do the path integral over phi, phi star, and the a i first. That gives us a Hilbert space of wave functionals of a i, of phi, and of phi star. We're still left with the path integral over a zero. Okay? And what do we get there? Once again, all we have to do is evaluate the Hamiltonian, and then we do the a zero integral. What are the canonical momenta? The gauge part is unchanged: from it we get e to the power minus i H t, where H contains pi i squared over 2 plus F i j F i j over 4, plus, as before, the term pi i del i a zero. Then we're going to get contributions from the scalar part. What is the momentum conjugate to phi? Let me set up notation: call del zero minus i a zero acting on phi "d zero phi"; similarly, del zero plus i a zero acting on phi star is "d zero phi star". You see, the action contains the mod squared of d zero phi.
That conjugate combination we will also write as (D_0 phi)*. So the momentum conjugate to phi* is Pi_{phi*} = D_0 phi, and the momentum conjugate to phi is Pi_phi = (D_0 phi)*. Is that clear? Now let us see what the Hamiltonian is in this sector. Of course there is D_i phi* D_i phi. Then there is the piece from phi-dot Pi_phi plus phi*-dot Pi_{phi*} minus the Lagrangian, and we must rewrite the phi-dot terms in terms of the momenta. The kinetic piece becomes Pi_phi Pi_{phi*}, but Pi contains A_0, so in that rewriting we subtracted something, and we should add it back: we pick up the extra terms + i A_0 phi Pi_phi - i A_0 phi* Pi_{phi*}. Is this clear? What is the net conclusion? The net conclusion is the full Hamiltonian density, schematically (up to sign conventions): H = F_ij F_ij / 4 + F_{0i} F_{0i} / 2 + Pi_phi Pi_{phi*} + D_i phi* D_i phi + m^2 phi* phi + A_0 [ d_i F_{0i} + i ( phi Pi_phi - phi* Pi_{phi*} ) ]. Now we run our game of finding a Hilbert space interpretation. You see that A_0 now acts not just on the A_i, through d_i F_{0i}, but also on the scalars. What are the wave functionals in our case? Wave functionals are functionals of A_i, of phi, and of course of phi*, because phi is complex: the functional depends on phi and on phi*. So on such a wave functional, what is the A_0 term the generator of? It is the generator of phase rotations: phi goes into e^{i alpha} phi. Just look at the operator i phi Pi_phi: every time it sees a phi in the wave functional, Pi_phi kills that phi, the phi out front replaces it, and there is an i; exponentiated with a parameter alpha, that is exactly the statement phi goes into e^{i alpha} phi. Every time you see a phi*, you do the opposite: phi* goes into e^{-i alpha} phi*. And the d_i F_{0i} piece, as before, gives A_i goes into A_i + d_i alpha. You see, once again, this is precisely the action of a gauge transformation, now including all the matter fields. So once again our Hilbert space interpretation is very simple. You have a simple algorithm: you have an evolution, and you have a projection, and the projection is the projection onto gauge-invariant states. A student asks: why does phi* transform with the opposite phase? Look: Pi_{phi*} acting on the phi dependence of the wave functional generates nothing; only acting on the phi* dependence does it generate a phase, and it comes with the opposite sign of i. This is an operator statement, and we are asking how it acts on a wave functional. Okay? So now we understand two things: how path integral quantization of a scalar field works, and how that of a U(1) gauge field works. But before we declare victory in our understanding of basic path integral quantization, there are two kinds of fields we have not yet done: fermionic fields, and non-abelian gauge fields. Let us study them one by one; that will complete our survey of the basic fields of field theory. Then we will study the path integral a little more to see where it takes us, and then we will move on to study specific theories. A student observes: sir, that last A_0 term in the bracket kind of looks like A_0 j_0. Very good. It is exactly that, and that is a general rule: the way that A_0 in the Hamiltonian interacts with additional degrees of freedom is by the coupling of A_0 to the charge density j_0. Now, why is that equal to what we said? That is equal to what we said because j_0 is precisely the operator that generates the local symmetry. So this is another way of saying what we want to say.
Since this is the generator of the local symmetry, the exponentiated action of A_0 times that generator produces the finite symmetry transformation, e^{i alpha Q} with Q built from j_0. Excellent. Very good. Other questions or comments? Okay. Now let us turn to the path integral quantization of fermions, the path integral representation of the Dirac theory. As you know from your standard first study of field theory, when you try to make a field theory based on the Dirac equation, you end up with canonical anticommutation relations in place of canonical commutation relations: the rule for psi and psi-bar in the Dirac theory is an anticommutator where the scalar field had a commutator. So what we want to do is study a path integral that naturally represents the evolution operator on a Hilbert space that has natural anticommuting operators. All Hilbert spaces have commuting operators; that is not a big deal. We want to find anticommuting ones. So we start with the study of a very simple Hilbert space that has very simple and familiar anticommuting operators: the two-state system. The Hilbert space has two states; think of it as a spin, though it does not need to be any real spin. Two states, up and down. So it is a two-dimensional Hilbert space, and you will agree with me that you can hardly get simpler; only a one-dimensional Hilbert space is simpler, and that is trivial. Okay. Fine. Now, on this Hilbert space I am going to define two operators. I will define the operator psi so that it is the lowering operator: psi on up gives down, and psi on down gives zero. And I will also define a second operator; here I am thinking of Polchinski's conventions. Polchinski's appendix, the one I described to you yesterday, is by the way a very useful reference for what I am talking about; I am essentially just detailing what he does in that appendix. I will define also an operator chi, which is the raising operator: chi on down gives up, and chi on up gives zero. Easy. Okay? Now, these two operators have familiar names in the two-state system. Can you tell me what their names are? What is this one called? The lowering operator; I was expecting some of you to be more sophisticated. We have got a two-state system, so the matrix representation of these operators is by two-by-two matrices, and two-by-two matrices have a famous basis, the Pauli matrices. Which Pauli matrix is this one? psi and chi are just fancy names for sigma_minus and sigma_plus. But we are going to call them psi and chi rather than sigma_minus and sigma_plus so that the spin analogy does not confuse us; that is all. Now, what is the anticommutation relation between psi and chi? The relation is {psi, chi} = 1. You know this from the Pauli matrices, but we can just check it: if it is true, it should be true on any state. Normally it is tough to check state by state because there are infinitely many states, but here there are two, so we can just check it state by state. Let us look at the action of the anticommutator on down. There are two terms: psi chi plus chi psi. For chi psi, psi kills down, so that term vanishes. For psi chi, chi takes down to up, and then psi brings up back to down. So the anticommutator takes down to down, and it is easy to see that it also takes up to up. So it is the identity operator: the anticommutator of psi and chi is the identity. And so our two operators psi and chi obey canonical anticommutation relations.
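This check is quick to carry out with explicit matrices. Here is a small illustrative NumPy sketch (added for reference, not part of the lecture); the basis ordering and variable names are my own choices:

```python
import numpy as np

# Basis convention (an assumption for this sketch): |up> = (1, 0), |down> = (0, 1).
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# psi is the lowering operator (sigma_minus): psi|up> = |down>, psi|down> = 0.
psi = np.array([[0.0, 0.0],
                [1.0, 0.0]])
# chi is the raising operator (sigma_plus): chi|down> = |up>, chi|up> = 0.
chi = psi.T

# Both operators square to zero ...
assert np.allclose(psi @ psi, 0) and np.allclose(chi @ chi, 0)
# ... and check the defining actions on the basis states.
assert np.allclose(psi @ up, down) and np.allclose(chi @ down, up)
# The anticommutator {psi, chi} is the identity: canonical anticommutation.
anti = psi @ chi + chi @ psi
assert np.allclose(anti, np.eye(2))
```

The state-by-state check from the lecture is exactly what the two middle assertions do; the final assertion is the statement {psi, chi} = 1 as a 2x2 matrix identity.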
So, the question we are asking is this. Suppose on this Hilbert space we have an evolution operator e^{-iHt}, where H is some function of psi-hat and chi-hat. Can we represent it as a path integral? A student asks: the way you do quantization is by promoting Poisson brackets to commutation relations, so where are the Poisson brackets here? We will see the analogue soon. At the moment, all we have done is look at the two-state system, and there is nothing classical in sight. The aim is to build a classical-looking formulation of the two-state system. It seems like a crazy thing to do, because two states seem irreducibly quantum. And yet the aim is to find some way of thinking of the two-state system path-integrally, and we are going to find that you can do it, in some formal sense. Hang on, let me answer the question: why is this a great system to study? Because it implements canonical anticommutation relations, and notice it is not far from what we actually want. You remember that in the Dirac system we have anticommutation relations that look very complicated, some field anticommuting with some other field. But go to the Fourier basis, and the anticommutation relations just become {a_p, a_q^dagger} = delta_{pq}: each mode implements precisely this two-state algebra. In other words, the Dirac system is just a large collection of two-state systems. Once you understand how to work with two-state systems, ramping up to the Dirac system is very easy. So the two-state system is to fermionic path integrals what the theory of one variable x(t) was to bosonic path integrals: the simplest path integral, which, once you understand it completely, allows a trivial generalization. So let us construct this path integral. The first thing we do is define a state.
We define the state |psi>, to be specified in a moment. We could first define it concretely as a two-component object, an ordinary state in the two-state space, with psi a complex number. But now we are going to do something really crazy. Recall a standard abuse of notation: position operators are sometimes denoted by x, and their eigenvalues are also denoted by x. Here is the crazy something. What I am going to do is to say: let psi not be a number, but instead an abstract anticommuting symbol, defined by its rules; and then we will think a little more about what that means. But first, hang on, before you jump to the conclusion that this is not mathematics: it is perfectly well defined. So, define an algebra of anticommuting numbers, Grassmann numbers. Call them psi_1, psi_2, ..., psi_n. The algebra of these numbers is just the following: they anticommute with each other, psi_i psi_j = - psi_j psi_i, and in particular psi_i psi_i = 0, so each one squares to zero. These are purely abstract definitions; we are manipulating formal symbols. I will also define an integral. Find this a strange rule if you like; let us first work with one variable. The rule is: the integral of 1 is zero, int dpsi 1 = 0, and the integral of psi is one, int dpsi psi = 1. That is the definition. As a notational device I also define int psi dpsi = - int dpsi psi: the position of the dpsi is important; this is pure notation. I will also define differentiation of these objects. By the way, why did I stop at defining the integral of 1 and the integral of psi? Why did I not also define the integral of psi squared? Because every function of psi is a linear combination of 1 and psi: psi squared is zero, therefore psi cubed is zero, and so on.
So if I know how to integrate 1 and psi, I know how to integrate everything. Next, differentiation: d/dpsi of 1 is equal to zero, and d/dpsi of psi is equal to one. These definitions are pretty strange. In particular, the integral of 1 is the same as the derivative of 1, and the integral of psi is the same as the derivative of psi. So integration and differentiation are basically the same operation. They are just definitions; do not read too much into the fact that integral and derivative coincide. You can check that these definitions obey an analogue of integration by parts; the essential point is that, on any function a + b psi, both operations pick out the coefficient b of the psi term and kill the constant term. Crazy definitions, but okay, let us go. Last thing: more conventions. If I write int dtheta_1 ... dtheta_n of theta_n theta_{n-1} ... theta_1, with the thetas in this reversed order, then each dtheta kills the theta just next to it, and the answer is 1. It is important that I chose the other ordering for the thetas: with the rule for moving dtheta's through thetas, the innermost d hits the innermost theta, and so on outward. Yes, that's right. Okay. Now let us step back a little, and ask: are there any familiar objects we could build that obey exactly these anticommutation rules? Just to reassure you that this is indeed not some crazy unfamiliar structure. If you find a familiar representation of this algebra, it is very easy to trust it. Here is a concrete way to pose it: how would you implement this algebra on a computer? Suppose you did not want to teach the computer the abstract rules, but you wanted to do some Grassmann, anticommuting algebra in Mathematica, using only the functions Mathematica already has. What do you do? Well, you need some objects such that all these guys anticommute with each other.
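To make the remark about implementing the algebra on a computer concrete, here is a minimal sketch in Python rather than Mathematica (an illustration added here, not from the lecture; the class and function names are my own). An element is stored as a dictionary from ordered tuples of generator indices to coefficients, and `berezin` implements the rules int dpsi 1 = 0, int dpsi psi = 1:

```python
class Grassmann:
    """Element of a Grassmann algebra: {sorted tuple of generator
    indices: coefficient}, e.g. {(1, 2): -3.0} means -3 psi_1 psi_2."""
    def __init__(self, terms=None):
        self.terms = {k: v for k, v in (terms or {}).items() if v != 0}

    @staticmethod
    def gen(i):                      # the generator psi_i
        return Grassmann({(i,): 1.0})

    @staticmethod
    def const(c):                    # an ordinary number
        return Grassmann({(): c})

    def __add__(self, other):
        out = dict(self.terms)
        for k, v in other.terms.items():
            out[k] = out.get(k, 0.0) + v
        return Grassmann(out)

    def __mul__(self, other):
        out = {}
        for a, ca in self.terms.items():
            for b, cb in other.terms.items():
                if set(a) & set(b):
                    continue         # repeated index: psi_i psi_i = 0
                key, sign = _sort_with_sign(a + b)
                out[key] = out.get(key, 0.0) + sign * ca * cb
        return Grassmann(out)

def _sort_with_sign(idxs):
    """Sort indices, tracking the sign picked up by each swap."""
    seq, sign = list(idxs), 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return tuple(seq), sign

def berezin(elem, i):
    """Integral over dpsi_i: terms without psi_i integrate to zero;
    otherwise anticommute psi_i to the front and strip it."""
    out = {}
    for idxs, c in elem.terms.items():
        if i in idxs:
            sign = (-1) ** idxs.index(i)
            rest = tuple(x for x in idxs if x != i)
            out[rest] = out.get(rest, 0.0) + sign * c
    return Grassmann(out)

p1, p2 = Grassmann.gen(1), Grassmann.gen(2)
assert (p1 * p1).terms == {}                        # psi^2 = 0
assert (p1 * p2 + p2 * p1).terms == {}              # anticommutation
assert berezin(Grassmann.const(1.0), 1).terms == {}         # int dpsi 1 = 0
assert berezin(berezin(p2 * p1, 2), 1).terms == {(): 1.0}   # int dpsi1 dpsi2 psi2 psi1 = 1
```

The last assertion is the lecture's ordering convention: int dtheta_1 dtheta_2 of theta_2 theta_1 equals 1, with the innermost d hitting the theta next to it.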
We already know how to do that: work with two-by-two matrices. For one variable, psi = sigma_plus works, because it squares to zero. For two variables we want them to mutually anticommute, and they would not if we naively reused the same matrices, so work with four-by-four matrices: the space with the tensor structure V_1 tensor V_2 of two two-state spaces. Then a representation is, for instance, psi_1 = sigma_plus tensor identity and psi_2 = sigma_3 tensor sigma_plus; I will get the details right in a minute. Each of these squares to zero because sigma_plus squares to zero, and the sigma_3 in the first slot is what makes the two anticommute with each other. So something like this works. And then, if you want to build many different psi's, you build them by tensoring in more and more of these factors. And if you try it out, you will be able to find representations of the derivative and the integral too, by appropriate traces of these operators multiplied by appropriate insertions. A student asks: why do we go to this bigger space? What is the point? What I want to do here is nothing deep; I just want to find a familiar representation of this algebra. We defined abstract objects that obey this algebra. Now, given an abstract algebra, one usually looks for representations of it: can you find some Hilbert space and some set of matrices that represent the algebra faithfully? And I am saying it is easy to do. The way I have set it up may not be the most efficient, but you can easily find matrix representations of the algebra.
But one thing to note: if you take this representation and enlarge it to more and more anticommuting variables, each time you add a new variable you tensor in a new two-state space. So the representation you build lives in a space whose dimensionality grows exponentially with the number of anticommuting variables. This concrete matrix representation will therefore not be easy to work with when you have a significant number of Grassmann variables. But if you implemented the correct version of this in Mathematica for, say, three variables, then any expression you have in terms of the psi's could be evaluated by Mathematica's matrix manipulation routines. So all I am saying is that these anticommuting variables are not that unfamiliar: you can just think of them as matrices. A student asks about the simplest attempt, in the single two-by-two space we started with: could we take one variable to be sigma_1 and the other sigma_2, since those anticommute? No, that does not work: sigma_1 and sigma_2 do anticommute, but they square to the identity, not to zero, so they represent a Clifford algebra rather than the Grassmann algebra we want; the generators must be nilpotent. Another student asks whether every operator on the space can be built as a function of these matrices; hold that question, I will come to it. And a question before that: suppose you want more anticommuting matrices. Suppose you could find four of them here.
If you can define four of them on this space, you will be able to build the algebra of psi_1, psi_2, psi_3, psi_4. But if you then use that space to build the algebra of just psi_1 and psi_2, it will presumably not be the most efficient choice. We could ask which representation is the most efficient; I have probably not done the most efficient thing here, and I may tell you next class what is. Let me instead go through it one variable at a time, since this was getting abstract. Start with one variable. psi = sigma_plus; that is clear. Any function of psi is a combination of 1 and psi, so we need only the identity, which is the representation of 1, and sigma_plus, which is the representation of psi; and sigma_plus squared is zero, as required. With one variable everything is very clear. Now let us do two variables. That is why we look at the tensor product. You might have thought: let us just reuse the one-variable answer in each slot, psi_1 = sigma_plus tensor identity and psi_2 = identity tensor sigma_plus. But that will not work, because we want the two psi's to mutually anticommute, and those two commute. That is easily fixed: all you have to find is one object in the first space that anticommutes with the sigma_plus there, and sigma_3 is an example of such an object. Therefore take psi_1 = sigma_plus tensor identity and psi_2 = sigma_3 tensor sigma_plus. This works, because the sigma_3 now makes sure the two mutually anticommute. If you wanted one more, say psi_3 in addition to psi_1 and psi_2, you pad the first two with identities, psi_1 = sigma_plus tensor identity tensor identity and psi_2 = sigma_3 tensor sigma_plus tensor identity, and for the new one you take psi_3 = sigma_3 tensor sigma_3 tensor sigma_plus.
Now, all of these square to zero, because sigma_plus squares to zero. And they all anticommute with each other: look at psi_2 and psi_3. In the first slot the two sigma_3's commute, in the last slot the identity and sigma_plus commute, and in the middle slot the sigma_plus and sigma_3 anticommute, so the pair picks up exactly one minus sign. I think this is probably the most efficient representation, and it is clear how this generalizes to n variables. Okay, so that is the representation for two variables. A student asks: could we also write psi_1 = sigma_plus tensor sigma_3 and psi_2 = sigma_3 tensor sigma_plus? No: if we did that, both slots would contribute a sign flip, and two flips cancel, so the pair would commute; you need an odd number of anticommuting slots, not both sigma_3's against both sigma_plus's. In any case, we are not saying the representation is unique; all I want is some representation that implements the algebra. Now notice, and I should have emphasized this, that you cannot build every operator on this space as a function of the psi's. Why not? Count dimensions. What is the dimension of the Hilbert space if we have n psi's? It is clearly 2 to the n; that is the dimension of the space the representation lives in. What is the operator space? The operators available are 2^n by 2^n matrices, so the space of operators has dimension 2^n times 2^n, which is 4^n. And what is the dimensionality of the space of functions you can build out of psi_1 through psi_n? Well, any function is a polynomial, a sum of monomials: psi_1 is either present or not, two options; psi_2 is either present or not, two options; and so on. So only 2^n independent functions, against 4^n operators on that space. Most operators cannot be built out of functions of the psi's, which is why this is a pretty inefficient representation; I do not think you can do dramatically better with plain matrices, but perhaps it can be approached. This is one representation, not a terribly efficient one.
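The tensoring construction just described, with a string of sigma_3's to the left of a single sigma_plus (often called the Jordan-Wigner construction), is easy to verify explicitly. Here is an illustrative NumPy check for the three-variable case (added here for reference; the helper name `kron_chain` is my own):

```python
import numpy as np

I2 = np.eye(2)
sp = np.array([[0.0, 1.0],
               [0.0, 0.0]])   # sigma_plus
s3 = np.diag([1.0, -1.0])     # sigma_3

def kron_chain(*mats):
    """Kronecker (tensor) product of a sequence of matrices."""
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

# The three-variable representation from the lecture:
# sigma_3 factors to the left of one sigma_plus, identities to the right.
psi1 = kron_chain(sp, I2, I2)
psi2 = kron_chain(s3, sp, I2)
psi3 = kron_chain(s3, s3, sp)

gens = [psi1, psi2, psi3]
for a in gens:
    assert np.allclose(a @ a, 0)              # psi_i^2 = 0
for i, a in enumerate(gens):
    for b in gens[i + 1:]:
        assert np.allclose(a @ b + b @ a, 0)  # {psi_i, psi_j} = 0
```

Note the dimension counting from the lecture is visible here: these are 8x8 matrices (a 2^3-dimensional space), while only 2^3 independent functions of psi_1, psi_2, psi_3 exist.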
If you make the space much larger, you just get more operators than you will ever need. If you play around, you will also be able to find representations of the integral and of the derivative operator in terms of traces. For instance, with one variable, the integral is obtained by multiplying by sigma_minus and taking the trace: Tr(sigma_minus (a 1 + b sigma_plus)) = b, since Tr(sigma_minus) = 0 and Tr(sigma_minus sigma_plus) = 1. So you can represent all these rules with matrices and traces; it is not a big deal. However, you do not need to: the rules stand on their own. And remember, never represent them in terms of matrices unless you get confused; if you get confused about what something means, the matrix representation is something to fall back on to check yourself, but otherwise it only slows you down. Okay, so let us move on. We started this whole discussion because I wanted to define a state, and for that we needed these Grassmann numbers. First, recall the properties of the operator psi-hat; I will put hats on operators to distinguish them from the Grassmann numbers. Do you remember psi-hat was the lowering operator? So psi-hat kills down, and psi-hat takes up to down. psi-hat is an honest operator on the actual two-state Hilbert space, and psi-hat squared is zero. The next thing I want to define, and I am sorry, I will take ten more minutes; does anyone have a class they have to go to? I will take ten minutes to finish this. The next thing we want to define is the state |psi>. I define it in the following way: |psi> is an eigenstate of psi-hat with Grassmann eigenvalue psi, so psi-hat |psi> = psi |psi>. We will find an explicit representation for this state. But even before we do that, let us find the analogue of the delta function for these objects. The claim is that the combination (psi' - psi) acts as a delta function. Let us just check it. The most general function of psi is a + b psi, since psi squared is zero. In the product (psi' - psi)(a + b psi), the terms that end up with no leftover psi integrate to zero, because int dpsi 1 = 0; only the terms with a single leftover psi survive.
So in int dpsi (psi' - psi)(a + b psi), the piece -psi times a survives and by our rules gives back a term proportional to a, and moving the dpsi through the Grassmann factors produces the signs that reassemble the b psi' term; altogether the integral gives back a + b psi' = f(psi'), up to an overall convention-dependent sign. So this is very similar to the bosonic delta function. Exactly. That is the reason we wanted to make this definition: the object (psi' - psi) is the fermionic representation of the delta function. So we want to find states that have this orthogonality property, like position eigenstates have <x|x'> = delta(x - x'). Now, I will use the usual inner product on the two-state space: up is orthogonal to down, <up|up> = 1, <down|down> = 1. Actually, for what I am doing here I do not really need that specific inner product, but it keeps things familiar, so I will use it. Let us then find an explicit representation of the states. A student asks: is <psi| the Hermitian conjugate of |psi>? No, and this may be causing the dispute: it is perhaps an abuse of notation, but <psi| is not the bra corresponding to |psi> under the usual inner product. Just think of |psi> and <psi| as two independently defined objects that happen to share a name. With the usual inner product, this <psi| is not the, what is it called, the dual corresponding to the ket; it is just some different state. Let us work it out. Write |psi'> = |down> + psi' |up>, which you can check is an eigenstate: psi-hat |psi'> = psi-hat psi' |up> = psi' |down> (up to the sign convention for moving psi-hat past a Grassmann number), while psi' |psi'> = psi' |down>, since psi' psi' = 0. For the bra, write <psi| = a <up| + b <down| with Grassmann coefficients and demand <psi|psi'> = delta(psi - psi'). The first requirement fixes the <up| coefficient so that the psi' term comes out right, and the second requirement fixes the relative minus sign; I fumbled which slot carries the minus on the board here, so let me just record the net statement: with the usual inner product, <psi| = <up| - psi <down|, with the signs fixed precisely by demanding <psi|psi'> = psi - psi' in our conventions. In order to check a claimed identity of this kind, what I am going to do is act it on states: if it is true on every state, it is true. So let us act on the coherent states, remembering that <psi| is just some function of psi built on <up| and <down|. Now, to the matrix elements <psi| ... |psi'> of the two operators, psi-hat and chi-hat.
First the operator psi-hat. That is very easy, because we know that psi-hat |psi'> = psi' |psi'>, so <psi| psi-hat |psi'> = psi' <psi|psi'>. The question is how it works for chi-hat. The claim is that <psi| chi-hat |psi'> is a Grassmann integral: int dchi chi times the exponential of chi (psi' - psi), up to an overall conventional sign. Let us do the left-hand side. We have <psi| chi-hat acting on the definition of |psi'>, which was |down> + psi' |up>. Remember what chi-hat does: chi-hat |down> = |up>, and chi-hat on the psi' |up> piece gives zero, because chi-hat kills up. So chi-hat |psi'> = |up>. And then what we need is <psi|up>, which you can check is -1 in our conventions: check it either by using the explicit form of <psi|, or by expanding <psi| and |psi'> in components and equating. So the left-hand side is a fancy way of saying -1. I am sorry this sounds very unmotivated; bear with me for another five minutes and we will see where it leads. The right-hand side is also a very fancy way of saying -1. Let us check that. Expand the exponential: it is 1 + chi (psi' - psi), and the Taylor expansion stops there; that is all the exponential of a Grassmann bilinear means. So the right-hand side is (minus) int dchi chi [1 + chi (psi' - psi)]. The chi times 1 piece gives int dchi chi = 1, with the overall minus making -1, and the chi chi piece is zero. So the two sides agree. One should also verify the companion formula without the chi insertion: <psi|psi'> = (minus) int dchi of the exponential of chi (psi' - psi). Let us check that it equals what it should. Can somebody remind me, was <psi|psi'> equal to psi - psi' or psi' - psi? Psi minus psi prime. Expanding the exponential, the constant piece is (minus) int dchi 1, but that is zero, because the integral of 1 vanishes.
So only the linear term survives: (minus) int dchi chi (psi' - psi) gives -(psi' - psi) = psi - psi', which is indeed <psi|psi'> in our conventions. What have we computed? Since any function purely of chi-hat is a linear function, because chi-hat squares to zero as an operator, we have concluded that <psi| f(chi-hat) |psi'> is computed by a Grassmann integral of f(chi) against the exponential weight, and of course the much more trivial statement that <psi| g(psi-hat) |psi'> = g(psi') <psi|psi'>. Is this clear? What should you be thinking of? You might remember that when we discussed bosonic quantization, we had position eigenstates <x| and |x'>, and the question was what to make of a function of the momentum operator. If we had some function of the momentum operator, f(p-hat), what we did was insert a complete set of momentum eigenstates: then <x| f(p-hat) |x'> = int dp f(p) e^{ip(x - x')}, with the measure normalization absorbed. So we had the formula that any function of momentum between two position eigenstates equals that function of p integrated against e^{ip(x - x')} dp. We have got an extremely similar formula here: any function of chi-hat between two psi-eigenstates equals int dchi f(chi) times the exponential of chi (psi' - psi), up to our sign conventions. Now, since we have gone so late, I am going to leave the rest as an exercise. I will tell you what to do; please do it before next class, and I will tell you my answer then. What we want to do is now clear. We have got this formula for the chi-hat dependence, and the trivial one for the psi-hat dependence. We do exactly what we did for the bosonic case: we break time up into small steps t_0, t_1, ..., and break the evolution operator up into a product over n of e^{-i H epsilon}, exactly like we did. Suppose the matrix element we want to evaluate is between such coherent states; call the external labels accordingly.
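For reference, the parallel between the bosonic and fermionic formulas just stated can be collected side by side (the plus-or-minus in the fermionic lines tracks the ordering conventions discussed above):

```latex
\begin{aligned}
\langle x | f(\hat p) | x' \rangle &= \int \frac{dp}{2\pi}\, f(p)\, e^{ip(x-x')},
&\qquad \langle x | x' \rangle &= \delta(x-x'),\\[2pt]
\langle \psi | f(\hat\chi) | \psi' \rangle &= \pm\int d\chi\, f(\chi)\, e^{\chi(\psi'-\psi)},
&\qquad \langle \psi | \psi' \rangle &= \psi-\psi' \;=\; \delta(\psi-\psi'),\\[2pt]
\langle \psi | g(\hat\psi) | \psi' \rangle &= g(\psi')\,\langle \psi | \psi' \rangle .
&&
\end{aligned}
```

In both rows the exponential factor is what, chained over adjacent time slices, assembles into the kinetic term of the path integral.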
Now we break the evolution into these small steps, and at each step we insert the identity in the form of a Grassmann integral over |psi><psi|, with the appropriate measure and signs. At each time step, H is a function of chi-hat and psi-hat, and because the time steps are infinitesimal, we can split e^{-i epsilon H} so that all the chi-hats stand on one side and all the psi-hats on the other, to first order in epsilon. You do that, and then use the relations we derived: for the psi-hat dependence, <psi| f(psi-hat) |psi'> = f(psi') <psi|psi'>, and for the chi-hat dependence, the Grassmann integral formula. So what do we get? At each step we have to introduce an integral over psi for the completeness relation and an integral over chi to represent the matrix element of the chi-hat dependence. So at each time step there is an integral over psi and an integral over chi. The chained exponential factors, just like before, are going to become the discretization of chi psi-dot. As for signs, I have to keep very careful track of them. What you are going to get is int dpsi_1 ... dpsi_n, dchi_0 dchi_1 ... dchi_n, of an exponential of the sum over steps of (chi psi-dot minus H(chi, psi)), with the factors of i and the minus signs inherited from e^{-iHt}, about which I am being cavalier. Up to that plus-or-minus bookkeeping this is correct, and checking the bookkeeping is exactly what I am asking you to do today. This is the path integral representation for the two-state system. Now, because the two-state action is first order in time derivatives, there is actually no distinction here between the Hamiltonian and Lagrangian formulations. A student asks what that means. The point of the Hamiltonian formulation in classical mechanics is to take Lagrangian classical mechanics, which is second order, and make it first order.
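Schematically, the construction just described can be summarized as follows (a sketch only: the epsilon factors, the i's, and the overall signs are exactly the bookkeeping left as the exercise):

```latex
\langle \psi_f | \, e^{-iHT} \, | \psi_i \rangle
\;\sim\;
\int \prod_{n} d\psi_n \, d\chi_n \;
\exp\!\Big[\, i \sum_n \epsilon \,\Big( \chi_n \,\frac{\psi_n - \psi_{n-1}}{\epsilon} \;-\; H(\chi_n, \psi_n) \Big) \Big]
\;\;\xrightarrow{\;\epsilon \to 0\;}\;\;
\int D\chi \, D\psi \;\; e^{\, i \int dt \,\left( \chi \dot\psi \,-\, H(\chi, \psi) \right)} .
```

Note the first-order structure: chi multiplies psi-dot, but chi itself appears with no time derivative, which is the point made next about Hamiltonian versus Lagrangian form.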
If your equations of motion are already first order, there is nothing left to do. So this is the final answer for this system. It is not like the bosonic case, where we got an answer of this form and then integrated out the momentum. We are not going to try to do anything more about it; we just stop here. This is the thing I am going to make contact with: the path integral representation of the Dirac action. What is the basic idea? Here you have two anticommuting variables, one of which appears with a time derivative and the other does not — that is first order. So what is the translation formula? The translation formula is that this path integral implements a transition amplitude on a Hilbert space. What is the Hilbert space? The Hilbert space that implements the algebra of χ̂ and ψ̂. What is the Hamiltonian on that Hilbert space? The H(χ̂, ψ̂) that appears in the exponent. Now, we have done this for one two-state system. But if you had two two-state systems, you could do something extremely similar, except you would have a path integral over ψ₁ and ψ₂, χ₁ and χ₂. And once again you would represent the algebra, which would now read {χ̂_m, ψ̂_n} = δ_mn. This would work for two two-state systems, and it would work for M two-state systems. But as you know, the Dirac Lagrangian — at least the free Dirac Lagrangian — when written in momentum space, is precisely of this form: an infinite number of such pairs, one for each momentum mode. Or even in a position basis, it is precisely of this form: it has two anticommuting variables, first order in time, playing the roles of ψ and χ. So what we can do is the following — we will come to the details in a second.
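To make the multi-mode algebra {χ̂_m, ψ̂_n} = δ_mn concrete, here is a sketch of one way to realize it for two modes. The Jordan–Wigner-style construction below is my choice (the lecture does not spell out a representation), and I take χ_n to be realized as the adjoint of ψ_n, which is one consistent option:

```python
# Two "two-state systems" at once: 4x4 matrices psi_n and chi_m obeying
# {chi_m, psi_n} = delta_mn and {psi_m, psi_n} = {chi_m, chi_n} = 0.
import numpy as np

sz = np.diag([1.0, -1.0])
lower = np.array([[0.0, 1.0],
                  [0.0, 0.0]])       # single-mode lowering operator
I2 = np.eye(2)

# The sz string on earlier sites makes different modes anticommute.
psi = [np.kron(lower, I2), np.kron(sz, lower)]
chi = [p.conj().T for p in psi]      # assumption: chi_n = psi_n^dagger

def anti(a, b):
    return a @ b + b @ a

for m in range(2):
    for n in range(2):
        assert np.allclose(anti(chi[m], psi[n]), (m == n) * np.eye(4))
        assert np.allclose(anti(psi[m], psi[n]), 0)
        assert np.allclose(anti(chi[m], chi[n]), 0)
print("{chi_m, psi_n} = delta_mn verified on two modes")
```

Nothing in the construction cares that there are only two modes; the same kron-with-strings pattern extends to M modes, which is the structure the Dirac field has mode by mode.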
Sir, can we just forcibly do the χ integral and, in principle, get the Dirac action back? — No, no, the Dirac action is already of this first-order form; let me just go through it. Suppose we have the action S = ∫ d⁴x ψ̄ (i∂̸ − m) ψ. If I do the path integral ∫ Dψ̄ Dψ e^{iS}, then it is working on the space that implements the following anticommutation relations: ψ̄ plays the role of χ, and ψ plays the role of ψ. As for the γ⁰ in the kinetic term — you know the usual thing — the γ⁰ in ∂̸ kills the γ⁰ in ψ̄: since ψ̄ = ψ†γ⁰, we have ψ̄γ⁰ = ψ†. So the conjugate combination is really ψ†, and the algebra is {ψ(x), ψ†(y)} = δ³(x − y). If we implement this with ψ̄ and ψ, we have one anticommuting pair — one path integral — at each point of space, separately for ψ̄ and ψ. And then there is also the mass term to deal with; we will come to all of that. But this is precisely the anticommutation relation you used when you quantized the Dirac field in your first study of this subject. So what we have concluded is that the path integral representation of the Dirac action tells us that what you did there was the correct thing to do: you take this path integral to be a path integral over anticommuting variables. Is this clear? This is the main point I want to make, because I am running slightly over. Okay. I think I'll stop here for questions. Sir, the difference between this and what we did earlier is that earlier we simply assumed the anticommutation relations we had written down — the equal-time anticommutators — and then worked everything out, whereas here they come out as a result of the path integral? — Yes. Our analysis of the Dirac system is similar to our analysis of the electromagnetic system. What we are doing is this.
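The step "the γ⁰ in ∂̸ kills the γ⁰ in ψ̄" relies on (γ⁰)² = 1, which is part of the Clifford algebra {γ^μ, γ^ν} = 2η^{μν}. As a check (my sketch, using the standard Dirac representation and the convention η = diag(+1, −1, −1, −1), neither of which the lecture fixes explicitly):

```python
# Verify the Clifford algebra {gamma^mu, gamma^nu} = 2*eta^{mu,nu}
# and the identity (gamma^0)^2 = 1 used to trade psi-bar gamma^0
# for psi-dagger.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
z2 = np.zeros((2, 2), dtype=complex)

def offdiag(a, b):
    """Build the 4x4 block matrix [[0, a], [b, 0]]."""
    return np.block([[z2, a], [b, z2]])

g0 = np.block([[I2, z2], [z2, -I2]])
gammas = [g0, offdiag(sx, -sx), offdiag(sy, -sy), offdiag(sz, -sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))

assert np.allclose(g0 @ g0, np.eye(4))  # so psi-bar gamma^0 = psi-dagger
print("Clifford algebra verified")
```

With (γ⁰)² = 1 in hand, ψ̄γ⁰ = ψ†γ⁰γ⁰ = ψ† follows immediately, which is why the canonical pair in the kinetic term is (ψ†, ψ) rather than (ψ̄, ψ).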
We are saying: I give you this path integral, and I tell you it is a path integral over anticommuting variables. Then I ask: what is the right Hilbert space interpretation of this path integral? Since we understand how to interpret the path integral over a single anticommuting pair, it is a simple generalization of that. So we know that this path integral, when sliced, gives us the Hilbert space that implements this algebra, and on that Hilbert space it computes a transition amplitude. What is the transition amplitude? It is simply e^{−iHT}, where H is that part of whatever enters the exponent that is independent of the time derivatives. So this is a standard analysis of the path integral. And when you worked out the canonical quantization of the Dirac action, this was exactly your answer. So what we have done is to start with this path integral, slice it, find its Hilbert space interpretation, and find that it is exactly what you are familiar with. In my way of thinking, the path integral is often both more convenient and, in some ways, actually more fundamental than starting with canonical quantization. But a path integral defines a quantum theory, and you cannot be satisfied with just saying that it defines it. There is a very important structure in quantum mechanics, namely the Hilbert space. So if you write down a path integral, the first thing you have to ask is: what Hilbert space does that path integral act on, and what is the Hamiltonian there? What we did here was to write down the path integral over the exponential of i times the Dirac action, declare that as our definition of the theory, and then ask: what system is it? What is the Hilbert space? What is the Hamiltonian?
And we found that the answer is exactly what you studied when you first quantized the Dirac field. Okay, so this is the path integral representation of the Dirac action. We will talk a little more about the Dirac case next class, and then we will discuss non-abelian gauge theories. With that we will have completed a survey of the kinds of path integrals that appear, and then we will go on to actually studying the path integral. Any other questions? Yeah.