That was rather roughly interrupted last time, so let me remind you where we were. We were discussing φ⁴ theory in 4 − ε spacetime dimensions. We had derived the beta function for the φ⁴ coupling: dg/dt = εg − (3/16π²)g², where t is the RG "time" of our earlier discussions, flowing towards the infrared. You remember this? You remember we plotted the flow lines: near g = 0 the flows went outward, simply because the εg term is positive, and near the nontrivial zero of the beta function, g* = 16π²ε/3, they went inward. Okay. That was the story for the quartic coupling g₄. There was also the mass-squared coupling, and for that one the flow lines went like this: it's clear that, to order ε, the mass-squared term — the relevant, dimension-near-2 operator — keeps a scaling eigenvalue equal to its classical value plus a correction of order ε. So the flow lines in the (m², g) plane looked as we drew them. The main object of interest was the new fixed point — the Wilson–Fisher fixed point — and we had just started discussing why it is interesting. Well, it's interesting from many points of view, but we were discussing why it is interesting from the point of view of statistical physics. We described how several phase transitions — for instance the phase transition of the three-dimensional Ising model, regarded simply as a classical statistical system, as a function of the coupling in H ∝ −J Σ Sᵢ Sⱼ — are plausibly governed by it.
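As an aside, the one-loop beta function recalled here, dg/dt = εg − 3g²/(16π²), can be integrated numerically to watch the flow into the Wilson–Fisher fixed point. This is a toy sketch; the value of ε and the step sizes are my own choices, not from the lecture.

```python
import numpy as np

# One-loop beta function for phi^4 theory in d = 4 - eps dimensions,
# with t the RG "time" flowing towards the infrared:
#   dg/dt = eps*g - 3*g**2 / (16*pi**2)
# Zeros: g = 0 (Gaussian, IR-unstable) and g* = 16*pi**2*eps/3
# (Wilson-Fisher, IR-stable).

EPS = 0.1  # illustrative value of eps = 4 - d

def beta(g, eps=EPS):
    return eps * g - 3.0 * g**2 / (16.0 * np.pi**2)

def flow(g0, t_max=200.0, dt=0.01, eps=EPS):
    """Integrate dg/dt = beta(g) with a simple Euler step."""
    g = g0
    for _ in range(int(t_max / dt)):
        g += dt * beta(g, eps)
    return g

g_star = 16.0 * np.pi**2 * EPS / 3.0
# Any small positive initial coupling flows into the Wilson-Fisher fixed point:
print(flow(0.5), g_star)
```

Starting from any small positive coupling, the Euler flow settles onto g*, reproducing the inward flow lines drawn on the board.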
Okay. For instance, that model has a phase transition which plausibly — when you're in the neighborhood of the transition point — is governed, by some kind of block spinning, by a continuum description. Plausibly, the fact that S is discrete doesn't matter: you get an effective continuous field. So it's governed by some path integral over a scalar field in three dimensions. We know nothing else about it; it's just some path integral over a scalar field in three dimensions. Now, what do we know about this model? We know from various means — numerical simulations, mean-field methods, and so on — many things about what happens as you change the coupling. The model was simply H ∝ −J Σ Sᵢ Sⱼ over nearest neighbors. And as you change this J, we know that there's a phase transition point: we go from a phase in which the Sᵢ have an expectation value to a phase in which the Sᵢ have zero expectation value — a magnetization transition. And we know that the phase transition itself is second order. In the neighborhood of that phase transition there's no length scale: things vary over very large distances. And it's only the neighborhood of that point that can plausibly be described by a continuum theory. Because any theory in which your correlation functions die off at, say, a few lattice spacings will be very far from the continuum — such a theory knows all about the lattice. Whereas something whose correlation functions die off over a million lattice spacings should be well approximated, up to corrections of order one in a million, by a continuum theory. Okay. So it's the neighborhood of the critical point that should be well described by a continuum theory, and we're going to try to identify which one. Can we get it a priori? Well, no — it's very hard to know without doing some analysis.
But, for instance, if you do a mean-field analysis of the Ising model, it very simply shows that the transition is second order. [Student: Can you just see that directly?] No, you can't. You see, given a lattice model, even to know whether there's a phase transition at all, you have to do some analysis. [Student: Why do we care whether it's second order?] Because the only situation in which the phase transition will be described by a continuum field theory is when the transition is second order. Why is that? Let me remind you. In the Landau–Ginzburg picture, a first-order phase transition is something like this. You've got a minimum of the effective potential — an effective quantum potential as a function of φ. As you change a parameter, a second minimum appears at a higher value, comes down, at some point becomes degenerate with the first, and then becomes lower — steering the transition from this minimum to that one. Theories like to sit at the minimum of their effective action; as you change your parameter, this effective potential changes, and you make a transition from the theory sitting at one minimum to the theory sitting at the other. This is a first-order phase transition, also called a discontinuous phase transition. Why discontinuous? Because the configuration, φ, jumps discontinuously — from 0 to something nonzero, say — as you cross the transition point. And this leads to a discontinuity in a derivative of the free energy: the effective potential at its minimum is the free energy, so the free energy itself is continuous across the transition, but its first derivative is not. That's what "first order" means. Now, of relevance to us is the following feature of a first-order transition: each of the two phases is, locally, a massive phase.
Both phases that you transition between are massive phases — nonzero curvature of your potential at the minimum. So there is no particular reason to expect, at a first-order transition, any scale invariance in your problem beyond the scales you put in at the transition point. Now let me remind you, in contrast, of the second-order phase transition. There the potential does this: at the phase transition point the φ² term goes to zero, so the potential is just φ⁴, and then the minimum moves off continuously. Really near the phase transition — just past it — ⟨φ⟩ has changed only a little bit away from zero; there is no jump. At such a transition it's the second derivative of the free energy that is discontinuous (or logarithmically divergent). But much more important to us is the fact that the phase transition happens via the appearance of a massless mode: the mass squared goes through zero. So at the transition, the theory of this scalar — once you've integrated out all the other, massive stuff — will have no length scale. So it's only at second-order phase transitions in a lattice system that we can plausibly have a massless continuum theory. It's clear that at first-order phase transitions all correlation lengths are basically finite in lattice units, and so there's no possibility of a continuum limit. Okay? [Student: Is this the same statement as the Goldstone theorem?] No. You see, here we have a massless theory even though there is no continuous symmetry — very different statements. In fact, in this particular case, in the symmetry-broken phase there is no massless mode at all. What is the Goldstone theorem? The Goldstone theorem tells you that when you break a continuous symmetry, in the symmetry-broken phase you must have massless modes. Why is the Goldstone theorem true? Because you've broken your symmetry, so there's some symmetry direction.
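The Landau–Ginzburg contrast just described can be sketched with toy potentials. This is my own illustration, with made-up coefficients: a quartic potential whose minimum moves off continuously (second order) versus a sextic one whose global minimum jumps (first order; for this normalization the jump happens at r = 3/16).

```python
import numpy as np

# Toy Landau-Ginzburg sketch (illustrative coefficients, not a specific model).
# Second order: V2 = r*phi^2/2 + phi^4/4   -> minimum moves continuously off 0.
# First order:  V1 = r*phi^2/2 - phi^4/4 + phi^6/6
#               -> global minimum jumps discontinuously at r = 3/16.

phi = np.linspace(0.0, 2.0, 20001)

def minimizer(V, r):
    """Grid-minimize V(phi, r) over phi >= 0."""
    return phi[np.argmin(V(phi, r))]

V2 = lambda p, r: r * p**2 / 2 + p**4 / 4
V1 = lambda p, r: r * p**2 / 2 - p**4 / 4 + p**6 / 6

# Second order: <phi> grows continuously, like sqrt(-r), below the transition.
print(minimizer(V2, -1e-4))        # ~ 0.01
# First order: <phi> jumps discontinuously as r crosses 3/16.
print(minimizer(V1, 3/16 + 1e-3))  # 0.0 (symmetric minimum still global)
print(minimizer(V1, 3/16 - 1e-3))  # ~ 0.87 (broken minimum now global)
```

The first derivative of the free energy (the value of V at its minimum) jumps in the first-order case, while in the second-order case only the curvature does — matching the distinction drawn above.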
That direction is flat — it's a symmetry of the theory — and the fluctuation along it is a massless mode. Okay? Here, the only symmetry around is the discrete one: φ goes to −φ, S goes to −S. There is no continuous symmetry, hence no Goldstone theorem, and indeed in the symmetry-broken phase there is no massless mode. Nonetheless, there is a massless theory at the phase transition. And what's very important here is that this masslessness happens only as you tune J to J_c: it requires tuning one parameter in the space of UV theories. Near the phase transition point — but not exactly at it — there is an almost-massless effective continuum theory. Now, let's set up notation. Call the inverse lattice scale λ₀; we're measuring physics at some scale λ, always much smaller than λ₀, since it's a continuum question. So let's say we sit at, you know, a thousand times the lattice spacing, so that there's some effective continuum description. To actually compute that description you would have to take the block-spinning procedure through, step by step — a thousand, a million steps. It would be very tough, but it exists in principle. What does that mean? It means that at scale λ, the parameters of the effective action — I think we called them xₙ — are definite functions of J: xₙ = xₙ(J). We don't know what they are; they're functions of one parameter. As we change J, we're drawing some line in the space of all these couplings, and at one point on this line we cross from phase 1 to phase 2. What does this mean?
This means that, unless there's a terrible conspiracy going on, the space of flows has the property that there is a codimension-1 surface such that flows starting on it lead to a massless theory — because the continuum theory at the phase transition itself is massless. Okay? So your renormalization group flows lead to a massless theory from a codimension-1 surface of initial conditions. Is this clear? Okay. Say we start with this description and flow down to much larger distances. What do we know? In the neighborhood of that surface we've got an almost-massless theory — the correlation functions die off very slowly. And at J = J_c we've got an exactly scale-invariant theory. So the low-energy description, when J equals J_c, is a fixed point. So what does that mean? That once we set J = J_c and run the renormalization group flow to large distances, we will have reached the fixed point. Is this clear? So there is a codimension-1 surface in the space of renormalization group flows which, on flowing down to low energies, hits the fixed point. Now let's try to relate this to what we know about fixed points. In the neighborhood of a fixed point, what is the codimension of the set of flows that reach it? The only flows that don't reach the fixed point — the only directions that carry you away from it — are the relevant directions. So the codimension of the space of flows that hit the fixed point exactly is the number of relevant operators at that fixed point. Is this clear? Codimension — sorry, this is a string theorist talking; I hate it when people use words without explaining them. Codimension of a space of flows: suppose you are in a space of dimension n.
You've got a submanifold of dimension m; its codimension is n − m. So the codimension of a line in a plane is 1 — and the dimension of the line in the plane is also 1. The codimension of a line in 3-space is 2, while its dimension is 1. The codimension of a plane in 3-space is 1. A codimension-1 surface is something that can be given by one equation — it's just one condition. Is this clear? Okay. Now, why do I say that a one-parameter tuning generically lands you on a codimension-1 surface? Let's explain. Suppose we've got two lines in a plane: generically they meet at a point. On the other hand, two lines in 3-space generically do not meet. But a line and a plane in 3-space generically do meet at a point. You see, there's a general rule: if you've got two submanifolds, of dimensions n₁ and n₂, in a space of dimension n, generically you would expect them to intersect only if n₁ + n₂ ≥ n. In our case one of the n₁'s is 1, because we've got a line. What was the line? The line of initial conditions parametrized by J. So we've got one line and some other surface, and we know that they meet at a point — the transition happens. It's just like 1 + 2 in 3 dimensions: that surface should have dimension d − 1. Now, in our case the d here is infinite, and I don't want to say "infinity minus one" — but "codimension 1" is perfectly fine. Could the line instead meet a codimension-2 surface? It can, but it requires a coincidence — two tunings. Suppose I draw a random line, and then another random line, with my eyes shut: in a plane they will cross; in 3 dimensions they will not. You can see this by equation counting. How many equations do you need to specify a line in 3 dimensions? Two equations. Can you see that?
One equation specifies a plane, and another equation on that plane specifies a line — so one line is specified by 2 equations. The second line is specified by 2 more. The total number of equations is 4, and the total number of variables is 3. Generically, 4 equations in 3 variables have no solution, which means two generic lines in 3-space do not meet. This kind of equation counting tells you what to expect generically. In d dimensions you require d − 1 equations to specify a line, but only one equation to specify a codimension-1 surface. So d − 1 plus 1 is d equations in d variables: generically, there are solutions. [Student: In the total space of RG flows, how do we identify the surface?] There are two different things here: the dimensionality of the space of flows, and the codimension of the surface. The space of UV couplings we start from could be anything — it is infinite-dimensional; flows are labeled by an infinite number of couplings. Okay? Now, suppose you've got a fixed point such that all operators except one are irrelevant. To reach that fixed point, you have to tune so that the one relevant coupling is zero — one condition, a codimension-1 attracting surface. If instead the fixed point had two relevant operators, you would need two tunings, and a one-parameter family of initial conditions would generically miss it. Actually, that was the point. But you see, at the moment all I've said is this: we know that the Ising transition is described by a fixed point with one relevant operator. Why do we know that? We know that because we have to do just one tuning to reach it. Now, you could ask: which of the fixed points we found could be the fixed point that describes the phase transition?
Could it be the Gaussian fixed point? No — because in three dimensions the Gaussian fixed point has two relevant operators, the mass term and the quartic coupling. We want the fixed point with only one. Therefore the Ising model at its phase transition point is presumably governed by the Wilson–Fisher fixed point. Now, most of the later lectures will be dedicated to the study of this model in the large-N limit, where we can solve it exactly. But before doing that, I wanted to go through some of the physics. The statistical-physics people all know what I'm about to say, but somehow one shouldn't remain uneducated about phase transitions, so let me go through it. You know, in the neighborhood of a phase transition you find many interesting power laws. People who study phase transitions measure them, and in the 1960s one of the goals of the theoretical understanding of phase transitions was to explain these power laws — funny, interesting powers, and universal powers, in a sense that I'm going to explain. So, to be clear, this was a problem crying out for an explanation, because the exponents were universal; and it was not trivial, because the powers weren't one-half or one. Something interesting was going on. Where do these powers come from? Let me explain. Look at a blow-up of the neighborhood of the fixed point, forgetting everything else. Okay? So this is our fixed point, and the flows go like this; this is the attracting surface. If you start out on it, you hit the fixed point, and your physics in the infrared is the fixed-point physics. Now at this point, suppose we've got some value of J: we've got a line in the space of all initial conditions, parametrized by J, and δJ is the distance from J_c.
This line passes near the attracting surface. Now suppose we compute the correlation length. What is the correlation length? You compute a two-point function — say ⟨φ(0)φ(r)⟩ ∼ e^(−r/ξ). ξ is called the correlation length: the length scale over which correlations die off. Okay? This correlation length is some function of J. Now, we know that at J = J_c it diverges. Question: suppose J equals J_c plus δJ. How does ξ behave as a function of δJ? (Actually, I'm a little challenged with Greek letters — whatever Greek letter I write on the board, let's agree to call it ξ. That's what happens.) [Student: Isn't ξ fixed once you fix the model?] J is what you have in your model — the parameter in your model. Now, as you change J, the description of your model changes, including its correlation length. Okay, so what we're asking is: ξ as a function of δJ. It's clear that it diverges as δJ goes to zero. But how? That's the question we're going to try to answer — and understanding the renormalization group is what helps us answer it. That's what I'm going to try to explain. You see, as we change J away from J_c by δJ, what we're doing in particular is starting the flow slightly off the attracting surface. The flow is something complicated, but once we follow it to the neighborhood of the fixed point, all the irrelevant stuff has died away; the remaining flow is simply along the one relevant direction. Is this clear? So: we take J, change it from J_c by δJ, and then we wait — flow down until we reach scale λ. What do we want to know? Well, what we know in general is this.
Let's call the coefficient of the relevant operator x₂. We know that, near the fixed point, x₂(λ) ≈ A (λ₀/λ)^(3−Δ), where Δ is the dimension of the relevant operator. (I'm not using the standard statistical-physics notation here — there is a particular traditional symbol for this power which I don't remember — so I will define our own: the exponent is 3 minus the dimension of the relevant operator, which is what sets how its coupling scales.) Once we wait long enough — once λ becomes much smaller than λ₀ — we know x₂ takes this form. Now, the coefficient A is going to be some function of δJ. We don't suspect that it's anything but linear, because nothing funny is happening at the lattice scale: when δJ was zero, A was zero, and when δJ is nonzero, since there's no non-analyticity of any sort in the UV data, the generic expectation — and you can work out simple toy models to see that it would take something very special to violate this — is, just by the logic of Taylor expansion, that A is a linear function of δJ. So A = (const) × δJ. It's just the Taylor assumption: the most general analytic function that vanishes at δJ = 0 starts linearly, and there's nothing non-analytic in the flow. Now this is interesting — interesting for what it does to the formulas. You see what this tells us: if I started with two different values of δJ, but I wanted to arrive at the same value of x₂, then I would have to change my λ. (λ₀ is the lattice scale; that's not adjustable.) If I want the renormalized theory to be the same theory — the theory being defined by its x₂ — I have to change λ. How do I have to change λ?
λ has to be chosen so that δJ × (λ₀/λ)^(3−Δ) stays fixed; that is, λ is proportional to δJ^(1/(3−Δ)). It's going in the right direction: as we make δJ smaller and smaller, λ becomes smaller and smaller. Is that reasonable? Yes — because if δJ were 0, we'd start exactly on the attracting surface, and you'd have to flow for an infinite amount of time to generate any nonzero x₂. As we make δJ smaller and smaller, we start nearer and nearer the fixed manifold, so we have to flow longer and longer before we leave its neighborhood. Okay. Now remember that under our renormalization the two theories are the same — they were always the same theory — but with all lengths measured in units of 1/λ. So when we've got to the same point in the renormalization group flow, the two correlation lengths agree in units of 1/λ. What we have is that ξλ — ξ is a length, so ξ times λ is dimensionless — is some function of x₂ alone, and we've arranged that x₂ is fixed. So ξλ is some number; we don't know what that function of x₂ is, but ξ is proportional to 1/λ, and therefore ξ ∝ δJ^(−1/(3−Δ)). We conclude: as we change the parameter of our model a little away from its critical value, the correlation length diverges like this power of δJ — and this exponent is completely determined by the scaling dimension of the unique relevant operator. The exact scaling dimension: the number that appears in the renormalization group equations linearized around the fixed point, dG/dt = (3 − Δ)G + … . [Student: Aren't we dropping corrections? If you write dG/dt as something proportional to G — doesn't that something depend on t?]
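The scaling argument just given can be checked in a toy numerical sketch. The dimension Δ below is a made-up value, and ξ is modeled directly by the formula derived here: flow until x₂ reaches a fixed reference value and read off ξ ∼ 1/λ.

```python
import numpy as np

# Toy check of the scaling argument (Delta is a hypothetical value):
# the relevant coupling grows towards the IR as x2(lam) = dJ*(lam0/lam)**(3-Delta).
# Define xi as 1/lam at the scale where x2 reaches a fixed reference value;
# the claim is xi ~ dJ**(-nu) with nu = 1/(3 - Delta).

DELTA = 1.6              # hypothetical dimension of the relevant operator
LAM0, X2_REF = 1.0, 1.0  # lattice scale and reference value of x2

def xi(dJ):
    """1/lam at the scale lam where x2(lam) = X2_REF."""
    lam = LAM0 * (dJ / X2_REF) ** (1.0 / (3.0 - DELTA))
    return 1.0 / lam

# Extract the exponent from two nearby values of dJ:
nu = -np.log(xi(1e-4) / xi(1e-3)) / np.log(1e-4 / 1e-3)
print(nu, 1.0 / (3.0 - DELTA))  # the two numbers agree
```

The extracted exponent matches 1/(3 − Δ), independent of the reference value X2_REF — which is the statement that the divergence of ξ is fixed by the one relevant dimension alone.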
No — nothing depends explicitly on t. [Student: But couldn't the proportionality function depend on t?] No, no, no — that's a very important point. The renormalization group equations are local in scale: dG/dt, at a given scale, is a function only of G. There's no memory. Once you know where you are, you know how to move. It's a very important structural feature of the renormalization group; we discussed it in one of the early lectures. dG/dt = β(G) is an exact statement in the exact, infinite-dimensional space of couplings G. Excellent. So: we did this for the correlation length. By the way, you see that this is the general method. People are interested in computing, say, specific heats. You would do exactly the same analysis; you just have to know the dimensionality of the quantity you're interested in computing. That quantity, scaled by λ to the appropriate power — the power given just by its scaling dimension — is fixed along the flow. Therefore the quantity behaves like λ to the appropriate power, and then λ is replaced by δJ to the power 1/(3 − Δ), as before. For example, the correlation length was an object with the dimension of length, so it scaled like 1/λ. But we could do it for the specific heat: you work out the dimensionality of the specific heat, run the same argument, and you get an answer. All of these quantities are controlled by one number: the exact scaling dimension of the one relevant operator. [Student: Doesn't the answer depend on the observable?] It depends on the observable, but only through its dimensionality. Because the correlation length was a length, it scaled like 1/λ; something of dimension 2 would scale like λ², and then you replace λ by the power of δJ. Now, this covers a large range of observables. But there is one more kind of observable that is typically measured —
— that this analysis does not cover. Those observables involve one- or two-point functions of the spin itself: the magnetization. As we take J a little beyond J_c, as we have seen, the spin operator itself develops an expectation value, and once again you can ask: how does the magnetization vary as a function of δJ? For this quantity, what we are asking is: what is ⟨φ⟩ as a function of δJ? And in computing this quantity there is one further subtlety — one that I have referred to but not stressed enough. It is this. When we did the renormalization group flow, I talked about operators like m²φ² and operators like φ⁴, but I never talked about the operator (∂φ)². Somebody may remember that I mentioned this briefly — can anyone say why I never treated it as an independent coupling? It is because, when you are doing a path integral, you are always free to change variables: in successive steps of the renormalization group you are always free to rescale φ if you want, and this rescaling can always be done so as to set one number to one. Since even in the free theory the kinetic term controls all the physics, it is conventional to use the rescaling freedom, along with our renormalization group flows, to set the coefficient of (∂φ)² always equal to 1: we integrate out a shell, which corrects the (∂φ)² term, and then we rescale φ to get rid of the correction. So once we have accounted for that, quote-unquote, trivial fact, there is one less operator in the list of couplings — that is why we never listed (∂φ)². Now, this really is a trivial bookkeeping fact for anything measurable that does not involve φ itself. But of course, if you want to compute something that does involve φ, it is not trivial: you have to keep track of how much you rescaled as you flow. Just like any other coupling, the coefficient of (∂φ)² picks up a beta function; you can get rid of that beta function —
— but only by accompanying the integrating-out with a change of variables of φ. So I am forced to do a change of variables such that the field at scale λ is related to the original field by a power-law rescaling in the neighborhood of the fixed point: φ_{λ₀} = (λ/λ₀)^c φ_λ, say. Let's give the power a name — call it c. This rescaling, by the way, is what is called wave-function renormalization. Okay. So suppose I do my rescaling and so on — what do I actually conclude from this kind of analysis for the expectation value? What I conclude is this: two theories that are related by this change in λ have equal expectation values — of what? Of the rescaled fields. φ_λ is not the same variable as φ_{λ₀}, the original spin; the relationship is that φ_{λ₀}, and φ_λ times the appropriate power of λ, have the same expectation value. Okay. Therefore ⟨φ_{λ₀}⟩ scales like (λ/λ₀)^c, and then we substitute what λ is in terms of δJ: ⟨φ⟩ scales like δJ^(c/(3−Δ)). And you would have a similar analysis for two-point functions of spin operators. So: observables not involving insertions of the spin operator are governed by one number, the dimensionality of the relevant operator; objects involving insertions of the spin operator are governed by two numbers — the dimensionality of the relevant operator, plus this c, which is essentially the dimensionality of φ: the wave-function renormalization, the extent to which you have to rescale φ as you do the renormalization.
It would be connected with the beta function for φ, if you allowed an insertion of φ in the Lagrangian — which we didn't do. That's another way of saying it: if we allowed a term linear in φ in the Lagrangian, then c would be related to that operator's dimension. But we weren't doing that, because we restricted to theories invariant under φ → −φ: at all our fixed points the coefficient of the linear term is zero, by symmetry. So the dimension of φ is something that appears in observables but not in describing the flow — a slightly different logic. [Student: Is this rescaling something you actually do?] It is what you actually do. You've got an Ising model, and you ask for the expectation value of the spins. Now, the spins are φ_{λ₀}. φ_λ is a theoretical device — something we've introduced to make our renormalization group run. What you actually measure is the spins. You know, if you have an experimentalist, he's not doing the renormalization; he's just measuring the spins — that is, the variable φ_{λ₀}, at his scale. There are two things happening: there is the partial path integral over the modes between λ and λ₀, but there was also an insertion. What you wanted to do was compute ∫Dφ e^(−S) φ — the expectation value of the spin; that was the original goal, to measure the expectation value of the spin. In doing this you did part of the path integral, and as a theoretical device you introduced a new path integral variable, φ_λ. Our renormalization group flow analysis relates theories with the same values of the RG parameters.
The RG parameters determine the expectation value of φ_λ. But that's not what we actually want — we want ⟨φ_{λ₀}⟩. So in undoing that change of variables we pick up the rescaling factor. Do you understand? The thing we wanted to measure gets related to something that is simply determined by the RG parameters, times this factor. Is this clear? A very similar point applies to two-point functions of φ: for ⟨φ_{λ₀} φ_{λ₀}⟩ we do the change of variables twice, so there are two such factors, apart from the scaling. [Student: Why does the factor arise at all?] Because in doing the RG flow, when you do the integration over the shell of modes, apart from these couplings changing, the coefficient of (∂φ)² might also change. In the analysis we did, you could do two things. You could keep track of that — not what we did. Or you could take the attitude we took: two path integrals are the same if they are related by a change of variables. So, in order to fix that ambiguity, we put a condition: we always define our path integral so that the coefficient of (∂φ)² is one. And that change of variables, near the fixed point, like everything else, takes the form of a power of λ. We don't know what the power is, because we're just doing abstract analysis; call it some number c, which is related to the scaling dimension of the operator φ itself. Is this clear? [Student: So ⟨φ⟩ doesn't blow up as you take λ small, right?] A priori it could blow up or go to zero — it depends on the sign of the exponent. But here I can predict the sign. You see, c is set by the scaling dimension of the operator φ, which is positive: in the free theory, φ has scaling dimension one-half in three dimensions —
And c, which in this bookkeeping is minus (3 minus that scaling dimension), comes out negative in this particular case. So the factor does not blow up — it goes to zero. But that's what you expect: the magnetization is zero at the critical point, and changes to something small as you move away from it. Okay. So let me give you better notation. Let me trade these powers for two numbers whose names are very traditional in critical phenomena: nu and eta. Roughly, nu is fixed by the dimension of the operator phi^2, and eta by the dimension of the operator phi; in the standard conventions, in three dimensions, Delta_phi = (1 + eta)/2 and nu = 1/(3 - Delta_{phi^2}). In the free theory, in particular, nu would be one half and eta would be zero. (Question: what is eta for phi^4 theory?) Suppose we were to work out the scaling of <phi phi>: you'd take <phi_lambda(x) phi_lambda(0)>, and under a rescaling there is some particular power-law dependence — the rescaling changes the couplings, changes the field, and changes x too. But let's not do the analysis of two-point functions now; we're going to do that systematically later. I think you get the idea. For observables not involving explicit insertions of the spin, all critical exponents are governed by one number: the dimension of phi^2. For observables involving insertions of the spin, all critical exponents are governed by two numbers. So if you measure these two numbers in a given system, then you have predictions for all the critical exponents.
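The two numbers in question are computed later in the course via the epsilon-expansion; purely as an illustration (these leading-order Wilson-Fisher results are quoted from the standard literature, not derived here), the O(N) values are 1/nu = 2 - eps*(N+2)/(N+8) + O(eps^2) and eta = eps^2*(N+2)/(2(N+8)^2) + O(eps^3):

```python
def nu(eps, N):
    # one-loop Wilson-Fisher result: 1/nu = 2 - (N+2)/(N+8) * eps + O(eps^2)
    return 1.0 / (2.0 - (N + 2) / (N + 8) * eps)

def eta(eps, N):
    # the anomalous dimension of phi first appears at order eps^2
    return (N + 2) / (2.0 * (N + 8) ** 2) * eps ** 2

# Ising universality class in d = 3, i.e. eps = 1, N = 1:
# nu(1.0, 1) = 0.6 exactly at this order, eta(1.0, 1) ~ 0.0185
```

Setting eps = 1 is of course a leap of faith at this order, but the numbers already land in the right neighborhood of the measured 3d Ising exponents.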
By the way, because all these different critical exponents are governed by just two numbers, independently of being able to compute those numbers you can predict relationships among the exponents. These are the famous scaling relations in the statistical-mechanics textbooks: the various independently measured exponents obey various relations among themselves. These relations were known experimentally before they were understood theoretically, and they are very simply explained by this kind of scaling analysis. But the really impressive thing, as we will start to see, is that there are also methods, for the Ising model, to compute these exponents — not exactly, but in successive approximations — and the results agree well with the critical exponents of measured systems. It's really a triumph for theory: very abstract thinking about renormalization-group flows led to an understanding of what you should compute — just these two anomalous dimensions. And once you know what you have to do, you can do it; you do it, you go to the lab, and it matches. It's quite an amazing thing. (Question: what about adding a magnetic field, a term linear in the spin?) Then you would have a term which is just phi in the Lagrangian. Yes — but then you would also have to tune that coupling to its critical value, j -> j_c, because j would flow too. You'd have two parameters, so it's more complicated. That's not what we do: there are two different things here, observables and the theory, and we're demanding that the theory always be symmetric under phi -> -phi, because our Lagrangian starts out symmetric.
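The claim that two scaling dimensions fix everything can be checked mechanically. In the sketch below, the exponent definitions are the standard textbook ones, and the input dimensions (approximate 3d Ising values Delta_phi ~ 0.518, Delta_{phi^2} ~ 1.413) are assumptions quoted from the literature, not numbers computed in the lecture; the point is that scaling relations like Rushbrooke's alpha + 2*beta + gamma = 2 then hold identically, whatever the inputs:

```python
def exponents(d_phi, d_eps, d=3.0):
    """All critical exponents from the two scaling dimensions
    Delta_phi (spin) and Delta_{phi^2} (energy), in d dimensions."""
    nu = 1.0 / (d - d_eps)              # correlation length exponent
    return {
        "alpha": 2.0 - d * nu,          # specific heat
        "beta":  nu * d_phi,            # magnetization
        "gamma": nu * (d - 2.0 * d_phi),# susceptibility
        "delta": (d - d_phi) / d_phi,   # critical isotherm
        "nu":    nu,
        "eta":   2.0 * d_phi - d + 2.0, # anomalous dimension
    }

e = exponents(0.518, 1.413)  # approximate 3d Ising dimensions (literature values)
# Rushbrooke (alpha + 2 beta + gamma = 2) and Widom (gamma = beta (delta - 1))
# hold automatically, for any choice of the two input dimensions.
```

This is exactly the sense in which the scaling relations are "explained": they are identities once every exponent is expressed through the same two dimensions.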
The Lagrangian has no phi term. The lattice model is built from J s_i s_j, which has the symmetry s -> -s, and that microscopic symmetry is what protects the Lagrangian. Observables are different: you choose what to measure, and an observable can violate the symmetry — nobody can stop you from measuring the spin itself. But if you put the symmetry-breaking term into the Lagrangian, you would have to do a more complicated, two-parameter scaling analysis, and it would be less powerful, because you wouldn't know how the parameters mix. You see, what do we do in the one-parameter case? We've got some deviation delta J, and though we cannot solve the flow equations in detail, it's entirely plausible that the coefficient of the relevant operator is linear in delta J. With two parameters you'd have two deviations, delta j_1 and delta j_2, feeding the two coefficients a_1 and a_2 that govern the flows. By a general linearity argument you'd know that a_1 = alpha delta j_1 + beta delta j_2 and a_2 = gamma delta j_1 + delta delta j_2, but you wouldn't know the constants alpha, beta, gamma, delta. So there would be extra undetermined parameters in your analysis — less satisfactory. The really powerful, straightforward results come about in the case where you have just a single coupling to tune; you can still say things with two parameters, but it's not as sharp. Other questions? Okay. So now, one more general thing that I wanted to discuss before going on to the longer calculation that I will present.
And the general thing is this: there's always a translation between what high-energy theorists do and what condensed-matter theorists do — whether you're looking at renormalization-group flows from the infrared end or the ultraviolet end — and it's an interesting, sometimes confusing point. The relation I talked about, how delta x_2 must be chosen, can be worded in high-energy language, and I want to do that and write the relationship more precisely. Okay. So suppose we had some RG flow like this, and suppose we stopped it at some scale. Now suppose instead we're doing what high-energy theorists do: trying to construct a continuum quantum field theory. We construct it by starting, at a UV scale Lambda_0, with an action of the form the integral of (1/2)(del phi)^2 plus (1/2) m_0^2 phi^2 plus g_0 phi^4, with m_0 and g_0 specified at the scale Lambda_0. As always, we non-dimensionalize every coupling with the scale of interest, here Lambda_0: so x_2 = m_0^2 / Lambda_0^2, and — we're in three dimensions now, where phi^4 has dimension 2, so g_0 has mass dimension 1 — the dimensionless quartic coupling is g_0 / Lambda_0. What we expect is that the renormalization-group flow takes us toward the fixed point. Now, in order to reach the fixed point, what about the g_0 direction? You can see it from the picture — you already know what it is: because the flow lines in the g direction converge on the fixed point, g_0 looks like an irrelevant direction. (In what dimension? In dimension 3.) Then — sorry, I shouldn't quite say irrelevant.
What we know is this: from the picture, the flow comes in along the g direction and goes out along the other direction. Strictly, you can't say "g_0 is an irrelevant operator" — the eigen-directions of the linearized flow at the fixed point are what they are, and g_0 by itself need not be one of them. All I should say is that I only need to start with a one-parameter set of Lagrangians. I start with some fixed g_0 — set it to whatever you like, hold it fixed, it doesn't matter what — and then if I choose x_2 appropriately as a function of Lambda_0, that should define my continuum quantum field theory. That's all I'm able to say. Okay. Now what I'm going to try to understand is how the tuning I have to do — x_2 as a function of Lambda_0 — is related to the dimension of the one relevant operator at the fixed point. The analysis here is completely analogous to the one we did in the condensed-matter language; it's the same thing, but let me do it again. So we're answering this question — have you understood what the question is? The question is: in order to define a quantum field theory, I start with a UV Lagrangian of this form, with x_2 chosen as a function of Lambda_0, and then I take Lambda_0 to infinity. How am I going to have to scale x_2 as a function of Lambda_0 as Lambda_0 goes to infinity, in order to land up at the same infrared physics? Is this clear? As we have seen before, x_2 varies as a function of the scale lambda under the renormalization-group flow. We know that, right?
We know that, in the neighborhood of the fixed point — for lambdas such that you are close to the fixed point — x_2(lambda) equals alpha times (lambda_IR / lambda) to the power 3 minus Delta, where Delta is the dimension of the operator phi^2, and lambda_IR is the infrared scale at which x_2 becomes of order one. Now suppose we choose an intermediate scale: some scale much larger than lambda_IR, but still not so large that we are outside the scaling neighborhood of the fixed point — call it lambda_int. What we get is that x_2(lambda_int) equals (lambda_IR / lambda_int) to the power 3 minus Delta, times x_2(lambda_IR); we fix a normalization condition so that x_2 at the scale lambda_IR equals some fixed number. This is just how x_2 behaves as a function of an arbitrary scale — it doesn't matter exactly which scale, less or bigger, as long as we are in the neighborhood of the fixed point. Is this clear?
So we see that x_2(lambda_int) behaves like this as a function of lambda_IR. Now, to be in the neighborhood of the fixed point at all: at the UV scale Lambda_0 there is some critical value x_2^crit(Lambda_0) which would have sent us exactly onto the fixed point, so that the flow never leaves it. Just like the analysis we did before, the deviation of x_2(Lambda_0) from x_2^crit(Lambda_0) — whatever that critical value is — should be linearly related to x_2(lambda_int). (Question: which scale is which here?) Lambda_0 is the UV scale; lambda_IR is the scale at which you are putting a condition — you are defining your theory by demanding that x_2 at lambda_IR takes a fixed value. Basically, because of the linear relationship, you can carry the power law back: what we should expect is that delta x_2 at Lambda_0 — that is, x_2(Lambda_0) minus the critical value — scales like (lambda_IR / Lambda_0) to the power 3 minus Delta, i.e. like Lambda_0 to the minus (3 minus Delta). In the case where Delta is one, so that we're dealing with just the free theory, this is simply the statement that if you want a fixed mass in the infrared, you fix your mass in the UV. Why? Because then delta x_2 scales like 1 / Lambda_0^2, and since x_2 = m^2 / Lambda_0^2, that means delta m^2 is a constant. (Question: which deviation is linearly related to what?) The deviation from the critical value at Lambda_0. There is some flow that takes you onto the fixed point — it may sit at x_2 = 0, it may sit at something else — and the deviation from it should be proportional to the parameter that tells you how your flow line deviates from the critical flow line. Where?
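The tuning statement can be made concrete with a toy linearized flow; every number below is illustrative (the exponent y = 3 - Delta, set here to its free-theory value 2, and the order-one IR condition are assumptions, not values from the lecture):

```python
# Toy linearized RG flow near a fixed point: a relevant coupling grows toward
# the infrared as  u(lambda) = u(Lambda0) * (Lambda0 / lambda)**y,
# with y = 3 - Delta_{phi^2} > 0.  To land at a fixed u_IR at scale lambda_IR,
# the UV deviation must shrink like Lambda0**(-y): that is the fine tuning.
y = 2.0                      # free-theory value: 3 - Delta_{phi^2} = 3 - 1 = 2
lam_IR, u_IR = 1.0, 1.0      # IR condition: coupling of order one at lambda_IR

for Lambda0 in (1e2, 1e4, 1e6):
    u_UV = u_IR * (lam_IR / Lambda0) ** y   # required UV deviation ~ Lambda0**(-y)
    # flowing back down to lambda_IR reproduces u_IR exactly in the linear flow
    assert abs(u_UV * (Lambda0 / lam_IR) ** y - u_IR) < 1e-9
```

Raising the cutoff by a factor of 100 forces the UV deviation down by 100**y: the familiar statement that a relevant coupling must be tuned ever more finely as the cutoff is removed.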
(Question: but the relation was to lambda_int — which deviation do we use?) You can work at lambda_int or anywhere in the scaling region; it won't make a difference, because in the neighborhood of the fixed point there is a linear relationship between the deviation at one scale and the deviation at another — the critical flow line is the same. Of course, we're using a lambda_int such that we're still near the fixed point. (Question: isn't there an assumption in that step?) Good question — the assumption is that there is no non-analyticity: the deviation at lambda_int is, to first order, proportional to delta x_2 at Lambda_0. It's the same question you could have asked in the earlier, condensed-matter version. There is some parameter that describes the deviation of your flow line from the critical one; where you end up is proportional to that deviation, and from there on you are governed by the physics of the fixed point. So delta x_2(Lambda_0) is proportional to delta x_2(lambda_int), and from there the power law is exact. Are you with me? Is this clear? So this delta x_2(Lambda_0) scaling like 1 / Lambda_0^2 is the behavior of the free theory, which just tells you that delta m^2 is constant, because x_2 = m^2 / Lambda_0^2. In the free theory, with an appropriate regularization, as you change Lambda_0 you don't even have to worry about the tuning — you just hold the mass fixed. However, if you have to scale this deviation with some other power, then by reading off the power you can read off the scaling dimension of the operator phi^2. Is this clear?

So now — we have about half an hour today — we're going to leave generalities and start doing a particular calculation, and that calculation is phi^4 theory at large n. What I'm going to do now is an exercise that's great fun. It's great fun because all these abstract things — or at least some of them — can be illustrated very simply, in a place where the calculations are tractable. There's a theorist's trick here. The trick is the following: given a problem that you cannot solve, deform it into a problem that you can solve — one that retains some of the features of the problem you cannot solve, and that is related to it by a parameter which is large in the problem you cannot solve and extreme in the problem you can. You solve it in the solvable limit, expand around it, and hope one day to continue back. So it's not just a different problem; it's a one-parameter family of problems, deformable to the one you actually wish to solve. The problem I want to solve is to understand the physics of the new fixed point, and in general I cannot solve it. One way of addressing this is to work in 4 minus epsilon dimensions — something we will probably get to with a serious analysis — but there's another way of deforming the problem that's very easy.

(Question: is the fact that the fixed point takes different forms in different dimensions related to dimensional regularization?) Dimensional regularization has two distinct uses. One is 4 minus epsilon dimensions taken seriously, where you interpolate between dimensions to reach the fixed point — that's the use relevant here. That's a very good question. But there's a second use that's also very important, even when you don't take epsilon seriously: if you're just working in four dimensions, dimensional regularization is a very good regularization scheme because it preserves gauge invariance. So there are two distinct uses — one where the interpolation between dimensions is physical, one where it's bookkeeping — and sometimes one is useful, sometimes the other. Is this clear?

So, the theory I wanted to solve was this one: the integral of (1/2)(del phi)^2 plus (1/2) m_0^2 phi^2 plus g_0 phi^4, again in three dimensions. This is the theory that's very tough to solve exactly. Now there is a nice deformation of the theory which does the job. Instead of one scalar field, introduce n scalar fields — call them phi_i. With one field our Lagrangian had the symmetry phi -> -phi; now look at Lagrangians symmetric under phi_i -> O_ij phi_j, with O_ij an arbitrary O(n) matrix. Demanding this invariance, there's not much else you can add to the Lagrangian. Naively, if the theory of one field was tough, the theory of two fields would be tougher, and the theory of n fields looks crazy. But this wonderful thing happens that often happens in physics: theories sometimes become simpler with many degrees of freedom than with one. That's what's going to happen in this case: while this theory is hard to solve at any finite n, in the large-n limit it is exactly solvable. So — what is the goal?
The goal is to compute this path integral. Before we actually start computing, let's do a rough estimate. Any path integral like this is a competition between two terms, the first of which I'm going to call energy and the second entropy. Energy is the value of the action; entropy is, roughly, the volume over which you have to do the integration — more precisely, the log of that volume, because we're comparing it against differences of the action in the exponent. Now look: suppose the action differences in our theory are much, much larger than the effective entropy. Then there is an obvious approximation for the integral: it will be highly peaked about the configuration that minimizes the action, because even a small increase in the action is not worth it — the entropic gain you get by going away from the minimum is small compared to the loss. On the other hand, suppose the entropy is much, much larger than the differences in action. Then your integral will not be peaked around any one configuration; in fact it samples all configurations more or less equally. The first limit is what's often called weak coupling — you don't really have to do an integral; these are essentially the classical equations of motion. The second limit is strong coupling. A well-posed interacting field theory sits somewhere between these two limits, because the first is trivial and the second is essentially random; somewhere in between there is a limit of the theory that we can actually do.

So let's try to give a crude estimate of how the energy scales at large n. The crude estimate is that since the quartic term is the square of a sum over the n fields, the energy scales like n squared — it's a product of two factors, each of which is a sum over n terms. For this reason, in order to get a good large-n limit, what I'm going to do is divide the quartic coupling by n: write the interaction as (g_0 / 2n)(phi_i phi_i)^2 — the 1/2 is for later convenience, and phi_i phi_i means the O(n)-invariant sum — and hold g_0 fixed as I take n to infinity.

(Question: does that make the large-n limit weakly coupled?) What does it mean for a limit to be weakly or strongly coupled? Weak coupling means you can do perturbation theory around a particular configuration: there is a dominant, classical configuration, and fluctuations around it are small. That's what happens when the energy dominates — you just find the minimum-action configuration; when g is very small the interaction energy is almost zero and you're essentially classical. As the coupling grows it becomes stronger, because the fluctuation determinant dominates. Perturbation theory is controlled by how much you fluctuate around the classical point: the determinant gets its contribution from fluctuations, and for the determinant to be small compared to the classical piece you should fluctuate very little. Strong coupling is the opposite: the integral is essentially entropic, sampling everything.

So we take this large-n limit, with the coupling scaled this way. Why does this balance energy against entropy? Suppose it turns out that dynamically each phi field ranges over some finite window of size a — we'll see how this happens. Then, just because there are n phi fields, the net volume of the integration is of order a to the power n, which is e to the power n log a: the entropy scales like n. The measure of the entropy is simply the number of fields in the integration. Meanwhile, with the quartic divided by n, the energy also scales like n rather than n squared. So it looks like we have a chance of getting a well-posed problem once I have scaled my g this way. Let's take this as our definition of the problem and try to solve it.

What we ultimately want are correlation functions of this model, but to start with, let's just study the partition function itself; it's easy to extend the analysis to include insertions. Okay. This quantity is a difficult problem because of the interaction — without the quartic term we would have a free field. Now there is a trick to get rid of this interaction, which some people call the Hubbard-Stratonovich trick. It goes as follows: write Z as the integral over D phi_i and D sigma of the exponential of minus the integral of (1/2)(del phi_i)^2 plus (1/2) m_0^2 phi_i phi_i plus (1/2) sigma phi_i phi_i minus (n / 8 g_0) sigma^2, introducing a new field sigma. (If there were a source term, it would just come along for the ride — it's sigma-independent.) If we integrate out the sigma field, we get back the original theory. Why?
See, the sigma equation of motion is simple, because sigma appears quadratically and without derivatives: varying sigma is just a pointwise minimization. Completing the square, the sigma-dependent terms can be written as minus (n / 8 g_0) times (sigma minus (2 g_0 / n) phi_i phi_i)^2, plus the original quartic term (g_0 / 2n)(phi_i phi_i)^2. The sigma equation of motion sets sigma equal to (2 g_0 / n) phi_i phi_i; substituting that back, the square vanishes, and what remains is exactly the quartic theory we started with. (Question: but there's an integral over sigma, not just its saddle.) True — but because sigma appears quadratically, doing the Gaussian integral over sigma is, up to a field-independent determinant, the same thing as setting sigma equal to its classical value. That's an exact statement; no large-n logic yet.

Now take the theory written in this language. Doing the path integral over phi is trivial, because phi appears quadratically: there are n phi's, the path integral over each of them is identical, and each gives a determinant. If I exponentiate the determinant, I get n times something, and that something is a function of sigma; the explicit minus (n / 8 g_0) sigma^2 term is also proportional to n. So what we get at the end, after doing the phi integral, is a path integral over sigma alone, where the full action for sigma is multiplied by n. Now you see we are in a situation where energy and entropy don't compete: the energy carries an explicit factor of n, but the remaining path integral is over a single field sigma, so the entropy is of order one. In this situation we can forget about doing the sigma integral and just minimize the action.

So we have merely manipulated the interaction term — and it's important to say this correctly: the theory is not really non-interacting, because there are two fields. It's non-interacting as far as phi is concerned, at fixed sigma; the interactions live in the sigma path integral. It's like in statistical mechanics where Z is Z_1 to the power n: as far as the phi integral is concerned, everything is exactly quadratic. (Question: why did we need n fields? The trick works for any n.) For everything I've said so far, nothing needed n large — the Hubbard-Stratonovich rewriting is exact for any n. But for what I'm saying now it's important: in terms of sigma we have an overall factor of n outside the action, so this integral can be done by saddle point at leading order in large n. We don't have to do a path integral over sigma; we only have to solve the classical equation of motion for sigma — but using for sigma the full effective action, the action we get by integrating out all the phi's, including the term coming from the phi determinant. I said all this before writing an equation because it's plausible that when you extremize over sigma, the dominant sigma will be translationally invariant. So what's the logic?
the logic is that we should be doing this path integral over phi for all possible sigma fields and then choosing the sigma field that minimizes the section but you know it's often the case that you know that the vacuum is translated in there so it's plausible that in the vacuum sigma will be translated in there if that's the case actually there will be a path integral over phi we can set it to be translated in there just start making this assumption we lose nothing you see it's very important to work if you're doing a path integral in X X1 X2 so what I'm going to do is to assume that the extremum of sigma will happen where sigma is independent of X1 X2 if that's the case there's already here in this path integral I can set sigma to be equal to a number because of a procedure we're not going to be integrating over different field integrations and then the whole procedure I talked to you about becomes very practical because you see sigma just shifts here and then the determinant for these 5 fields is very simple to compute this doing the path integral it's just computing this determinant and can you see this this just gives you half n integral between p over p squared by the cube p squared so the whole contribution to the minus the net effective action is effective for this plus because this is a number multiply 5 by 5 this is also a number this is a number because we're assuming this translation for an arbitrary sigma field you look at it this is why it ignores 5i,5 we have one second we know that sigma is 5i,5 putting sigma over translation if sigma constant is not exactly we can compute I know is 5i,5i this is 5i,5i as a function of x 5i of x 5i of x and because this is a sum of this quantity here you see variation value of sigma which is what this thing will compute even if you thought of it as 5i,5i,x you would expect to be independent of x because the whole is translation in there so no I don't think there is any approximation you see one approximation that may 
come about what is true if you were to try to include higher order corrections from 1 over x from 1 over x we would now need to do a positive playground over sigma with this action and then you would need to know the action from things that go beyond constant to 1 over x that's right can you say that the final thing is d sigma e to the power minus x where s effective evaluate on constant configuration of sigma ok can everyone say that the path in jacquard over the 5i gives you that n by 2 blah blah blah which is evaluating the determinant is this clear now in the reading law in an approximation what we have to do you see what we call as some effective action for sigma the effective action is for arbitrary sigma ok really that that's like this would just say that this action was the same as this action once we integrate that's it let me say that out ok what we want to do is to integrate out the 5i there is an exact statement here and the exact statement is that we will get that the effective is equal to n by 2 log determinant of the operator l squared plus m squared plus sigma ok minus n minus sigma squared right this is the exact statement no assumption this is an exact statement and so if you take the path integral over s effective where this is happening may know how approximation works through all n ok but this is a composite quantity if I actually want to take this and expand it out that's a function of sigma it's a difficult quantity to deal with you see what is this quantity this quantity can be dramatically expanded as this plus this plus and these are the very sigma instructions expanding the log s log this is the economic sigma log plus sigma times 1 over this this is the operator plus sigma squared times 2 over this each of these sigma carries a momentum and you actually compute this for an arbitrary sigma phi you go on to express this is a difficult thing while this expression is homily it's not very useful however worth sigma to be a constant that is 
very simple, because constant σ is just a shift of the mass. Now, you might tell me it's not sufficient to know the action for σ = constant, because we have to integrate over all values of the σ field. But at large N, only that configuration of σ contributes which extremizes the action, because the whole action is proportional to N. So if the configuration that extremizes the action is translationally invariant, then I only need the action at translationally invariant points. So we make an assumption: the dominant saddle point of this field is translationally invariant. It's an assumption, but a reasonable one; we don't expect spontaneous breaking of translational invariance here.

[Student:] This is the whole action, and until you put σ constant, it's exact? — Yes, this is exact. [Student:] Could you expand σ around the zero mode? — Exactly, yes, you could. It's true that all of those terms would appear at second order, but the point is that we don't need to do a path integral over σ at all: because of the overall N, the full path integral over σ is dominated by the configuration that extremizes the action. Does it need to be constant σ? It need not be, but then that would be the statement that this field theory breaks translational invariance at its saddle point, and that's unusual; it's unusual for a field theory to break translational invariance. So there is an assumption here. And if N were not large, you would have to do the path integral honestly: the dominant configuration could still be translationally invariant, but you would have to integrate over all configurations of σ, including non-translationally-invariant ones.

Let me say something very simple. Suppose I'm doing the integral ∫ dx e^{−N f(x)}. I cannot do this integral without knowing f at all x; but at large N, I only need to know f in the neighborhood of its minimum.
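That large-N statement is just Laplace's method, and it can be checked numerically in a few lines. A minimal sketch, where the function f and all the numbers are illustrative choices, not from the lecture:

```python
import math

def f(x):
    # a generic single-well "f"; minimum at x* = 1 with f(x*) = f''(x*) = 1
    return math.cosh(x - 1.0)

def integral_brute(N, lo=-9.0, hi=11.0, n=200000):
    # midpoint rule for Z(N) = \int dx exp(-N f(x)): needs f everywhere
    dx = (hi - lo) / n
    return dx * sum(math.exp(-N * f(lo + (i + 0.5) * dx)) for i in range(n))

def integral_saddle(N):
    # Laplace approximation: exp(-N f(x*)) * sqrt(2*pi / (N f''(x*)))
    # uses f only in the neighborhood of its minimum
    return math.exp(-N) * math.sqrt(2.0 * math.pi / N)

# the ratio approaches 1 as N grows, with 1/N corrections
ratios = {N: integral_brute(N) / integral_saddle(N) for N in (1, 10, 100)}
```

The point of the sketch is exactly the lecture's point: the brute-force evaluation needs f(x) at every x, while the saddle-point evaluation needs only the location, value, and curvature of the minimum, and the discrepancy between them shrinks like 1/N.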
[Student:] Isn't what you're writing just mean field theory? — Mean field theory is not exact in general, but what we have done here is careful. It's true that we're working in a regime where mean field theory of something is exact, but it's not mean field theory of the φ field. It would have been very wrong to do mean field theory of the φ field, that is, simply to extremize the action of the φ field, because that would have missed the one-loop contribution, which is also of order N. You see, the action is of order N, but the entropy is also of order N, and it's wrong to throw away one in favor of the other. What we have done is to do mean field theory correctly, in a place where it's justified: mean field for σ is exact, justified by large N; mean field for φ would not have been exact. We've been a little clever and explicitly obtained the entropy, the log det, so that all that's left in the remaining integral is of order one.

I should stop soon, but we're almost done; let's complete the analysis. How do I use translational invariance? I find the equation of motion for σ: I extremize the action, and I get

(N/2) ∫ dᵈp/(2π)ᵈ 1/(p² + m₀² + σ) − (N/(2g₀)) σ = 0.

This equation is sometimes called the gap equation, and logically speaking it determines σ: we determine σ in terms of the bare parameters m₀ and g₀. Finding the equation of motion is equivalent to integrating out σ, in the large-N limit, because the integral is saddle-dominated; we showed that. And the propagator of the φ fields is then simply 1/(p² + m₀² + σ).

Now let me show you another way of deriving the same equation, without ever introducing the σ field, just by doing diagrammatics; it will help you understand the equation. Suppose we never introduced the σ field; we just do diagrammatics, and I want to compute the propagator.
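Before the diagrammatics, note that the gap equation is easy to solve numerically once the momentum integral is regulated. A minimal sketch in d = 3 with a hard cutoff Λ, taking the equation in the form σ = g₀ ∫^Λ d³p/(2π)³ 1/(p² + m₀² + σ); the overall numerical factor multiplying g₀ depends on conventions the lecture deliberately leaves loose, and all parameter values are arbitrary:

```python
import math

def tadpole(M2, Lam):
    # \int_{|p|<Lam} d^3p/(2pi)^3  1/(p^2 + M2)
    # = (1/(2 pi^2)) \int_0^Lam p^2 dp / (p^2 + M2), done in closed form
    M = math.sqrt(M2)
    return (Lam - M * math.atan(Lam / M)) / (2.0 * math.pi ** 2)

def solve_gap(g0, m0sq, Lam, tol=1e-13, itmax=10000):
    # fixed-point iteration:  sigma <- g0 * tadpole(m0sq + sigma, Lam)
    sigma = 0.0
    for _ in range(itmax):
        new = g0 * tadpole(m0sq + sigma, Lam)
        if abs(new - sigma) < tol:
            return new
        sigma = new
    raise RuntimeError("gap equation did not converge")

sigma = solve_gap(g0=1.0, m0sq=0.5, Lam=10.0)   # arbitrary illustrative values
```

The iteration converges here because the tadpole depends only weakly on σ; the solved σ is a self-consistent shift of the mass, which is exactly how it will reappear below as the self-energy.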
So I'm drawing Feynman diagrams, and which Feynman diagrams is the question. Because, if you remember, the interaction term was (g₀/2N)(φᵢφᵢ)²: this interaction term explicitly carries a factor of 1/N. Therefore only those Feynman diagrams survive in which the 1/N gets multiplied by an explicit factor of N. How would you get the 1/N multiplied by an explicit factor of N? Look at this diagram: suppose I have external index i and a loop of the j field. It's a loop of the j field in which the value of j is summed over, so you get an explicit factor of N from this diagram, and that cancels the 1/N. So this is a diagram that contributes at leading order in large N.

On the other hand, let us look at a diagram that we will not keep; for instance this one. Say this is the index i: i is fixed, held fixed along the whole length of the propagator in this direction. Each of these two vertices comes with g₀/N, and the number of j's that we sum over gives only one explicit factor of N. So this diagram is a self-energy contribution proportional to N/N², which is 1/N, and we are not going to keep it at leading order in large N. So you see, without ever using this funny σ field, we see that large N gives us a simplification: to stay at leading order, each time we introduce a vertex we must also introduce an additional summed index loop; otherwise the diagram is suppressed. The failure here was that we introduced two vertices but only one index loop, whereas there we introduced one vertex and one index loop. You understand? One index summation per vertex. Now, if you think about it, you will easily convince yourself that the only diagrams that contribute are cactus diagrams. You can see, by looking at these cactus diagrams, that each of them obviously
contributes, because every time I put in a new vertex I also put in a new summed index; and you can convince yourself that there is no other kind of diagram. A cactus just goes on like this. You couldn't have a diagram like this one, for instance; that's not a cactus, and indeed the number of vertices is 1, 2, 3, 4 while the number of index loops is only 3. If it's a genuine cactus, it's fine: 1, 2, 3, 4 vertices with 4 loops. These diagrams are just like trees: this diagram has a recursive structure. Look: whatever grows on any branch of a cactus is itself a cactus. So suppose we write the exact self-energy, which is the sum of all these cactus graphs; let me give the following notation to denote the exact sum, and call the sum of all the cactus graphs Σ. What we're going to get is that the exact propagator will be 1/(p² + m₀² + Σ). Clearly, because the exact propagator is the bare propagator dressed by the self-energy: bare quadratic term plus self-energy; that's my definition of Σ. But can you see now that for this exact propagator we get a self-consistent equation: Σ equals the one-loop graph computed with the exact propagator. Each of the cactus insertions on the loop is simply a full cactus growing off the original loop; cacti can grow off at many points, but that's a geometric progression, and it sums to the same exact propagator. And therefore we get the equation, which we know in terms of Σ:

Σ = g₀ ∫ d³q/(2π)³ 1/(q² + m₀² + Σ).
Now, I missed out some factors there; if I had been a little more careful I would have worked them out properly, and we do have to do that carefully. But the claim is that this is the same equation as before, with capital Σ the same as little σ. Is that clear? The earlier equation, the gap equation, we got by differentiating the effective action with respect to σ. In which term does the loop integral arise? From the log: differentiating the Tr log gives exactly this propagator loop, and differentiating the σ² term gives the linear piece. So the two equations are exactly the same.

Okay, let's try to get the factor of 1/2 right. I can write down what I started with: the interaction was g₀(φᵢφᵢ)²/2N. One second... if I bring this vertex down, I get the 1/2, and then there's a choice of which pair of fields to contract into the loop... let me see... you're right, there is a factor to track here. Okay, I'll leave it as an exercise: chase down the factor of 2 and check that the two equations agree exactly.

[Student:] Why is one loop with the exact propagator the same as Σ? — Because I'm writing the sum of all these cactus graphs, and this quantity is the same thing. The total propagator is the bare propagator, plus bare times Σ times bare, plus bare times Σ times bare times Σ times bare, and so on: that geometric series is the exact propagator. And each of these cactus graphs is one loop computed with the exact propagator; do you agree? Now I want you to look at what this whole thing is: this whole thing is a loop made out of something.
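The geometric series just described, G = G₀ − G₀ΣG₀ + G₀ΣG₀ΣG₀ − … = 1/(p² + m₀² + Σ), can be checked in a couple of lines. A minimal numerical sketch, with arbitrary illustrative numbers:

```python
def dressed(p2, m0sq, Sigma):
    # exact propagator: bare quadratic term shifted by the self-energy
    return 1.0 / (p2 + m0sq + Sigma)

def dressed_series(p2, m0sq, Sigma, nmax):
    # bare + bare*(-Sigma)*bare + ... : the chain of self-energy insertions
    G0 = 1.0 / (p2 + m0sq)
    return G0 * sum((-Sigma * G0) ** k for k in range(nmax + 1))

exact = dressed(1.0, 1.0, 0.5)            # 1/(1 + 1 + 0.5) = 0.4
series = dressed_series(1.0, 1.0, 0.5, 60)
```

The partial sums converge to the dressed propagator whenever |Σ·G₀| < 1; the all-orders statement is the algebraic identity G₀/(1 + ΣG₀) = 1/(p² + m₀² + Σ).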
What is the loop made out of? A cactus growing here, a cactus growing there; each segment of the loop, together with all the cacti growing on it, is exactly the exact propagator. That's it. Do you agree that any one of these segments is the same thing? So this is the same thing as one loop with the exact propagator. And do you see that this is exactly the same object on which you can build an arbitrary number of cacti? That's the recursive nature of the thing. You see, what is this? This, we agreed, was Σ. And what is it? It's one loop on top of which you can grow an arbitrary number of cacti, on each of which you can grow an arbitrary number of cacti. What is a cactus? A cactus is a loop on top of which grow an arbitrary number of cacti. It's like a recursive computer program: a string is a palindrome if its leftmost and rightmost characters are equal and what remains in between is itself a palindrome. So Σ, one loop dressed by an arbitrary number of cacti, is itself a cactus, and the sum of all the cacti is the cactus sum itself. Now, this is the propagator, and what I am saying is that if I take this and join it up, this is one loop with the exact propagator, by definition. But can you see that this is also just a cactus? And therefore this is Σ. But that's one loop with the exact propagator: so we've got a diagrammatic representation of that same equation.

That also makes clear the interpretation of that little σ. In terms of the σ theory, there is an expectation value of little σ. But in terms of the φ theory, where we don't care about σ, which we introduced only for our convenience, what is it? It's a shift of the mass; it's the self-energy. And this equation, the gap equation, from this point of view is simply summing up the cactus diagrams. Okay, so this is our first equation. Now what we are
going to try to do is renormalization, in the following spirit: we have to choose g₀ and m₀ as functions of the cutoff Λ so as to hold two physical quantities fixed. There are two good physical quantities you can hold fixed. This is how you actually implement renormalization: you don't literally compute the Wilsonian renormalization down to some scale Λ; you instead hold physical quantities fixed. One physical quantity is the pole of the propagator: the pole of the propagator sits at m₀² + σ. We are going to hold that fixed, so we will have to solve the gap equation subject to the condition that m₀² + σ = m², with m fixed. That is one thing; it's one condition on the two variables m₀ and g₀.

Now, if we were defining the theory about the new fixed point, we would actually be done with this one condition, because the second variable would be redundant there. But so far, in what we've done, we haven't committed to whether we define the theory about the free fixed point, in which case it's specified by two quantities, or about the new fixed point, in which case it's specified by one. In fact it will be most satisfying to define it about the free fixed point and see the new fixed point emerge as an RG flow, which is what we did before. So we need a second condition, to fix g₀.

[Student question about the second condition.] — The second condition holds fixed something else I can compute. And here's the great thing: I can compute it exactly at leading order in 1/N; there is no perturbation theory in g₀. The other thing I'm going to compute is 2 → 2 scattering. Now, we haven't studied scattering in this class, but you have all done quantum field theory, and one of the standard exercises is 2 → 2 scattering, so you know all about scattering. Okay, so in this theory, what we want is the φ⁴-type scattering amplitude, which we
can compute exactly at leading order in large N. Okay, let's say we are scattering i i → j j. We have to be careful: tree-level scattering in this theory is just the vertex, which is order 1/N, so we keep all graphs that are order 1/N, because they are of the same order as tree level; anything further suppressed we drop. So consider this class of graphs: here we have i, i on one side and j, j on the other, and in between the other index lines, a k loop, an l loop, and so on: a chain of bubbles. Count the powers of N: a chain with n bubbles has n + 1 vertices and n index loops, so it scales as (1/N)^{n+1} · Nⁿ = 1/N; these all contribute at the same order as tree level. A graph with, say, four vertices but only two index loops would be 1/N² and is dropped. We could derive all this using the σ-field method, but it's interesting and easy to see diagrammatically which graphs contribute.

Now the kinematics is very simple: momenta p₁ and p₂ come in, p₃ and p₄ go out; let's define p = p₁ + p₂. Then what is this graph? It's just n iterations of the same basic bubble; each bubble is the integral

I(p) = ∫ d³q/(2π)³ 1/[(q² + m²)((p + q)² + m²)], where m² = m₀² + σ.

Oh, and when I said only these bubble-chain graphs contribute: clearly cactus graphs can build on top of every line as well, but those are all taken into account by replacing each bare propagator with the exact propagator; that's a large number of graphs absorbed at once, and that's why the mass in the bubble is m₀² + σ. For what I want to say at the moment, the only important thing is that this I(p) is UV finite: at large q the integrand goes like d³q/q⁴, which converges in three dimensions. Remember, the same bubble graph is what gave us the beta function in 4 − ε dimensions, where it was
logarithmically divergent. There are factors of 2 here which I won't try to get straight now; I'll just leave them, because the rest is purely geometrical: the chain is a geometric series. So what we get is that the S-matrix is

A(p) = (1/N) · g₀/(1 + g₀ I(p)/2),

up to those factors: an incredibly simple answer for the exact 2 → 2 S-matrix at leading order in 1/N. This is something we can try to evaluate; the thing I wanted to say at the moment is that, whatever the integral works out to be, I(p) is UV finite. So the two quantities that we're going to hold fixed are these. First, the S-matrix at some prescribed momentum: g₀ alone would scale with the cutoff, but since the S-matrix takes this form, given by one parameter, we hold its value fixed, and that value defines the physical coupling in place of g₀. And then we have the other equation, the gap equation, which we have to adjust in order to get the physical mass. Okay, we'll see in the next class how we do that, and how it allows us to exhibit both the free fixed point, which is no great achievement, but also the critical fixed point, and to understand the scaling in between. Okay.
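The bubble sum can be illustrated numerically. The sketch below checks the standard closed form of the three-dimensional bubble, I(p) = arctan(p/2m)/(4πp), against brute-force integration, and then sums the chain. The relative sign and the factor of 1/2 in the denominator are taken as assumptions, matching the schematic formula above (the lecture leaves those factors open), and all parameter values are arbitrary:

```python
import math

def bubble_closed(p, m):
    # standard closed form of the 3d one-loop bubble:
    # I(p) = arctan(p/(2m)) / (4*pi*p)
    return math.atan(p / (2.0 * m)) / (4.0 * math.pi * p)

def bubble_numeric(p, m, qmax=600.0, nq=6000, nc=200):
    # brute-force midpoint integration of
    # \int d^3q/(2pi)^3  1 / [(q^2+m^2)((p+q)^2+m^2)]
    dq, dc = qmax / nq, 2.0 / nc
    total = 0.0
    for i in range(nq):
        q = (i + 0.5) * dq
        a = q * q + m * m
        for j in range(nc):
            c = -1.0 + (j + 0.5) * dc            # cos(theta)
            b = q * q + p * p + 2.0 * p * q * c + m * m
            total += q * q / (a * b) * dq * dc
    return total / (4.0 * math.pi ** 2)          # 2*pi from phi, over (2pi)^3

def chain(g0, I, nmax):
    # partial sums of the bubble chain:  g0 * sum_k (-g0*I/2)^k
    return g0 * sum((-0.5 * g0 * I) ** k for k in range(nmax + 1))

p, m, g0 = 1.0, 0.7, 3.0                  # arbitrary illustrative values
I = bubble_closed(p, m)
resummed = g0 / (1.0 + 0.5 * g0 * I)      # closed form of the geometric sum
```

The integrand falls like 1/q⁴, so the bubble is UV finite, exactly as stated in the lecture; the partial sums of the chain converge rapidly to the resummed amplitude whenever |g₀I/2| < 1.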