Well, thank you for having me and for the warm introduction. It's a very nice workshop; I have to compliment the way it's organized. This is really great. I wish the Light Cone meetings were done this way rather than packing in huge numbers of talks. Speaking of which, here is a URL for the next Light Cone meeting at Jefferson Lab in Virginia, which is not nearly as cold as Minnesota, so you shouldn't be afraid to go there. Check out that site if you're interested. What I'm going to talk about today touches on some things associated with the main idea of light-front calculations, which is to solve QCD in a way that's complementary to the way the lattice people do it. The idea is to be able to get at wave functions and to stay within Minkowski space. In order to do that, we usually think in terms of Fock-space expansions for solving the Hamiltonian problem, and there are various issues that come up in trying to do that. There is a review article that I wrote a couple of years ago; see that for the details. This being a board talk, I'm not going to throw lots of details at you really fast; instead, I'll talk about things in general. The few details I wanted to point out, I've already put on the board here. The key issues we have to confront in trying to attack QCD have to do with gauge fixing. The typical choice on the light front is light-cone gauge. Sophia and I are considered heretical within the light-cone community, because we prefer to work in a covariant gauge, and we're able to do that because of a particular choice of regularization that we make. Regularization, of course, is a general issue whenever you're trying to do gauge theories in 3+1 dimensions. There's also the issue of numerics, how to do the numerical calculations. I want to talk a little bit about that, since the bulk of my work over the years has been focused on numerical calculations, and many people tend to see that as the key issue. But that isn't what's really holding things back as far as attacking QCD is concerned. The time will come when it's an issue, and there are certainly methods to bring to bear, but these other issues are more important for making true progress. Another issue that's already come up several times at this workshop is the notion of Fock-space truncation and what it does to your calculation. It has some nasty effects associated with regularization and with the gauge, and I'll talk about the approach we have in mind for handling that. Another open issue, which we've been looking at only in the last couple of years, is what you do about the vacuum. On the light front, the vacuum is trivial — but that's actually a very naive statement. Where do zero modes sit in the calculation? How do you interpret vacuum effects? That's what has brought us back to φ⁴ theory. I've done calculations in φ⁴ over the years with various intentions, and we're once again back to it. The very nice work that's now been done in equal-time quantization gives us an even better target for trying to understand where the physics is on the light front compared to what you see in equal time. That will be a big focus of the next talk, which Sophia will give, but I'll say a few things about it as well if time permits. We'll see how that goes.
So, to look at how we're thinking of resolving each of these issues: as far as the gauge fixing is concerned, that for us is tangled up with the issue of how we're going to regulate the theory. We're talking about regulating a non-perturbative theory, and that means that something like dimensional regularization doesn't work, because there you have to go in and tweak every integration that's being done. In a non-perturbative calculation, that sort of thing is buried within your matrix diagonalization process, so it's not readily done; we have to look at it from a different perspective. Now, I have flirted a bit with a supersymmetric approach, where you introduce partners that are not physical — not the real supersymmetry they're looking for at the LHC, just using the technology of supersymmetry to introduce partners that provide the necessary cancellations; you then break the supersymmetry to lift those partners out of the physics you're interested in. That would be one approach. I've done various calculations with Steve Pinsky in supersymmetric Yang-Mills, and I'll talk about that a little as I go along. But our main focus has been on using Pauli-Villars regularization. Now, that's not Pauli-Villars in the usual way: not subtracting loops, but instead subtracting within propagators. We introduce the Pauli-Villars fields in the Lagrangian and use those to cancel things. In any loop you then get two subtractions rather than one, because you're subtracting on both of the lines in the loop. For example, if you have a gluon loop like this, the subtraction is not that you put Pauli-Villars particles all the way around the loop and subtract that from the original; instead you have four different things — Pauli-Villars here and regular here, the reverse, all regular, or all Pauli-Villars — and those four combine in such a way that you get two subtractions. That corresponds to putting the Pauli-Villars particles in the Lagrangian from the very beginning, and it means that in the numerical calculation you will have those additional Pauli-Villars particles in the basis too. So it makes the calculation bigger, which makes the numerics more challenging.
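To make the double-subtraction counting concrete, here is a minimal sketch in standard Pauli-Villars notation; the labels (masses $m_k$, metric signatures $r_k$) are generic choices for illustration, not copied from the speaker's board. Each internal line carries the full sum over the physical and Pauli-Villars fields,

$$ D(q) \;=\; \sum_{k} \frac{r_k}{q^2 - m_k^2 + i\epsilon}\,, \qquad r_0 = 1, \qquad \sum_k r_k = 0 \;\Longrightarrow\; D(q) = O\!\left(1/q^4\right) \ \text{at large } q^2, $$

so a loop with two such lines contains all the cross terms

$$ \sum_{k_1,k_2} r_{k_1} r_{k_2} \int d^4q\; \frac{1}{\left(q^2 - m_{k_1}^2\right)\left[(p-q)^2 - m_{k_2}^2\right]}\,, $$

that is, Pauli-Villars on one line, on the other, on both, or on neither — exactly the four-term combination described above, with one subtraction per line and hence two per loop.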
The reason we're interested in doing things this way is that, once we include the Pauli-Villars particles, there is a way to maintain gauge invariance while quantizing in a covariant gauge; we're not restricted to light-cone gauge the way normal light-front calculations are. The downside, of course, is that you have to bring in unphysical degrees of freedom — ghosts and all that sort of stuff — but we feel that's an important price to pay, because a calculation that disobeys the symmetries gets into trouble. If I can get this to show — put it on the projector — yes: this is a calculation that Sophia and I did in QED, truncated to just one photon, but including Pauli-Villars photons to regulate the theory. This is in 3+1 dimensions, of course. The horizontal axis is the mass of the Pauli-Villars photon, and the different curves correspond to different Pauli-Villars electrons in the calculation. Sorry, I should have said: the vertical axis is the anomalous moment of the electron, rescaled by the Schwinger value. You can see that you get a huge dependence on the cutoff unless you drive the Pauli-Villars mass up to very high values, which of course causes all sorts of problems if you're doing a numerical calculation. You want to be able to calculate at some reasonable cutoff value, not something approaching infinity. This all happens because we've broken a symmetry of the theory: the chiral symmetry that occurs when the mass of the electron goes to zero. By inserting a second Pauli-Villars photon, we're able to restore that symmetry, and once you do that, this heavy dependence on the regulating mass goes away. So we feel it's very important to maintain gauge invariance, at least at the level where, in perturbation theory, you work in a covariant gauge with an arbitrary gauge parameter and check that you're independent of that parameter. One calculation that we did — oops, wrong direction, sorry — one calculation that we did, again in QED, looked at the anomalous moment of the electron in an arbitrary covariant gauge; the ζ here is the gauge parameter. ζ = 0 is a singular limit, so there we get huge dependence on ζ, but generally speaking it's relatively flat. There are still violations — it's not perfectly flat — but as far as we understand it, that's associated with the Fock-space truncation. As far as doing the calculation in an arbitrary gauge is concerned, we're essentially independent of the gauge, modulo those Fock-space truncation problems. So we want to extend that sort of thing to a non-abelian theory, meaning QCD, and our proposal for doing that is what's written here — at least as the starting point; there's additional stuff that has to be added on. This is the base Lagrangian for QCD. These k indices are summed over: k = 0 is the physical gluon, and the higher values of k correspond to Pauli-Villars gluons. Same for i: i = 1 and higher correspond to Pauli-Villars quarks. Then there's the interaction between the quarks and the gluons, and of course the self-interactions of the gluons built into this field tensor. But this field tensor is constructed not quite in the usual way. The fields in it are summed over all the available fields, and they are arranged, with these coefficients, in just such a way that the combined field is what we call null: it satisfies this constraint right here. The commutator of the creation and annihilation operators associated with this combined field — which are linear combinations of these, this a with this a-dagger — is zero; so it actually has a null metric. The way that's achieved is that these r_k are the metrics of the individual fields: the commutator for the field with the kth index is proportional to r_k, and r_k is equal to plus or minus one. For a negative-metric field, you're inserting an r_k value of minus one, and you can see it shows up here, in order to maintain a positive kinetic energy for the Pauli-Villars field labeled by k. This structure is what lets us insert Pauli-Villars particles into QCD as fields within the calculation.
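Written out — as a sketch, with generic coefficients $\beta_k$ standing in for the ones on the board — the null combination and its constraint look like this:

$$ A^\mu \;=\; \sum_k \beta_k A_k^\mu\,, \qquad \left[a_k(\underline{p}),\, a_{k'}^\dagger(\underline{p}')\right] \;=\; r_k\,\delta_{kk'}\,\delta(\underline{p}-\underline{p}')\,, \qquad r_k = \pm 1\,, $$

so that

$$ \left[a(\underline{p}),\, a^\dagger(\underline{p}')\right] \;\propto\; \sum_k r_k\,\beta_k^2 \;=\; 0 $$

is the null condition: the annihilation and creation operators of the combined field commute with each other, which is what is meant by the field having a null metric.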
The gauge invariance of this Lagrangian fails for an ordinary gauge transformation, and that's because of the way the interaction is structured: it allows for what we call flavor changing, meaning changing from one Pauli-Villars field to another, or between a physical field and a Pauli-Villars field. So when you go to this kind of a graph, for example, at this vertex you might have an ordinary gluon coming in, but this could be coming out as a Pauli-Villars gluon, and this could come out either Pauli-Villars or not. You have all the possibilities happening here, and that's what this sum represents in the calculation. That sort of thing spoils the gauge invariance under an ordinary gauge transformation. The other thing we want to do is introduce a mass for the Pauli-Villars gluons, in order to be able to remove them from the spectrum, and doing that explicitly would, in general, also spoil the gauge invariance. As far as fixing the problem of the mixing of the fields is concerned, we generalize the gauge transformation to itself involve mixing. This term does not involve just the kth Pauli-Villars particle, but involves this null field, the A without the index; so the gauge transformation itself also mixes. And it mixes with respect to the gauge function as well: this Λ without the index is a sum over all the individual Λ's associated with the individual particles. If you extend the definition of the gauge transformation for the gluon field in this way, and do a similar thing for the quark field — again, this ψ is the null combination — then this Lagrangian is gauge invariant. Not only that, but the combined null field transforms with an abelian gauge transformation, simply because of these constraints. If you then rewrite this Lagrangian, it looks like this. Here we have a term for a massless vector particle and a term for mass-degenerate quarks — I also have to worry about splitting the mass degeneracy of the quarks — plus a three-gluon interaction and the quark-gluon vertex here. All of these interactions involve only the null combinations, and because of that you get all the necessary double subtractions on any loops. You get the subtraction within the propagator, so it's essentially equivalent to using higher derivatives as a regulator in the original Lagrangian. So — yes? Pardon?

I was just about to thank you for bringing that up. The quartic terms disappear. Writing it in the original form, you would expect that, because the field tensor is quadratic in the fields, you would then have quartic interactions, but they actually disappear from this Lagrangian: they cancel out because of those constraints and the way the Lagrangian is structured. The way that physics comes back in is that we've basically taken the local four-point gluon interaction and made it non-local, mediated by a Pauli-Villars gluon.

[Question] Of what mass? The Pauli-Villars gluons — what is their mass?

Well, I have to add terms to the Lagrangian to give them mass, but the mass is something that is tunable, and we would drive it to high values to remove them from the spectrum. It's in the limit that the Pauli-Villars gluon goes to high mass that this diagram, with this line being Pauli-Villars, reduces to the local interaction of four ordinary gluons. So it recovers the quartic interaction of the original QCD Lagrangian in that limit.

[Question] But is your Pauli-Villars mass term going to preserve this new gauge invariance?

No — I have to introduce the Pauli-Villars mass by adding another term to this Lagrangian in a way that does not break the gauge invariance.
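To illustrate the exchange above about recovering the four-gluon vertex — this is a generic auxiliary-field argument, not the paper's exact construction — suppose the heavy exchange couples with a strength that grows with its mass $M$, say $gM$ at each three-point vertex. Then the mediated four-point amplitude reduces to a local contact term in the large-mass limit:

$$ \frac{(gM)^2}{q^2 - M^2 + i\epsilon} \;\xrightarrow{\;M\to\infty\;}\; -\,g^2\,, $$

which is the generic way a non-local interaction mediated by a heavy Pauli-Villars gluon can reassemble the local quartic interaction of the original Lagrangian.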
And what we first thought we might do is use a Higgs mechanism for that. But it turns out that if you try to construct a Higgs type of theory with this φ⁴ interaction built from null fields — everything has to be null in all the interactions — then it's ill-defined as far as finding a symmetry-breaking point for the Higgs field. And not only that: even if we succeeded in doing that, the massless mode in the gluon sector would be one of these null combinations. It would not be the physical gluon that stays massless, and that is unacceptable. But there's an alternative way of introducing an interaction with a scalar that maintains gauge invariance, modulo the fact that it's tangled up with the gauge-fixing term: the gauge-fixing term depends on both the gluon field and the scalar field. Once you add those terms to the Lagrangian, you have a gauge-fixed Lagrangian in a covariant gauge with a gauge parameter, just as I was talking about earlier, and you have gauge invariance up to the point of having fixed the gauge; the only remnant is the gauge parameter. Beyond that, you can add ghost fields and establish a BRST symmetry for the theory. The only nasty thing that happens there, besides the fact that you're adding lots of fields, is that the interaction of the ghost fields has to be non-local in order to satisfy all the ways that things are coupled.

[Question] And do you know, as you renormalize the theory, that the form of that non-local interaction is preserved somehow?

The renormalization of the theory is still an open question. The proposal is that with this structure, if we've maintained a BRST symmetry, that gives us a handle for attacking the theory and establishing that the renormalization works. That is what we've been putting together. The other thing: those of you who've done any light-front calculations, or looked at them, know there's something called the instantaneous fermion interaction, which is another four-point thing. Those also get split into a non-local interaction mediated by a Pauli-Villars particle, and you recover the original in the massive limit.

[Comment] John, if I may mention, I think there are only two or three people in this room who have ever done light-front calculations.

Okay, well, I can ignore that comment then. [Laughter]

[Comment] It's not that we're not curious about it — we're just very ignorant.

Fair enough — and nothing I've said so far is actually specific to the light front in talking about this Lagrangian. But of course our intent is to use it in a light-front calculation. Oh, and also, for the massive gluons, in order to be able to quantize them there is a method due to Stueckelberg, with a generalization of Gupta-Bleuler quantization. You have to have the four polarizations: the physical two and the unphysical two. And you give a negative metric to the scalar polarization — or rather the opposite metric, because if you're dealing with a negative-metric Pauli-Villars field, you have to give its scalar polarization a positive metric. The generalization needed for working in an arbitrary covariant gauge, as opposed to Feynman gauge, is that the mass associated with that scalar polarization has to be different: it becomes gauge dependent. But everything else still goes through in structuring the theory.
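For reference — using the standard Stueckelberg/$R_\xi$ conventions, which may differ in detail from the speaker's — the massive vector propagator in a covariant gauge with gauge parameter $\xi$ is

$$ D_{\mu\nu}(q) \;=\; \frac{-g_{\mu\nu} + (1-\xi)\,q_\mu q_\nu/\!\left(q^2 - \xi m^2\right)}{q^2 - m^2 + i\epsilon}\,, $$

in which the unphysical scalar polarization propagates with mass squared $\xi m^2$ rather than $m^2$. This is the gauge-dependent mass mentioned above; in Feynman gauge ($\xi = 1$) the distinction disappears.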
Now, for numerics: I've done lots of calculations using the Lanczos method — in particular, Yukawa theory in 3+1 dimensions — and we've gone up to about 60 million basis states, with the Pauli-Villars regularization built in. That means we have an indefinite metric, because we have these negative-metric Pauli-Villars particles in there, and so the Hamiltonian matrix is not Hermitian. It is self-adjoint with respect to a transformation that takes into account the fact that there's an indefinite metric.

[Question] Which theory are you considering?

As I said, these were calculations in Yukawa theory in 3+1.

[Question] But that theory is not that simple — it could be trivial, so that it doesn't really exist; you don't expect a well-defined continuum limit.

We do it within the restriction that there's only one fermion, and we're looking at the dressing of that fermion. And you can do the calculation in various ways and compare. In any case, regarding the technology required to do that kind of calculation: there is a method that I developed, a special algorithm related to the basic Lanczos algorithm, that takes into account this indefinite metric, so that we can handle things the way you normally would for an ordinary symmetric Hamiltonian matrix. Now, these calculations are done with a method that goes by the acronym DLCQ, discretized light-cone quantization. We're dealing with coordinates on the light cone and putting the system in a box. That means the momentum component p⁺ = E + p_z, which is conjugate to the coordinate x⁻ = t − z, is restricted — because of the box and periodic boundary conditions — to being a multiple of π/L. And so you discretize the whole problem. The key eigenvalue problem that you're trying to solve takes this form, coming from the condition on the total momentum: P² = M², with P² = P⁺P⁻ − P⊥². The operator P⁻, conjugate to the light-cone time x⁺, is what propagates things forward in time. That's the Hamiltonian you want to diagonalize, and you do so within a basis of eigenstates of P⁺ and P⊥; the eigenvalue of P⁻ then takes this form. It's quite common to multiply through by P⁺ to cancel this off, and to work in a frame where P⊥ is zero, so that part is gone as well. Then it looks much more like what you might have expected: an operator acting on the state gives you M² times the state. So you expand the state in a Fock basis with occupation numbers for the different momentum values of whatever fields you're representing, form the matrix representation associated with this discretization, and diagonalize that matrix in some way. The matrices are typically quite large, so you have to use Lanczos techniques, and if you're dealing with Pauli-Villars fields, you have to use the special algorithm that takes the indefinite metric into account.
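Reconstructed from that description, in standard light-front notation rather than a copy of the board: with periodic boundary conditions in a box $-L < x^- < L$, each constituent momentum is $p^+ = n\pi/L$, and the mass-shell condition $M^2 = P^+P^- - P_\perp^2$ turns the diagonalization of $\mathcal{P}^-$ into

$$ \mathcal{P}^-\,|\psi(P^+,P_\perp)\rangle \;=\; \frac{M^2 + P_\perp^2}{P^+}\,|\psi(P^+,P_\perp)\rangle\,, $$

and, after multiplying through by $P^+$ and choosing a frame with $P_\perp = 0$,

$$ P^+\mathcal{P}^-\,|\psi\rangle \;=\; M^2\,|\psi\rangle\,, $$

which is the matrix eigenvalue problem that gets handed to the Lanczos routine.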
Now, I also wanted to mention how Lanczos works: it gives you a tridiagonal representation — we've already heard this earlier in the week — a small tridiagonal matrix representing the original matrix. You diagonalize that, and the extreme eigenvalues are close to the extreme eigenvalues of the original matrix. What is perhaps less well known is that the intermediate eigenvalues of that Lanczos matrix give you a representation of the density of states of the system. So you can calculate things like correlation functions by inserting a decomposition of the identity built from the output of the Lanczos calculation, rather than having to do a full exact diagonalization of your original matrix. Some of the calculations we've done that way were in what is called supersymmetric DLCQ. There, the idea is to discretize the supercharge and then construct the light-cone Hamiltonian P⁻ from that supercharge, with P⁻ proportional to the square of the supercharge Q⁻. This is not equal to the P⁻ you would get from taking a DLCQ approximation of the P⁻ of the theory directly. So you can do a calculation in a supersymmetric theory where the supersymmetry is exactly preserved — it's not violated by the numerics. If you do it in ordinary DLCQ, you get violations of the supersymmetry associated, basically, with the box size. By doing calculations in that way, we can handle supersymmetric theories. We looked at the correlator of the stress-energy tensor for a couple of theories, one of which has a duality with a supergravity theory and one of which does not. This is in 1+1 dimensions, a supersymmetric theory dimensionally reduced from a higher number of dimensions, which brings in additional fields; you get either a (2,2) or an (8,8) combination. On the left is the (2,2) and on the right is the (8,8).

[Question] Which of these theories has the duality — the (2,2) or the (8,8)?

The (8,8). The important point is that for the (8,8) theory there is a duality that predicts — based on a weak-coupling calculation in supergravity — what the dependence of this correlator should be on r, which is the separation between the two points in the correlator. At very short separations it's supposed to go like 1/r⁴, and that r⁴ behavior has been divided out; so the zero up there on the left corresponds to the short-distance behavior you would naively expect. As you go to larger r, the (8,8) correlator is supposed to go like 1/r⁵, which corresponds to the −1, and our calculation falls apart soon after it reaches −1. If you do the calculation in the (2,2) theory, it does not go to −1; there is no duality there, and it does not have the same kind of behavior at these intermediate values of r. So you can actually do a calculation where you compare strongly coupled super Yang-Mills to the weak-coupling supergravity calculation and extract a match between the expected behaviors of this correlator — all within the supersymmetric DLCQ approach.

[Question] This might be somewhat technical, but some of these supersymmetric theories have moduli spaces. What do you do with that? Usually, in sufficiently low dimensions — in two dimensions — you have to worry about some kind of wave function on the moduli space; you're not actually sitting at a particular vacuum.

Well, remember, we're doing this on the light front, and so the vacuum is trivial. And there are arguments that the zero modes — in this case, for the supersymmetric calculation — are decoupled, that they don't enter into the calculation.
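Returning to the Lanczos points above: here is a small self-contained Python sketch of the plain (Hermitian) Lanczos recursion and the density-of-states use of its interior eigenvalues. It is not the speaker's indefinite-metric variant — that would replace the Euclidean dot products below with metric-weighted ones — and the toy matrix is random rather than a light-front Hamiltonian.

```python
import numpy as np

def lanczos(H, v0, m):
    """Plain Lanczos recursion (no reorthogonalization, for brevity).
    Returns the diagonal (alpha) and off-diagonal (beta) of the m x m
    tridiagonal matrix T.  For an indefinite metric, the dot products
    below would be weighted by the metric signature."""
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    v_prev = np.zeros_like(v0)
    v = v0 / np.linalg.norm(v0)
    b = 0.0
    for j in range(m):
        w = H @ v - b * v_prev          # three-term recursion
        alpha[j] = v @ w
        w -= alpha[j] * v
        if j < m - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            v_prev, v = v, w / b
    return alpha, beta

rng = np.random.default_rng(1)
A = rng.standard_normal((400, 400))
H = (A + A.T) / 2                       # toy symmetric "Hamiltonian"

alpha, beta = lanczos(H, rng.standard_normal(400), 60)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
theta, S = np.linalg.eigh(T)            # Ritz values and Ritz vectors

# Extreme Ritz values approximate the extreme eigenvalues of H:
print("lowest Ritz value:", theta[0], " exact:", np.linalg.eigvalsh(H)[0])

# Interior Ritz values, weighted by the squared first components of the
# Ritz vectors, sample the spectral density of H as seen by the starting
# vector -- the decomposition of the identity used to build correlation
# functions without a full diagonalization.
weights = S[0, :] ** 2
```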
Now, more recently, we've been doing calculations not with DLCQ but with function expansions. Depending on what you're doing, that involves Fock-state wave functions: you expand your state in terms of some set of Fock states, written generically with a label n, but of course you're summing over numbers of particles and momenta and so on, with a wave function associated with each Fock sector. These wave functions you can then imagine expanding in some basis. The reason you might want to do that is that DLCQ forces you to resolve things at a particular scale. What happens on the light front is that p⁺ is always positive: even when p_z goes negative, E is always big enough to keep p⁺ = E + p_z positive. So you're always working at a fixed total P⁺, which defines an integer, traditionally called K, such that when you come over to here, the whole thing is independent of the box size, and the limit you want to consider is K going to infinity; back here, that corresponds to taking the box size to infinity at fixed P⁺. The way the calculations are done, the ratio of an individual constituent's momentum to the total momentum is always controlled by a ratio of integers, which means this K sets the resolution of the calculation. If you need to know what a wave function looks like close to zero or one in this momentum fraction, you've got to take K really large to see that, because DLCQ forces you to use K points in the representation of the function. So if you're doing a calculation where the function varies very rapidly near zero and also near one, and you're dividing the interval into segments of size 1/K, the computer spends a lot of time on the interior, where nothing much is going on. It's more useful to use basis-function expansions, which let you represent things more carefully in the regions that are important and not worry so much about the interior. So we've been looking at basis-function calculations in order to take that into account, as in the sketch below.
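To show how the resolution K controls the longitudinal grid, here is a toy Python sketch — my own construction, for a single scalar species in 1+1 dimensions, ignoring transverse momenta and all other quantum numbers — that enumerates the DLCQ Fock basis (the partitions of K) and the momentum fractions x_i = n_i/K on which each state lives:

```python
def dlcq_fock_states(K, max_particles=None):
    """Enumerate DLCQ Fock states for a single scalar species at
    resolution K: multisets of positive integers n_i with sum n_i = K
    (i.e., the partitions of K).  Constituent i carries longitudinal
    momentum fraction x_i = n_i / K."""
    states = []

    def build(remaining, max_n, partial):
        if remaining == 0:
            states.append(tuple(partial))
            return
        if max_particles is not None and len(partial) == max_particles:
            return
        for n in range(min(remaining, max_n), 0, -1):
            build(remaining - n, n, partial + [n])

    build(K, K, [])
    return states

for K in (4, 8, 16):
    basis = dlcq_fock_states(K)
    fractions = [n / K for n in basis[-1]]   # the K-particle state
    print(f"K={K}: {len(basis)} states, finest fractions {fractions[:4]}")
```

The grid spacing in x is 1/K everywhere, which is the point of the complaint above: resolving endpoint behavior near x = 0 or x = 1 requires large K, while a basis-function expansion can put its resolution where the wave function actually varies.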
That sort of approach has been particularly pursued by James Vary and collaborators at Iowa State; they have an acronym which I always get transposed and have to look up every time — BLFQ, basis light-front quantization. Their scheme is actually a hybrid: they use DLCQ in the longitudinal direction and basis functions in the transverse directions. So the transverse dependence of the wave function is represented by sums over either oscillator states or the light-front holographic QCD states that come from that kind of modeling, and DLCQ — what I've been describing here — is used in the longitudinal direction. They've been doing some very large Lanczos calculations; James has a parallel Lanczos code originally developed for nuclear-physics many-body calculations. Let's see — the other thing I wanted to mention is Fock-space truncation. There are a couple of different ways to try to deal with this, but one of the key problems has to do with this combination of diagrams, which enters into the lowest-order Ward identity for a gauge theory. If your Fock-space cutoff allows only one gauge particle, then this diagram and this one are gone and you're left with only this one. So you have violations of the Ward identity: you lose the cancellations that should take place between the infinities associated with these diagrams, which is another way of saying you've lost the Ward identity. The other aspect that comes into all this is that if you think about a situation like this versus a situation like this: when you're doing a calculation, this self-energy is going to be different from this self-energy, because here you have to take into account the additional spectator in the structure of this contribution, and here there's no spectator. So you get what are called sector-dependent self-energies. Now, one way people have tried to get around this — originally proposed by Ken Wilson — is to have sector-dependent parameters in the Lagrangian: you make the bare mass and the bare coupling depend on which Fock sector you're dealing with, and you tune those in order to cancel out these issues so that they go away. It turns out that when you do that for the coupling, it makes the wave functions ill-defined. That's something of a technicality in the details, but while adjusting the mass is okay — and we've done some calculations that way — adjusting the coupling to make this work leads to all sorts of problems with the normalization of the wave functions, because you're basically converting wave-function renormalization into coupling renormalization, and it becomes quite a mess. Our alternative for this is something called...

[Question] It's clear in any case that we're not supposed to truncate to just one or two gluons in a realistic calculation. But if you truncate not to one or two but to a hundred, is it clear that this problem goes away?

You still have the problem in the top sector, as far as the uncancelled divergences are concerned.

[Question] But those should somehow be taken into account by all the other renormalization procedures. In the limit it should just be an issue of not being able to take the limits properly...

Well, how you're going to define taking that limit is exactly where those renormalization procedures you're talking about get very messy. And of course you can't do it with a hundred; it's got to be some much smaller number, so it really does have an effect. The sector dependence, of course, propagates all the way down, because each time you go to a lower sector you get a more and more complicated self-energy correction that can happen.

[Comment] Just to note that at fixed time you never have to do this trick. In equal-time quantization you do the renormalization once for your Hamiltonian, you use the same Hamiltonian in all sectors, and it works without any extra tricks.

But when you truncate the Fock space, what do you do with the uncancelled divergences? In a truncated Fock space, I mean.

[Comment] In that case, when you are able to take the truncation high enough, you don't see any pathologies that need to be fixed by hand.

If you can take the truncation high enough, yes. We're looking at situations where the calculation is so complicated that you can't get to the regime where you can say the problems have been pushed away.

[Question] Let's take φ⁴, which is a very simple theory. In that case, do you have to do any dirty tricks like that?
Sorry — φ⁴ in 1+1? No, of course not. Of course not.

[Comment] But people do it — some in the community still report results for the lowest truncations.

Yeah, yeah. In the light-front calculations that have been done, these are very small truncations, because in 3+1 you get all your transverse degrees of freedom and everything else, and the calculation gets huge very quickly. So things have been kept at a low level. But of course in 1+1 these issues don't come up.

[Question] Is there value in such calculations? As a starting point, yes, of course, to try to understand what's going on. But if your theory is weakly coupled, it's just perturbation theory; and if it's strongly coupled, how can you hope that from creating two gluons or two photons you'll get anything that resembles the realistic situation? So how is this a starting point? It seems to be very...

Well, I think... I agree, I agree. And what I'm about to describe gets around that. But the general attitude has been — I get into arguments with people about this, and I don't mean with you, I mean with people in the light-front community — the idea has been that there's some intermediate region where you can do calculations that you can make some sense of, just beyond what perturbation theory can do, and so you can see what's going on in the theory to some extent. But of course you can't do a full, really strong-coupling calculation without being able to bring in many fields. We have a method that we call the light-front coupled-cluster method, or LFCC for short; Sophia will talk about its application to φ⁴ in the next talk. The basic idea of this method is to take technology used in many-body calculations, where it's called the coupled-cluster method and where the name makes a little more sense. Basically, you look for a solution written in this form, where this φ is some base state. If you're talking about a proton, it might be your three-quark state. If you're talking about φ⁴ and you're looking at the odd states, this would be the one-constituent state — just the a-dagger on the vacuum. In any case, it's something relatively simple; there might be a wave function associated with it, but it's still relatively simple, and it has the right quantum numbers for what you're trying to calculate. The square root of Z is just there to maintain normalization, so that ψ and φ have the same norm. This operator T is the key to the whole thing. T creates particles — it might create only one, or it might create two — but in any case the exponentiation of it introduces all the higher powers of T, and so this state includes all the Fock states. So we don't do a Fock-state truncation at all. Instead, what we truncate is T: we make approximations to the operator T, keeping just a few simple forms. For QCD, the lowest-order approximation to T would involve three things that each increase particle number: gluon splitting, one gluon going to two; gluon emission by a quark; and a gluon producing a quark-antiquark pair. They take your three-quark state here to a state that has — well, emission would have to happen first, to produce a gluon; then when T acts again, you pick up two gluons, and so on.
So this would generate all the Fock states of QCD.

[Question] So is T bilinear in creation and annihilation operators?

In these cases it involves one annihilation operator and two creation operators, along with a function that says how the momentum is divided between them — an unknown function. So you'd have three different functions here, for the three different types of vertices that appear in T, which is a sum of these three terms. And then you get all the possible combinations when you take powers of T acting on the base state. So you have no Fock-space truncation, and there are three functions to be determined in order to find this state — plus a fourth function, the wave function of the three-quark state sitting here. So you feed this into...

[Question] And it's exact as long as you keep all possible terms in T — an infinite number of functions. But you're assuming that the higher Fock states in your exact wave function are correlated with the low-occupation-number states in a particular way...

When you make the truncation, yes, those connections are built in.

[Question] Which in practice you do. So once you truncate to, let's say, just this diagram, is there a physics reason to expect this to be a good approximation in a strongly coupled situation?

Oh, well, whether or not it's a good approximation for QCD is an open question. We've checked it in some simpler theories and it looks reasonable. What you then have, from your original eigenvalue problem, is an effective interaction — an effective Hamiltonian, which is a transformation of the original Hamiltonian by this operator T. If you project this onto the valence state, that gives you an equation that determines the wave function buried inside this state. And if you make the projection onto all the higher Fock states, this right-hand side becomes zero, and you get the equations for the functions in T. These are nonlinear equations, and if you linearize them — expand them out — that reproduces a subset of diagrams to all orders. So this process does a partial resummation of perturbation theory to all orders, while keeping all the Fock states. The other thing that happens is that the self-energy contributions are the same in all sectors: they involve one of the functions from the T operator on at least one of the vertices, but they become independent of any spectators going by at the same time.
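Collecting the pieces of the method in standard LFCC notation — this is a reconstruction of the board equations, following the description above — the ansatz and effective Hamiltonian are

$$ |\psi\rangle \;=\; \sqrt{Z}\,e^{T}|\phi\rangle\,, \qquad \overline{\mathcal{P}^-} \;\equiv\; e^{-T}\,\mathcal{P}^-\,e^{T} \;=\; \mathcal{P}^- + \left[\mathcal{P}^-,T\right] + \tfrac{1}{2}\big[\left[\mathcal{P}^-,T\right],T\big] + \cdots\,, $$

and, with $P_v$ the projector onto the valence sector, the eigenvalue problem splits into

$$ P_v\,\overline{\mathcal{P}^-}\,|\phi\rangle \;=\; \frac{M^2 + P_\perp^2}{P^+}\,|\phi\rangle\,, \qquad (1 - P_v)\,\overline{\mathcal{P}^-}\,|\phi\rangle \;=\; 0\,. $$

The first equation determines the valence wave function; the second, projected onto as many Fock sectors as the truncation of $T$ supports, determines the functions inside $T$ — these are the nonlinear equations mentioned above.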
[Question] Can this be thought of as some coherent-state basis for the Fock space?

It's a generalization of a coherent state, basically. Because of the structure of T — one or more annihilation operators, and at least one more creation operator than that — it sort of looks like a coherent state in some sense, but it's not quite the same thing; it's a generalization of it. In looking for a simpler theory in which to test this idea, we happened to pick φ⁴, and 1+1 dimensions is one of the places to do that. In doing so, we found ourselves looking at issues associated with the critical coupling and all that, and came across the very nice work that Slava and collaborators have done, and that some of the rest of you have now also done. That's really the topic of the next talk, but that's how we arrived at it — bringing me back to φ⁴ again, looking at it from a different perspective. The last thing I wanted to comment on — I've already said a few things — is the vacuum and zero modes. Most recently, we've been looking at a parameterization due to Kent Hornbostel, where you define a set of coordinates depending on a parameter C — not the speed of light, just a parameter; it's right here. C going to 1 corresponds to an equal-time calculation, and C going to 0 corresponds to a light-front calculation. So we can set up the Hamiltonian problem in these coordinates and look at what happens: do the calculation in equal time, and do the calculation in the limit as C goes to 0, which approaches the light front, in order to understand how things map over from one calculation to the other. I don't have time to get into the details of how that works, but here is a calculation.

[Question] Can you give the reference we should look at?

Yes — this is Kent Hornbostel, Phys. Rev. D 45, 3781 (1992), and it really goes back a long way; others have worked with this as well. That's not a complete reference list, but it's one of the earliest appearances of this sort of thing. Now, this isn't φ⁴; this is a free scalar that's just been shifted by a constant, and this is what the spectrum looks like. There is one state — a coherent state — which is actually an exact eigenstate of the Hamiltonian. Rescaled by some factors, it gives you this −1, which is independent of C. The equal-time calculation is way over here, the light-front calculation way over here, and in between it's C-independent. The state itself reverts to the trivial vacuum in the limit as C goes to 0, but everywhere in between it's a coherent state: basically the unitary transformation that shifts the field back to where it belongs. As you approach the light front, all the excited states in the calculation blow up, going like 1 over the square root of C, because zero modes make a contribution to the light-front energy that goes like 1/√C. So they all blow up except for this one; you retain that vacuum. This sort of example made us think: well, maybe a calculation in φ⁴ can show what the vacuum is supposed to look like when you get over to the light front. But it doesn't quite work. Here is a calculation at C = 1 that is roughly comparable to the calculation that Slava has done, with the same sorts of cutoffs; the variation here is in how many modes you include in the calculation, all the way up to having 20 constituents. This is φ⁴. So this is the vacuum in the even sector, and you can form the subtracted spectrum — this is again at C = 1. If you look at different C, you get the same sort of plot for the difference as C changes from 1 toward 0 — as C decreases, the curve moves to the left and up — but you still get the degeneracy between the lowest even and odd states happening at approximately the same place. So it's qualitatively consistent with the equal-time result for the critical coupling. The problem is what's actually happening to the lowest state as a function of C at particular choices of the coupling: C = 0 is way off at infinity to the right of this plot, and these curves are all going down — they're all headed to minus infinity.
And one can see why, in the calculation, just by looking at a simple vacuum bubble that contributes to this. If you calculate that bubble in these coordinates, it represents an energy shift that goes like 1/C^(3/2). So as C goes to 0, the vacuum bubbles drive that even vacuum state — the equal-time vacuum — down to minus infinity. So you can't take this calculation and graft it onto a light-front calculation to understand the vacuum. Instead, you have to do the calculation entirely within these C-dependent coordinates and look only at differences, just as we do here: this is looking at a difference, and everything is fine. Even though, as C goes to 0, things blow up for the individual states, the difference remains stable. So we'd have to do all the calculations within this coordinate system and take the limit of those calculations, rather than doing a vacuum calculation in these coordinates and adding it to a light-front calculation in the other. And so we're still thinking about the vacuum and how to incorporate — pardon?

[Question] Is this published?

This is unpublished — in fact, the toner is still wet on these plots; this was done just within the last couple of weeks. And we're already looking at other possibilities for trying to understand the vacuum and zero modes. The trivial vacuum of the light front has always been talked about as an advantage of light-front calculations: you don't have to bother calculating a vacuum. Well, as we all know from this workshop, the vacuum is important. To understand what's going on with the critical coupling in φ⁴, you need to understand the vacuum, and so doing that calculation on the light front is non-trivial. The next talk will get into some of those issues — what you can do on the light front before you truly understand what's going on with the vacuum in all its respects. So let me conclude. We've tried to indicate how we've looked at these different aspects that enter into doing a calculation that would let us handle QCD, at least within some range of the physics. We could, for example, look at relatively simple calculations for a heavy-quark system, or for glueball systems within a quenched version of the theory, so that you don't have any light quarks running around. James Vary has been looking at doing that type of heavy-quark calculation — not within our formulation, but within their own way of approaching things as far as the regularization and the discretization are concerned. So there's certainly some work being done already in that direction on QCD. Thank you for your attention, and thanks for having us here.

[Moderator] We've got time for questions.

[Question] In the regularization that you described at the beginning — it was hard for me to follow everything exactly — have you done perturbative checks of it?
Within the context of QED, yes. We haven't done the full non-abelian case, no. We've put things together — worked out how the BRST symmetry has to work, how the ghost fields have to be there — but we've not done a calculation in perturbation theory, which would be a nice thing to do, to check how this works.

[Question] In this slide, you're calculating the ground-state energy, and in the limit C goes to zero — the light-cone limit — what's the reason it's not becoming trivial? You're doing a sort of light-cone calculation, but you're not getting zero vacuum energy. In the naive light-front formulation there are no vacuum bubbles; they're just not there on the light front. So how should I think about how your C-goes-to-zero formulation differs from the standard light-front calculation, where this would be zero?

As long as C is not zero, you have ordinary canonical quantization rather than light-front quantization, and everything goes through in the normal way: you've got tadpole contributions, you've got vacuum bubbles — all that stuff is there. It's just that, instead of leaving some finite residue when you arrive at the light front, it's infinite, and so it has to be thrown away when you take this kind of limit. Or, to put it a more positive way: you could try to simulate a light-front calculation by doing the calculation at finite C, taking the limit as C goes to zero, and considering only energy differences.

[Question] I think this check is very instructive, and it's very interesting to see all the details as they come up. But just to understand how this computation is organized: you really have to compute each one of these points individually? There's no simple rule that tells you, once you've computed something at C = 1 — the left side of your plot — how to predict all the other points automatically? You really have to do an independent computation, because the truncation somehow breaks Lorentz invariance — is that the right interpretation?
Well, in fact, the way the calculation is done here is that the states that are kept at C = 1 are kept all the way through; we don't use an energy cutoff that varies with C. The determination of the Fock basis is made at C = 1, with the cutoff in energy imposed in the usual way.

[Question] Of course, if we were to do the exact calculation, then all these points would be predicted by Lorentz invariance. But in order for this to be an independent check, it really has to be done in the presence of a cutoff, which breaks this correspondence — otherwise I don't see how it's...

Well, you have to remember that the energies associated with zero modes blow up like 1 over the square root of C, and modes associated with negative momentum blow up like 1/C. Because of that, if you imposed an energy cutoff, you'd be constantly removing states as you go toward C = 0, and eventually you'd reach a point, at some finite value of C, where there's nothing left but the trivial vacuum. That might be an answer to your question: if you impose an energy cutoff that remains applied to the states as computed at a particular value of C, then eventually all the states are removed and there's nothing left but the trivial vacuum.