Okay, well, it's a great pleasure to be here. Thank you for the invitation. In discussions with Pavel I found we have an incredible amount of commonality, lots to talk about, and I hope we can continue the conversation and do some nice work together. So here's my title: Particle Polarizabilities and PDFs from Lattice QCD. What do particle polarizabilities and parton distribution functions have to do with one another? Well, let's find out. Here's an introductory picture. Let's see, how do I work this laser pointer? Maybe this one. There we go. So this is the main technique, or device, or technology, for investigating what's going on at a sub-nuclear level for mesons and baryons, including gluons, and now people are extending it to nuclei. Well, how good is lattice QCD? Here's a picture I stole from Andreas Kronfeld; it's a spectrum calculation, everything from the pion up to the Omega baryon. Not all of these are outputs; some are inputs, because you need to know the quark masses and you need to set the gluonic scale. Another thing you notice is the error bars. How can you have error bars on a theoretical calculation? Because these are all Monte Carlo calculations. How does it do with alpha_s? It fell off the screen down here, but anyway, this is the strong coupling constant alpha_s at the Z boson mass. There are seven different techniques here; here's the lattice, here's the overall fit, and it's working pretty well. Okay, let's start with some ancient history. Here's a paper I wrote in 1992 in the main Lattice conference proceedings, which in those days appeared on paper in the Nuclear Physics proceedings series, where I was doing what are called charge overlap measurements. You have particles on either side, in time, of your lattice, and in between you have charge density operators. By moving these around, you can measure things.
What I found is that when the time separation of the currents is large, you can make contact with the elastic limit and measure form factors. But I also realized that before you get to the elastic limit, other things are happening. So let's go over this in some detail. Here I'm showing some of the form factors I got from this method, which is also called a four-point function in lattice QCD. Why is it called a four-point function? Well, you have your two protons or pions on either side, and then you have two charge-density operators in between: that adds up to four. So here I'm showing that you can get phenomenologically relevant quantities for form factors; vector dominance works. But now let's look at structure functions. What do these four-point functions have to do with structure functions? Here's a well-known quantity, and it's also a four-point function: two charge densities, one at the origin, the other at position x, with a four-dimensional Fourier transform. There are going to be two different approaches to this. First, I'll stay in Minkowski space and then try to put it on a lattice. Here are some of the discretizations you have to do: instead of an integral over x you now have a sum over positions. But what you can also do in the lattice formalism is insert a complete set of states. When you do that, you find you can do the integral, and you get a standard expression, still in Minkowski space, with alpha- and beta-type coefficients associated with the currents. The structure functions are contained in there. So how do you get at that with a lattice calculation? Let's take something similar, but now in Euclidean space, because lattice QCD works in Euclidean space.
Euclidean space doesn't make any distinction between space and time, and your correlation functions are no longer oscillatory; they become real exponentials. Notice this is a time-ordered product of currents. Let's say we're working with a charged pion, and let's say we do a Fourier transform of this thing. Notice I'm summing over both X and r, where r is a relative position, and I Fourier transform in that. This is what happens: you get a sum of exponentials times a product of matrix elements. The connection between Equations 9 and 7 is an inverse Laplace transform. So on the right-hand side you have lattice data, measured in Euclidean time, and over here you have what you want: components of structure functions. Notice the i — this involves the interpretation that you're extracting the imaginary part of this function. But when you extract it, you're not doing quite what you'd expect from an experiment. In an experiment you'd expect structure functions as a function of four-momentum squared; here it's the three-momentum squared that's fixed on the lattice. In the paper I talk about that, about ways of fitting it, and about kinematic constraints. So here's the point: you can either start with the W's and do a Laplace transform, or start with the Q's and do an inverse Laplace transform. You often see a silly statement in the literature that you don't have Euclidean data at imaginary times — well, you simply do analytic continuation; it's a simple procedure. It actually turns out to be easier to make ansätze for the structure functions, do the Laplace transform, and see how well that fits the time data. Okay, let's try advancing the slide again. One more time. I probably did something wrong — I'm like Pauli when I get near a laboratory.
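The fitting strategy just described — make an ansatz for the structure function, Laplace-transform it to Euclidean time, and fit the time data — can be sketched in a few lines. Everything here (the Gaussian-bump model, the grids, the numbers) is illustrative, not from the talk:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative only: a model spectral function W(nu) (a single Gaussian bump)
# is Laplace-transformed to Euclidean time, G(t) = integral dnu W(nu) e^{-nu t},
# and its parameters are then fit to synthetic "lattice" time data.

nu = np.linspace(0.0, 10.0, 2000)        # energy grid (lattice units)
dnu = nu[1] - nu[0]

def G_model(t, amp, nu0, width):
    """Euclidean correlator from a Gaussian-bump ansatz for W(nu)."""
    W = amp * np.exp(-0.5 * ((nu - nu0) / width) ** 2)
    return np.array([np.sum(W * np.exp(-nu * ti)) * dnu for ti in t])

t_data = np.arange(1, 16, dtype=float)   # 15 "time slices"
rng = np.random.default_rng(0)
G_data = G_model(t_data, 1.0, 2.5, 0.8) * (1.0 + 0.01 * rng.standard_normal(t_data.size))

popt, _ = curve_fit(G_model, t_data, G_data, p0=[0.5, 2.0, 1.0])
print(popt)  # recovered (amp, nu0, width)
```

In the real problem the ansatz would also have to respect the kinematic constraints mentioned above, and the data would carry correlated errors.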
Things don't work out. Ah, there we go — you've got the magic touch. Okay. That machine is very demanding. I already made this point: the lattice is at fixed spatial momentum transfer, and kinematic constraints must be respected. The data should not be well fit by a small sum of exponentials — there's a quasi-continuum assumption that comes into play here. Whatever time behavior you get, if it is well fit by two or three exponentials, that goes against your assumptions. And you need large Q but small a: if you take the lattice combination Q times a, that quantity should be small compared to one. If it's not, you have problems with the lattice dispersion relation — I'll make a point about that later. But you need large Q because — oops, I stole my own thunder — because you're supposedly doing parton distribution functions. What that suggests is that you need what's called an anisotropic lattice, where the spacing in the time direction is very small. All right, so I got all that in there; everything was set up. I was expecting people to run off and do this. Google citations: nine. But if you wait long enough — 28 years later — here's the calculation. It was done in 2020 by the Kentucky group, using protons rather than the pions I was talking about. Basically you want to extract things as reliably as possible, so let's see how they did that. Here are the diagrams that come into play, as far as topology is concerned: connected pieces and disconnected pieces. (Is there a clock? 35 minutes left? Okay, I'm doing pretty dang good.) So: connected pieces and disconnected pieces. This is a same-quark piece, this is a different-quark piece, and this is called a Z-graph.
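The Qa ≪ 1 requirement can be made concrete with a toy check of how far the lattice momentum falls below the continuum one (spacing and momenta here are arbitrary illustrative numbers):

```python
import numpy as np

# Toy check of the Qa << 1 requirement: the momentum that appears in lattice
# dispersion relations behaves like (2/a) sin(a q / 2), which falls below the
# continuum q as a*q grows. Spacing and momenta are illustrative numbers.

a = 0.1                                   # lattice spacing (arbitrary units)
q = np.linspace(0.1, 30.0, 100)           # continuum momenta
q_lat = (2.0 / a) * np.sin(a * q / 2.0)   # lattice momentum

rel_err = np.abs(q_lat - q) / q
print(rel_err[0], rel_err[-1])            # tiny at a*q << 1, large at a*q ~ 3
```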
These are disconnected diagrams. This one is a special type of disconnected diagram where the two loops actually merge, but there's still a charge density or current density here and another one over there — I'll come back to it later. The Kentucky group only does diagram (a). If you're going to try to connect to parton distribution functions, you want something that's diagonal in flavor: up quarks with up quarks, down quarks with down quarks, that sort of thing. Okay, so let's look at their results. This is the correlation function they found for the proton up quark. These are unpolarized — you can also do polarized distributions — summed over final and averaged over initial spin, and this is lattice time along the bottom. So things are falling, you've got your error bars, but then it turns up. That's a contamination. What you can show, if you insert a complete set of states, is that the physical masses are always positive, so the correlation function as a function of lattice time is always falling — I shouldn't say exponentially falling, but always falling. So there's some data we can look at, but the rising data has to be thrown out, and I think the Kentucky group realized it was contamination. Same story for down: notice down has slightly smaller positive values, and here it is as a function of time, with the contamination again. All right, so that's what the data looks like. They do extract some structure functions; these two sets of results are for up and down. Here is the F1 structure function for u and d as a function of the energy-transfer parameter nu, using a method they call maximum entropy. They also use another technique called Bayesian reconstruction, and you don't get very good agreement between the two.
Okay, here's the important point: one needs not just small time spacings between lattice points, one also needs long time separations of the currents in order to fully characterize these functions, which should not be dominated by a sum of exponentials — they should also have algebraic pieces. You need long time separations. They had 20 points, with contamination in at least the last five; you need more like 120 points. So I would say that this so-called hadronic tensor approach really hasn't been tried seriously yet. To hammer home that point, let's look at their data. This is for the up quark, and all I did was fit these nine data points with nine exponentials, with the constraint that the coefficient of each exponential be positive, so you can interpret them as physical states. With that constraint it's not a linear system you're solving; it's a minimum-residual system. But the fit understands the data well enough to put some of these coefficients at zero. What I'm showing should really be Dirac deltas — they don't have any width; I just gave them some width so you can see them. How well does it fit the data? Wonderfully. Okay, so here is what they did for F1 — this is their maximum entropy fit, and here's the energy axis. Notice they did not satisfy the kinematical constraints. Over here is the elastic limit, x equal to one, where it has to go to zero, or to some small value determined by squares of form factors. So their support is falling outside the kinematical region. And here's the highest value of nu that's required — I think it's 3.56 GeV for them. You don't have to do a sophisticated fit. Here is just a simple dispersion model with a single resonance.
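The positive-coefficient exponential fit described here amounts to a non-negative least-squares problem. A sketch on synthetic data (the trial-energy grid and fake correlator are mine, not the Kentucky numbers):

```python
import numpy as np
from scipy.optimize import nnls

# Sketch of the fit described above: represent the correlator as a sum of
# exponentials with coefficients constrained to be non-negative (so each term
# can be read as a physical state), solved as a non-negative least-squares
# problem rather than a plain linear system. Data and energies are synthetic.

t = np.arange(1, 10, dtype=float)                      # 9 "time slices"
G = 2.0 * np.exp(-0.5 * t) + 0.7 * np.exp(-1.2 * t)    # fake correlator

E_trial = np.linspace(0.1, 3.0, 9)                     # 9 trial energies
K = np.exp(-np.outer(t, E_trial))                      # K[i, j] = exp(-E_j * t_i)

coeffs, residual = nnls(K, G)                          # all coeffs >= 0 by construction
print(coeffs, residual)
```

Coefficients the solver doesn't need come out exactly zero, which is the behavior described in the talk.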
This is the sort of resonance assumption you'd make in a graduate course, where you have a dielectric constant and a single atomic level. There's nothing sophisticated about it, but it does as well as they do, and I'm at zero down here. You do not have to be at zero at high nu, which corresponds to low x. How well does it fit? Pretty well. Can I compare it to Kentucky's fit? No, because they did not give that data. Okay, if you go whole hog and form F2 = 2xF1, this is what you get, as a function of Feynman x. But remember, this is not a traditional PDF, because at each value of x there's an associated Q squared: your lowest Q squared is down here, your higher Q squareds are up here. Okay, so two points. To extract these functions it's important to have as long a time extent on the lattice as possible — I've already made that point. But if you do that, what's going to happen? Traditionally, with very large correlations in lattice data, you'd expect your error bars to go crazy. They will not, however, if you use the conserved current: the conserved current on the lattice is exactly conserved, up to numerical precision. It will constrain your error bars and make such a calculation possible. What else is needed? You really need to go to high energies to connect with experimental results. How is that done when the lattice dispersion relation is systematically wrong at high energies? With Wilson-like fermions — clover-type fermions — you have a Wilson term, or a generalization of one, and it wrecks the energy-momentum dispersion at high momenta: you do not get E squared equals p squared plus m squared. So when you try to do this and really connect to experimental data, you're going to run into a problem. Ask me later — I can tell you how to solve it. All right, here's the real stuff. Here's the CTEQ comparison.
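A minimal version of that single-resonance dispersion model is the textbook Lorentz-oscillator response; the parameter values below are illustrative only:

```python
import numpy as np

# The "single atomic level" dielectric model: imaginary part of a Lorentz
# oscillator, Im chi(nu) ~ Gamma*nu / ((nu0^2 - nu^2)^2 + (Gamma*nu)^2).
# Parameter values are illustrative.

def im_chi(nu, nu0=1.5, gamma=0.4):
    return gamma * nu / ((nu0**2 - nu**2) ** 2 + (gamma * nu) ** 2)

nu = np.linspace(0.0, 6.0, 601)
f = im_chi(nu)
print(nu[np.argmax(f)])   # resonance peak sits at nu0
```

Note the model vanishes at the low-nu end of its support, which is how a fit of this type can respect the elastic-limit constraint discussed above.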
This is at low momentum transfers: here Q — not Q squared — is 2 GeV, and here it is 100. You do see peaking, and higher peaking at lower momentum transfer. But this is nothing like that — this is much too high. At least it's wrong in the right way: it should be too high, because we're too low in energy. Okay, one more time. So anyway, this is the real stuff, and I really enjoyed talking to Pavel about it. Now, here is a really good review that just came out last month. I suggest it as the place to start for people trying to understand how the lattice can calculate these things. And there's not just one method — at least five methods have been published, and basically all of them are trying to get this object: a particle, two charge densities, another particle. You can always take one of the densities to be at the origin, with the other at an arbitrary position z. Here's the hadronic tensor approach. There's a closely related approach she calls the Compton amplitude approach, or "OPE without OPE": basically you get a bunch of moments and do analytic continuation. It's a beautiful method — if you get enough moments of a function, you can invert them and find the function; that's some of the thinking that goes into it. Then there are current-current correlators, the pseudo-PDF method, and the method these authors are pushing, the large momentum expansion, also called LaMET, for large momentum effective theory. They put their external particles at as large a momentum as possible, and the matrix element is diagonal in that large momentum: P-infinity here, P-infinity there. These W's are Wilson lines. Basically these other techniques replace the lattice current insertions with perturbative objects that they can sum over to get moments, and then invert to get functions.
That's an oversimplification, but it's one way of thinking about it. Here's an important point: the extraction of such quantities will depend on the agreement of these various methods, and the hadronic tensor method still has not received enough attention. It's not going to be the case that LaMET works, the others don't, and everybody depends on that. It's agreement between different methods that will give you confidence you're really extracting something experimentally viable, so all of these methods are going to be important. Okay, so I said PDFs and polarizabilities — actually I said polarizabilities and PDFs in the title and reversed it in the talk — but now let's move on to particle polarizabilities. What do they have to do with PDFs? Well, here's a paper I wrote with my colleague Frank Lee last year, "Towards charged-pion polarizabilities from four-point functions," so I'm back on four-point functions again. Actually, I put one of my points on the wrong slide, so I'm going to go back up and pick it up — I know this is bad pedagogy. [At this point the unplugged laptop loses power and the projector connection drops; several minutes of troubleshooting follow.]
[After plugging the laptop in and reconnecting to Zoom:] Okay, are we actually back? Yes. Hello everyone. (Hey, Shan, one of my graduate students — shout out.) So I was trying to go back and pick up a point that I'd put in the wrong place. Here it is. This is a point I was going to make about the polarizabilities. If you have a charged particle in an electric field, what does it do? It accelerates, and that makes your correlation functions very hard to handle — though it can be done. Likewise, a charged particle in a magnetic field has Landau levels. Both of these make measuring polarizabilities difficult, because you're looking for effects that go like E squared or B squared, and both of those phenomena will either add to that signal or mess it up. But you can do it: these are called external field methods, and I've done such calculations. What we're trying to do instead is the most fundamental calculation possible. We don't want external fields; we want to go to the effective field theory and measure its coefficients using four-point functions.
So that's what's going on. Now let's go through this material. Here's the CTEQ comparison, here's the parton stuff, and here's my point about agreement of methods — now applied to polarizabilities. So how do you do that? It's actually the same diagrams as for PDFs, structure functions, whatever — only here we're working with the pion. It's easier to do this for mesons than for baryons, but these are the same types of diagrams. Here are my collaborators. These three are the connected pieces: connected insertion, different flavor; connected insertion, same flavor; and the connected Z-graph. And these are the disconnected pieces. They're important — field theory says everything that is not forbidden happens, and so these things happen; there are sea quarks in there. They're not really disconnected: they're all connected by gluon lines, but as far as the fermion lines go, they're disconnected. I'll come back to disconnected pieces in a bit. Okay, so how do you do this? Remember how I approached the measurement of structure functions: start in Minkowski space and set things up, then go to Euclidean space and find the relationship between the two — what you have to do in your little lattice laboratory to measure things originally defined in Minkowski space. So this is the Compton tensor, the good old time-ordered product of currents. We're using pions here, and the diagram looks like this. We always use a laboratory frame we call the zero-momentum Breit frame: we choose our pions to be at zero momentum, and then we basically kick and anti-kick with the two currents. The kinematics are down there. All of this is in traditional Minkowski space, but you have to simulate it.
So you form a normalized four-point function with reference to times and spatial positions. This is a hadron interpolating field, this is another hadron interpolating field — not just a single quark field — and here are our two currents. Notice the normal ordering rather than time ordering; this is what's needed to make the connection with that Minkowski-space object. So I have times t0, t1, t2, t3. If you want the same-flavor piece, it looks like this, and this is what you'd expect to be the leading diagram at leading twist. Okay: Euclidean spacetime, zero-momentum Breit frame, then a Fourier transform on both x1 and x2, associated with the two currents, inserting q on one and minus q on the other. The large-time behavior looks like this, and you can pull off the coefficients. The electric and magnetic polarizabilities are defined within this Compton tensor, and you can pull off these coefficients and measure them — that's what you have to do on the lattice. Here, this is the electric polarizability of the charged pion, and this is the magnetic polarizability of the charged pion. In each case you get a relative charge-density-squared term, either positive or negative, and then you have to do a time integral, which becomes a discrete sum on your lattice. This Q_00 is a correlation of two charge densities — the zero-zero refers to the zero components of the tensor. In these expressions, q1 squared is the lowest non-zero spatial momentum on the lattice: what you're trying to do is simulate a numerical derivative with respect to momentum, evaluated at zero momentum, and the best way to do that is to form a quantity at non-zero but lowest possible momentum.
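The numerical-derivative trick — differencing against the smallest non-zero lattice momentum q1 = 2π/(aL) — looks like this on a toy function (F is a hypothetical stand-in, not the actual lattice correlator):

```python
import numpy as np

# Finite-difference stand-in for d/d(q^2) at q^2 = 0, using the smallest
# non-zero lattice momentum q1 = 2*pi/(a*L). F below is a hypothetical smooth
# function of q^2, not the actual lattice quantity.

L = 24                        # spatial sites (illustrative)
a = 1.0                       # lattice spacing in lattice units
q1 = 2.0 * np.pi / (a * L)    # smallest non-zero momentum

def F(q2):
    return 1.0 / (1.0 + q2)   # a dipole-like toy form with F'(0) = -1

dF_dq2_at_zero = (F(q1**2) - F(0.0)) / q1**2
print(dF_dq2_at_zero)         # close to the exact F'(0) = -1
```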
A similar story holds for the magnetic polarizability, except that the one-one component means you're using currents rather than charge densities — say in the one direction — and you have to choose your momentum in another direction, perpendicular to that. For the proton it looks like this: here are the squares of our charge densities. These can be measured in four-point functions as well, so all of this is available in the formalism. There are also references to the anomalous magnetic part — I'm sorry, the anomalous part of the magnetic moment — and you have the same sort of structure, but now for a proton. These are the pieces that are actually measured and simulated on the lattice, the things that allow you to extract these quantities, and we're in the process of doing that. Okay, like I said, the hard part of such calculations is the disconnected parts, and we have techniques — we specialize in these things at Baylor. What is a disconnected part? Here's a proton going its merry way. If you're using a dynamical algorithm — hybrid Monte Carlo — then sea-quark effects are incorporated automatically, but sometimes you have external currents that need to be evaluated separately, like in those diagrams I showed you. Then you have to introduce what are called noise methods to evaluate them — special techniques for measuring such quantities on the lattice. So here's a bunch of noise vectors whose average is zero. You can often choose your noise from Z4, so the entries are just 1, minus 1, i, and minus i. The noises are diagonal in expectation, and there's a finite number of them — you cannot go to infinity, but you can simulate the limit. Any inverse matrix element of the quark matrix can be evaluated that way using the noise method. That's the good news. The bad news is that you usually also get huge error bars.
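A minimal Z4 stochastic estimator for the trace of an inverse looks like this; the dense solve stands in for the lattice's iterative inverter, and the matrix is a toy:

```python
import numpy as np

# Minimal Z4 noise estimator for tr(M^{-1}): E[eta_i* eta_j] = delta_ij for
# Z4 noise, so eta^dagger M^{-1} eta estimates the trace. A dense solve
# stands in for the lattice's iterative inverter.

def z4_noise(n, rng):
    return rng.choice(np.array([1, -1, 1j, -1j]), size=n)

def trace_inv_estimate(M, n_noise, seed=0):
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    acc = 0.0 + 0.0j
    for _ in range(n_noise):
        eta = z4_noise(n, rng)
        x = np.linalg.solve(M, eta)      # M x = eta
        acc += np.vdot(eta, x)           # eta^dagger M^{-1} eta
    return acc / n_noise

# For a diagonal M a single noise vector is already exact: the stochastic
# error comes entirely from off-diagonal elements of M^{-1}.
M = np.diag(np.arange(1.0, 6.0))
print(trace_inv_estimate(M, 1))          # equals 1 + 1/2 + ... + 1/5 exactly
```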
So you need ways of suppressing not the Monte Carlo error bars but the noise error bars. (I was fumbling this presentation even when practicing — so practicing doesn't get you anywhere.) It turns out the errors come from the off-diagonal elements of your matrix: if everything were diagonal, you'd have zero error. So you need a method that subtracts off-diagonal elements of your quark inverse; do that and you'll have smaller error bars. You want your M-inverse-tilde to be almost like M-inverse, and you can invent such methods. One thing you'd hope is that although your M-inverse-tilde has off-diagonal parts, it does not have diagonal parts — why? Because then you'd affect the signal. But even that is okay: if you subtract off diagonal pieces, you just add them back in, and then you have an unbiased method. One way to do this is with deflation methods. You can construct any matrix from its eigenvalues and eigenvectors, and it turns out that almost nothing in lattice QCD is determined by the low eigenvalues — except one thing: error bars. Error bars are almost completely controlled by the low eigenvalues and eigenvectors. But something important happens: the quark matrix is non-Hermitian, and if you try to remove the eigenvalues and eigenvectors of that non-Hermitian object directly, you'll find your error bars don't decrease — they increase. There's a well-known phenomenon — I'm going over time, that's not good — that numerical linear algebra people know about, called the highly non-normal problem. If you're working with a Hermitian system, all your eigenvectors are perpendicular to one another.
But with a non-Hermitian system that's not true, and often you have many, many eigenvectors pointing in nearly the same direction; removing one of them does not remove the influence of that direction, and therefore your error bars can increase rather than decrease. However, if you Hermitize your matrix — which for Wilson-like fermions just means multiplying by gamma-five — and deal with the low eigenvalues and eigenvectors of that Hermitian system, then it does reduce your noise error bars. In addition, you can use solvers like GMRES-DR, where the DR stands for deflated and restarted — techniques that my Baylor colleague Ron Morgan originated. GMRES-DR is for general, non-Hermitian systems; MINRES-DR is for Hermitian systems. These methods also speed up your matrix inverses. Here is a plot of what's called the deflation knee: all the low eigenvalues and eigenvectors of the system have been deflated out, and you get exponential convergence after that. Once you've found those low eigenpairs, you just reuse them — you actually over-converge on the first right-hand side, which lets you amortize that work over many right-hand sides and speed up your calculation. The nice thing about deflation is you get a double whammy: it gives eigenvalues and eigenvectors that are useful for reducing error bars, and it speeds up the solves themselves. Here are some of the techniques we've investigated for reducing noise error bars: perturbative subtraction, eigenvalue subtraction, Hermitian-forced eigenvalue subtraction, polynomial subtraction, and combined methods where we pair perturbative subtraction with our Hermitian-forced variant. "Hermitian-forced" just refers to deflation of the Hermitian system applied to the non-Hermitian system.
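Why Hermitian deflation tames the noise can be seen directly: for Z4 noise the per-sample variance of the trace estimator is the off-diagonal weight Σ_{i≠j}|A_ij|², and the low modes dominate that sum. A toy construction with a hand-picked spectrum (nothing here is lattice data):

```python
import numpy as np

# Toy check: build a Hermitian H (think gamma5 * M) with a few tiny
# eigenvalues, then compare the off-diagonal weight of H^{-1} -- which sets
# the Z4-noise variance per sample -- before and after deflating the k lowest
# eigenpairs. The deflated trace is added back exactly, keeping things unbiased.

rng = np.random.default_rng(1)
n, k = 40, 4
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))       # random orthogonal basis
w = np.concatenate([[0.01, 0.02, 0.04, 0.08],
                    np.linspace(1.0, 2.0, n - 4)])     # a few tiny eigenvalues
H = Q @ np.diag(w) @ Q.T

wl, V = np.linalg.eigh(H)                              # ascending eigenvalues
Hinv = V @ np.diag(1.0 / wl) @ V.T
low = V[:, :k] @ np.diag(1.0 / wl[:k]) @ V[:, :k].T    # low-mode part, known exactly

def offdiag_weight(X):
    off = X - np.diag(np.diag(X))
    return float(np.sum(np.abs(off) ** 2))             # Z4 variance per noise

var_plain = offdiag_weight(Hinv)
var_defl = offdiag_weight(Hinv - low)    # only this part is estimated with noise
exact_low_trace = float(np.sum(1.0 / wl[:k]))          # added back exactly
print(var_plain, var_defl)               # deflation removes the dominant part
```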
But the best technique is polynomial subtraction. We construct polynomials up to some high order — it could be a thousandth-order polynomial — that simulate the off-diagonal elements of your matrix inverse, and we combine that with Hermitian deflation to get what we call HFPOLY. This is our best method for reducing error bars. Okay, here's a result on a pretty big lattice, 24 cubed by 32, plotting the relative standard error. This curve is polynomial subtraction alone, at order 1,000. The polynomials are determined incredibly fast: you do one cycle of GMRES, get the Ritz values, and can form a stable polynomial from them. But when you deflate as well, you get an extra kick, and the relative standard error drops further — here the error bar is reduced to a tenth. That means doing this with traditional methods would need 100 times the computer time, because error bars only decrease like one over the square root of N, where N is the number of noise vectors. If you go to higher and higher polynomial orders, however, you find these two curves come together. The reason is that at high enough order the polynomial method is itself deflating — it's simulating the low eigenvalues of the system — so additional explicit deflation doesn't help. Okay, but how do you complete the calculation? You've got an order-1,000 polynomial and you're supposed to take its trace. Well, you have the same problem as with M-inverse: it's a very large system, you can't calculate the trace exactly, you have to estimate it stochastically. For that we have what we call multi-level trace cascades. Here I'm using the traditional method — plain noises — on a scalar operator: you need a lot of noises, a lot of matrix-vector products, and a lot of time.
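The idea of polynomial subtraction feeding a trace cascade can be illustrated with a Neumann-series polynomial (the talk uses a GMRES-derived polynomial instead; this is only the simplest stand-in, on a toy matrix):

```python
import numpy as np

# M = I - B with a small "hopping" term B, so M^{-1} = sum_j B^j, and the
# order-p Neumann polynomial p_p(M) = sum_{j<=p} B^j approximates it.
# tr(M^{-1}) = tr(M^{-1} - p_p(M)) + tr(p_p(M)): the first piece is estimated
# with noise (its off-diagonal weight, hence its Z4 variance, shrinks with p),
# and the second is pushed to a cheaper level of the cascade.

rng = np.random.default_rng(2)
n = 30
B = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)
M = np.eye(n) - B
Minv = np.linalg.inv(M)

def neumann_poly(B, order):
    P, term = np.eye(len(B)), np.eye(len(B))
    for _ in range(order):
        term = term @ B
        P = P + term
    return P

def offdiag_weight(X):
    off = X - np.diag(np.diag(X))
    return float(np.sum(np.abs(off) ** 2))

orders = (2, 8, 32)
weights = [offdiag_weight(Minv - neumann_poly(B, p)) for p in orders]
print(dict(zip(orders, weights)))   # off-diagonal weight falls with the order
```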
Here I'm subtracting a seventh-order polynomial. Why seventh order? Because at low enough orders you can do a hopping-parameter expansion of the polynomial and calculate its trace exactly. That helps. But the best thing to do is a cascade. Say we have an order-700 polynomial: I calculate the trace of M-inverse minus that order-700 polynomial stochastically — how many noises do I need to get below some threshold? — then I add the noises necessary to get from there down to the seventh-order polynomial, and at that point we can calculate things exactly. How many noises do you need for that middle level? More — 350 — but look at the times: they're reduced significantly. What we find is that for large systems — this one is only 12 cubed by 16, not very large — you need more than two levels, and we've been investigating three-level schemes. Okay, that's my last point. Thanks for your attention, and please consider how you can help the suffering people in Ukraine. Go Ukraine. [Moderator:] Okay, thanks, Walter. We can go ahead with Q&A now. Let's start with the folks online — does anybody online have a question? You can just raise your hand; I should be able to see it. Okay, how about in the room? Yeah, Sally. [Question:] You have the POLY fitting and then you have HFPOLY, and you said that as the order of the polynomial increases they converge. So is the point of using HFPOLY that you can use fewer terms in your polynomial, or that you have to find fewer terms? [Wilcox:] Could you repeat the question for the folks online? Yes — you can reduce the order of the polynomial. If you just use the polynomial method alone, you'd have to go out to order 10,000 to get down to the level you reach with HFPOLY.
So in other words, there's a window of applicability, a window where you get a really nice reduction of error bars. You want to stay at as low a polynomial order as you can, because otherwise you're just introducing another problem, another trace that you have to evaluate. Yes. The way to do this, and other groups are working on it as well, is a multi-level evaluation; we call it a trace cascade. Again, it's either a two-level or a three-level system, and it really works very well. Okay. All right, other questions? Yeah. At the beginning you showed a comparison for the spectrum of masses. What comparisons do you have for the polarizability predictions? Well, you have chiral perturbation theory predictions for these things. You don't necessarily have error bars on your chiral predictions, but it's probably the most realistic prediction from QCD right now. You can also compare to external-field calculations. Oh, and there are plenty of measurements, hundreds and hundreds of measurements. And there are other things you can do as well. You get these polarizabilities when you expand your Compton tensor out to second order in the momenta. At third order in the momenta you get other things: generalized polarizabilities and also spin polarizabilities, and those are things I'm working on, trying to isolate. The experiments are done in many places. I'm not an expert on that, but sometimes the error bars are pretty big. There's a substantial literature on all this. You'd be surprised, or maybe not; I'm surprised at the depth of investigation of some of these things. So it's a pretty rich literature on those things. Sure, yeah.
You say that if you look at the correlation at large time separation, it should not fall like a few exponentials; it's supposed to behave like a quasi-continuum. What's the physical reason for that? Well, okay, assume the opposite: that at large time separations your correlation falls like a single exponential. That would mean a single particle is contributing, and you just don't expect that for objects like these. As for the masses, you'll have some continuum distribution of masses, I would say. Yes, but that's not how I usually think of it; maybe that is an important way of thinking about it. Maybe you can imagine some effective mass as a function of time position or something like that, but it should be a continuum contribution. It should not be three or four exponentials that fit the whole thing, right? It's a continuum. And that's hard for the lattice to do, right? Everything is discretized, so at some point the lattice is going to say everything is discrete, but hopefully you can do cuts on your correlation functions, recognize those effects, and parameterize them. A lot of these things are only understood after you do the numerics. I remember, for example, going to early lattice conferences where people were trying to get f_pi or other quantities like that and they got terrible results. Eventually they understood the systematics of the lattice, could characterize them, and could avoid those problems. The problems are always things like lattice size issues, lattice length issues, lattice momentum issues, things like that. It's very much like an experimental situation: you have a system and you need to make contact with reality in your laboratory, except that now the laboratory is a numerical laboratory on the computer. Any other questions? Okay, I have one, Walter. Okay, yeah. I'm sorry, I got a little lost in the last part of this.
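The continuum-of-states point can be made concrete with a toy correlator. The spectrum and weights below are invented purely for illustration: with a dense tower of states, the effective mass never settles onto the plateau a single exponential would give; it keeps drifting down toward the lowest mass.

```python
import numpy as np

t = np.arange(20)

# Hypothetical dense spectrum standing in for a continuum of states.
masses = np.linspace(0.5, 3.0, 50)
weights = np.exp(-masses)            # arbitrary positive spectral weights

# C(t) = sum_i w_i exp(-m_i t): a "continuum" correlator, not one pole.
C = (weights[:, None] * np.exp(-np.outer(masses, t))).sum(axis=0)

# Effective mass m_eff(t) = log[C(t) / C(t+1)].  For a single exponential
# this is flat; for a continuum of positive weights it decreases
# monotonically toward the lowest mass (0.5 here) without plateauing.
m_eff = np.log(C[:-1] / C[1:])
```

Fitting three or four exponentials to such a correlator will always leave residual time dependence, which is the difficulty the answer above is pointing at.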
So you were showing us that these polarizabilities can be extracted from these calculations, and then we talked about speeding up the computation and reducing errors and things like that. What's the punchline on the polarizabilities? Have you actually calculated them? We're in the process. So we're in the process; that's as far as I can go right now. We have the paper, and the calculation is being done at George Washington, with our formulas as the background. I think I mentioned to somebody that I'm actually doing what I call a shadow calculation, a small-scale version of what they're doing at GWU. I told Paolo about that. That way we can find out what the systematic problems are before they hit them in the big simulation. So how long does that calculation take? It actually should not take very long at all, I mean the actual calculation itself, but it has to be debugged, and that's what we're trying to do right now. We have many, many tests now because we're using the conserved current, and one of the things that has to hold up to numerical precision is that when we sum over our currents, we get a steady signal: not just steady, but correct to numerical precision. That precision comes not only from your analysis programs; before that you did a whole bunch of quark inversions, and those are only accurate to a tolerance determined by the residual vectors. So you're testing your whole system by checking whether things like your currents are conserved. You can't do that with non-local operators; you'll always get error bars, and those error bars could come from a bug in your program. We simply won't have that; we'll know whether we're spot on or not by using the conserved current. And I actually kept my programs from long ago, when I did these calculations at NCSA.
All my programs are completely, utterly debugged, and that's why I can do the shadow calculation: everything is debugged and running. They only run on one processor, though. But the thing is, I do have many MPI lattice programs. Our package is called QQCD, and I work with Randy Lewis, who is at York University. There's a special technique you have to use, called the sequential source technique, that's not in his programs but is in mine. So I can put my SST into his MPI program, and then I'll have a fast running program to compute the quark propagators. That's the plan. All right, well, let's thank the speaker one more time. Yeah, thank you. Thank you.