So, first of all, let me just say thank you very much to the organisers, to Anton and Maxime, for the invitation to speak here today and to attend this really marvellous conference this weekend in honour of Samson. I've known Samson as a colleague and a friend for nearly 20 years now in Trinity. In that time we've had many interesting physics discussions, I think, but we've also fought many battles together with our university administration, or against our university administration I should say. So in that sense we have worked very closely together, even if not as collaborators on physics topics. I work in lattice QCD, so a little bit tangential, I suppose, to the main topic and thrust of this workshop, although of course still within the theme of quantum field theory, and somehow still, if you stretch it a little bit, within the theme of integrability, although here we're talking about numerical integration, so it's a little bit different, I grant you. Okay, so let's see if I can get this to work. Just to start, I found some pictures from Samson's nearly 20 years in Trinity. Leon already showed this one this morning: this is Samson in his academic robes in Front Square with people that of course you recognize, but in this one you won't maybe recognize the people. This is Samson on the occasion of the award of his gold medal from the Royal Irish Academy for his contributions to science. The woman in the middle here is Mary McAleese, who was the President of Ireland at the time and awarded the medal. And I have to confess, I have no idea who that person is, so I apologize. Maybe, maybe; Samson knows far better than I do already. So these are some highlights of course from Samson's time in Trinity, but I should also point out that Samson lives on campus, as many of you know; he's very warm and hospitable, and I've had the occasion, his apartment is on this side over here somewhere.
I've been invited many times by Samson and Masha to spend an evening with them, and we've listened to good music and discussed somewhat esoteric literature, lubricated always by some very nice beverages. But I also found this one: this building, the Rubrics, where Samson lives, is the oldest surviving building on the Trinity main campus. And there's some history which I discovered, which I don't know whether you already knew, Samson, or not: in 1734 a group of students who became particularly annoyed with a Fellow of Trinity, which Samson of course also is, apparently did what all rational types do, so this is from the history of the time: they gathered a mob outside his window and shot him. Apparently nobody was arrested and held responsible for this particular murder, because they were all in costume and weren't identifiable; however, the group of students was expelled. I think that's right, yes. So it's much more peaceful these days, at least normally. As I said already, as advertised I will talk on lattice QCD; I will tell you a little bit about the methods that we use to evaluate observables in this quantum field theory and try to give you a sense of some of the progress that's being made and the challenges that remain in the topic. So this is a cartoon of the phase diagram. To understand fully QCD, or any theory, one should try to map out the phase diagram, and this is the phase diagram in the baryon chemical potential and the temperature, and what we know is in fact quite limited, or at least what we know from first principles is quite limited. It's certainly limited, as I'll discuss in a little bit more detail soon, in particular to this temperature axis here. So we understand quite well how to formulate the theory and how, in a discrete and numerical simulation, to calculate observables and properties of the theory at zero chemical potential.
So as you go out in chemical potential, as I'll allude to in a little while, you'll see that this becomes an extremely difficult problem; in fact it's an unsolved problem to understand how to make calculations at finite chemical potential. [Audience:] At Brookhaven National Lab we're building the Electron-Ion Collider. Yes, so the EIC. So this is RHIC up here doing an energy scan, and the EIC will certainly be probing higher energies and higher temperatures. So there's a lot of interest from the lattice community in EIC physics, for sure, absolutely. So in lattice calculations, as I said, you can formulate the theory in a thermal field theory background and you can make calculations at finite temperature; in fact that's done, although it comes with particular challenges in its own right. And if you're brave you can try to venture out a little bit into this non-zero chemical potential plane, and those people who have done this have been able to at least make some baby steps in this direction, you see those lines dotted there, and established that the transition here is a crossover, for example. So this was a contribution from lattice calculations, but you see that there's a rather large amount of the phase diagram that remains, at least for the moment, inaccessible to us. So what can we do? QCD of course is a quantum field theory whose fundamental variables are the quarks and gluons, and it is a building block of the standard model, whose parameters you can see gathered here.
So just a few short words on the standard model. I won't say very much about this, except of course to remind you that it is embarrassingly successful: despite huge efforts, certainly from the experimental community, it is in some sense not broken yet. In fact this motivates QCD people even more, because new physics has not been immediately or directly accessed yet in experiments, and so looking for signs of new physics indirectly, through precision tests of standard model parameters, is more important than ever, and QCD plays a role in those precision tests. It also remains the only experimentally studied strongly interacting quantum field theory, so one gets to learn quite a lot about such strongly interacting theories in a regime where, at least in principle, there are some benchmarks and experimental tests that one can check your results against. So it provides us with a paradigm for strongly interacting theories, for example in beyond-the-standard-model physics. And I should also say it's important to note that this is not in some sense a solved problem; there are still very many puzzles and surprises that we don't understand, even though in some sense we know a lot about the theory.
So just to highlight some progress: this is a typical plot, if you have been to colloquia in particle physics, either theory or experiment or phenomenology, you'll see plots that look like this. These are the parameters of the CKM matrix, and they parameterize the known standard model. If the standard model is complete, with a unitary CKM matrix, then the lines of the triangle that you see here meet at the apex, and the coloured error bands give you a sense of the uncertainty in the parameters that go into making up this triangle. So you see that the bands are certainly wide enough here that it's not yet fully proven that indeed this is unitary, or indeed fully proven that it's not unitary. The bands are the combination of experimental and theoretical uncertainty, and by and large most of the bands that you see here are driven by hadronic uncertainty, in particular theoretical uncertainty that comes from the QCD contribution to this combination of electroweak and strong-interaction physics. So it is the job of QCD theorists to shrink the size of these bands, to understand more about this CKM matrix, in particular the unitarity or not of the CKM matrix. And over here, I apologize, this is a little bit small and certainly hard to read on the axis, so don't worry about that: this is a sort of universality plot in some sense. This is a collection of lattice calculations of many low-lying mesons and baryons. The black lines, first of all, are the experimental measurements; the different coloured dots are different lattice calculations. These are done in many different ways, and the fact that they all sit on these black lines is confirmation that, although we do things in very different ways, everything is agreeing very nicely with these low-lying measured experimental states. And this is true for light mesons and baryons, heavy-light mesons, and heavy-heavy mesons and baryons. There's some offset here which I
haven't included on the plot, so everything works very nicely, at least at this level. There are also, you need rather good eyes, but there are also some error bars on here, and this is the result of the numerical simulation, of course, which has both statistical and systematic uncertainties that go along with it. So for QCD we know that essentially you can think about two regimes. There's a high-energy regime, deep inside a proton, where the quarks are essentially free; asymptotic freedom tells us that the theory is perturbative there, and in fact perturbation theory works very well in that regime. However, we also know that the strong coupling of QCD runs, and so at length scales that are of order the proton, so at lambda QCD, which is about 300 MeV: the coupling you can see by some sort of lazy inversion which I've done here. This is plotted as the running coupling as a function of the momentum transfer Q squared, and if you just reflect this plot and relabel the axes in terms of the corresponding distances in fermi, you see that already by about 0.2 fermi the strong coupling is really strong, it's about 0.5, and by the time you get to 1 fermi, which is sort of hadronic length scales, this is quite a large number and is certainly no longer an appropriate variable for a perturbative expansion. So a non-perturbative treatment of the theory is obviously required if one wants to understand the low-energy hadron spectrum, the emergent phenomena, in terms of the quarks and gluons, the fundamental variables. Okay, so why would you think about lattice QCD? The lattice discretization approach is a systematically improvable non-perturbative formulation of QCD. The lattice provides a regulator in the UV and also in the IR, and in principle one can make calculations that are arbitrarily precise. There are of course reasons why this isn't true in practice, but
we can understand in a formal way how to remove the uncertainties in these calculations. And of course it starts from first principles, from the QCD Lagrangian, so this is not a model approach, and the only inputs are the quark masses and the coupling. One can turn that around and say that you can actually use the inputs to explore the mass dependence and the coupling dependence in a very nice way, an avenue, for example, that's not open to experimentalists, since these things are given by nature, but here we get to dial them up and down as we want. So what do we actually do? As I said already, we start from the Lagrangian, and, you know, I haven't included the theta term here, so this is the QCD Lagrangian. The cartoon over here is to show you how this might work. The gluons are SU(3) matrices; you think of them as the links of the hypercube, and these are parallel transporters. The vector potential here is rewritten as this link variable through exponentiation like this, a path-ordered product of these link variables, and the quark fields then sit on the sites between the links, so they carry all of the relevant indices: colour and flavour indices, Dirac indices. Precisely how you implement these quark fields on the sites here in this cartoon, there are many different ways to do this, and there are pros and cons for all of them. Wilson, of course, as the father of the theory, wrote down this formulation for Wilson fermions in the first instance. These Wilson fermions explicitly break chiral symmetry, and so some of these other approaches are different formulations that attempt to rewrite the quark fields with remnant chiral symmetry, or indeed in some cases full chiral symmetry, but they have other disadvantages that in some cases make you favour Wilson fermions. The point is that, however you do this discretization, once you remove
the lattice, so once you take the continuum limit and recover the underlying continuum QCD theory, then these discretization methods should all agree, and that's what you saw in the earlier plot, where all of those different coloured dots were from different formulations along these lines and finally agree. The usual numerical thing is to replace derivatives with finite differences, in this case with the link variables, ensuring that you have covariant derivatives in the simulation. So that's essentially the recipe, and then you solve the path integral on this finite lattice, where a, the lattice spacing, is non-zero. This is done stochastically using Monte Carlo techniques. The final twist in this tale, if you like, is that this can only be done effectively in a Euclidean space-time metric, and I'll say a little bit about that in a minute, where you have essentially a probability weight that you can use in these stochastic estimations. But the observable that you might be interested in calculating is essentially given by this path integral here, so it's the evaluation of this right-hand-side object that we do numerically. [Audience question:] And the theta term, do you have a way to put it on the lattice?
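Just to make the finite-difference recipe concrete, here is a minimal sketch in a 1D U(1) toy model rather than the full 4D SU(3) theory; the names and the random test fields are purely illustrative. The point is that multiplying the shifted field by the link variable is exactly what makes the forward difference gauge covariant:

```python
import numpy as np

rng = np.random.default_rng(0)
N, a = 16, 1.0  # sites on a 1D periodic lattice, lattice spacing

# Toy U(1) gauge field: one link U(x) = exp(i a A(x)) per site,
# connecting site x to site x+1 (periodic boundary conditions).
A = rng.uniform(-1, 1, N)
U = np.exp(1j * a * A)

psi = rng.normal(size=N) + 1j * rng.normal(size=N)  # a toy "quark" field

def cov_forward_diff(U, psi, a=1.0):
    """Covariant forward difference: (U(x) psi(x+1) - psi(x)) / a."""
    return (U * np.roll(psi, -1) - psi) / a

# Gauge transformation: psi(x) -> g(x) psi(x),  U(x) -> g(x) U(x) g(x+1)^{-1}
g = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
psi_t = g * psi
U_t = g * U * np.conj(np.roll(g, -1))

# Covariance check: D[U_t] psi_t equals g(x) * (D[U] psi)(x) site by site
lhs = cov_forward_diff(U_t, psi_t)
rhs = g * cov_forward_diff(U, psi)
print(np.allclose(lhs, rhs))  # True
```

A plain finite difference (dropping the factor of U) would fail this check, which is why the link variables are essential in the discretized Lagrangian.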
It's a very good question, and yes and no is the answer; maybe I'll come back and mention it when I go ahead. It essentially introduces a sign problem, if you know what that is, and so this basically means that these Monte Carlo techniques don't really work very well, and so you have to find some other way, yes, exactly, some other way to do that. Now there are ideas about how you might do that for particular choices of theta, but not for general choices of theta. And I should also say, if you look in theories other than QCD, for example in SU(2) theories, we understand algorithms that allow you to introduce a theta term, but just not in this full theory. Okay, so the observables that you want are determined from the path integral, something that looks like this, and there are some subtleties here. The quark fields psi and psi-bar are Grassmann objects, so these are anti-commuting variables, and computers don't like anti-commuting variables; you can't put these sorts of objects in a numerical simulation. So what we do is to integrate them out, and by integrating them out you introduce a determinant of the fermion matrix, which is M here. I've assumed that I have two flavours and that those two flavours have the same mass; this is true if you think about up and down quarks, the lightest quarks in the QCD vacuum, and it allows me to write it down in a rather nice way like this. I should say that we also understand how to include additional quark flavours in this determinant, so the strange quark and even the charm quark as well now, but this is a rather nice way of writing it down in this particular case of two degenerate flavours, up and down. So it is this entire object that is determined by importance sampling: the combination of this determinant that you see here with this gauge action in the exponent can be interpreted as a probability weight
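The importance-sampling idea can be shown in a single-variable toy: a sketch, assuming only a quadratic "action" standing in for the full gauge action plus determinant, with all numbers illustrative. Configurations are drawn with probability proportional to exp(-S), so averages of an observable over the samples estimate its expectation value:

```python
import numpy as np

rng = np.random.default_rng(1)

def action(phi):
    # Stand-in for the full lattice action (minus the log of the weight)
    return 0.5 * phi**2

# Metropolis sampling of the weight exp(-S): the same importance-sampling
# logic used on the lattice, shrunk to a single degree of freedom.
phi, samples = 0.0, []
for step in range(200_000):
    prop = phi + rng.uniform(-1.5, 1.5)              # propose a local update
    if rng.random() < np.exp(action(phi) - action(prop)):
        phi = prop                                    # accept with min(1, e^{-dS})
    samples.append(phi)

# <phi^2> under exp(-phi^2/2) is exactly 1; the Monte Carlo average
# converges to it with a statistical error shrinking like 1/sqrt(N).
est = np.mean(np.array(samples[50_000:])**2)
print(round(est, 2))
```

In the real calculation the local update is replaced by hybrid Monte Carlo, precisely because the determinant makes the weight non-local, as discussed next.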
So to see that: here the determinant is exponentiated by introducing some auxiliary bosonic fields, and the whole thing is then rewritten as an exponent, introducing also some conjugate momenta for the link variables. So it gets a little bit complicated, but in the end you have something here which is positive definite and can therefore be used in importance sampling; that's the key step that enables you to do this. Again, just to quickly mention: this determinant piece here, which we've introduced by integrating out the Grassmann variables, introduces non-locality, so it couples the gauge fields in a non-local way in this calculation. From a numerical point of view this means that, if you remember back to any computational classes you did on the Ising model, where you do Metropolis-Hastings and all of these nice algorithms, those rely on locality. So here instead, and in fact it was recognized pretty early on, people designed a new algorithm, called the hybrid Monte Carlo algorithm, to handle this non-locality. There's some technical linear algebra that's basically hidden by what you see here; M is the fermion matrix, so D-slash plus the quark mass, essentially. Okay, so you evaluate this in some stochastic way; again, for this importance sampling you have to demonstrate detailed balance and ergodicity in your algorithms, but if you do that, then the object that you're interested in, this expectation value of some observable O, can be written as just the average of the observable evaluated on each of the stochastic samples that you've taken, in the limit where you take a large number of these things. And you can show that this is rigorously true, with some correction terms that are understood in the limit where n is finite but not infinite, and in
particular you can show that the statistical uncertainty goes like one over the square root of the number of these snapshots that you're willing to generate. So we have a controllable, improvable statistical uncertainty in these Monte Carlo simulations. Calculating this piece that I wrote up here is actually what lattice people spend an awful lot of their cycles doing; this takes about 80 percent of the compute cycles in the whole generation process, just doing this determinant piece that you see here. Some people here may remember that in the early days of lattice calculations people talked a lot about the quenched approximation: that essentially sets this entire piece here to one and throws it away, so you don't include quark loop diagrams in the simulation. But we've completely moved on from there now; this is no longer a part of any calculation, and most calculations are unquenched these days. [Audience question:] So the sign problem now is under control? No, no, no. Here, QCD means QCD with zero chemical potential; I haven't talked about the sign problem at all. What the chemical potential does is to complexify this determinant piece, and then we don't know what to do, because the problem is essentially an algorithmic one, not a philosophical or physics problem: this is complexified, and now you have no positive definite weight for your Monte Carlo sampling. So the answer is no. I mean, there is progress, to the extent that you can approximate small values of the chemical potential, but that's not an algorithm, that's just an attempt to understand what's happening as you move a little bit out of the plane. So no, absolutely, it's an open problem, and everything that I talk about here is at zero chemical potential. Okay, so the objects that we're interested in, let's say for example for mesons, are two-point correlation functions, and we build these out of
operators. So this is psi-bar psi with some Dirac structure in here to give you the object that you're interested in; this gamma is an element of the Dirac algebra. But you can also include rather fancy versions of displacements: you can have a quark and an antiquark sitting at the same site, or you can pull them apart, or you can do fancier paths like this, and this is what these operators would look like for those different scenarios. You're completely free to do this, and all of these will overlap with particular quantum numbers, and they all form part of the calculation that we do; there's an immense freedom to design operators in this way, so you don't just have to sit with everything on the same site. The two-point correlation function then is the expectation value of this creation and annihilation operator in terms of the quark fields, for some flavour indices, which I've written here as a and b; I've suppressed the other indices just for clarity. So you have psi-bar psi at some value x and psi-bar psi at zero, and this object is what you want to evaluate on each of the stochastic ensembles that you've already generated. To make some progress here we use Wick's theorem to contract the quark fields that you see in this correlation function; by contracting the quark fields we replace them with quark propagators, and so you see this is the same capital M, so this is now the inverse of the fermion matrix, D-slash plus m, appearing in here, for some flavour that you get to choose, a or b in this case, and the trace is over spin and colour indices. For non-flavour singlets, when a and b are different, this simplifies to this expression here; however, when a and b are the same, you can see that this term here in particular remains. One thing just to maybe emphasize here is that the fermions enter in two different places in these lattice calculations, so they enter in the
previous slide, where I was talking about the determinant and its evaluation, and then they enter again here, when we talk about propagators, for these correlation functions, which are built out of quark fields of any flavour. Okay, so to actually evaluate this correlation function, what we do in practice is to build many of these operators, using the cartoon that I showed you a little while ago as a motivation. So you can imagine having a quark and an antiquark on the same site, pulling them apart, taking different paths with them, and you build up a matrix of these operators. Again, in this Euclideanized field theory you can show that this correlation function essentially has this structure here: it's a tower of exponentials with some amplitude in front, which for the moment we won't worry too much about. The main thing to note is that there's an exponential which has the energy that you might be interested in upstairs, with a factor of t. So if you were interested in the ground state, the lowest-lying state in this particular channel, then you just have to wait long enough, and the ground-state exponential, the lowest exponential, will dominate the correlation function. If you are interested in things beyond the ground state, then we solve a generalized eigenvalue problem with this matrix of correlators, and by diagonalizing that correlator matrix you can extract eigenvalues and eigenvectors. The eigenvalues are related to the energies, the same energies that you see here, but in a more straightforward way: each of the eigenvalues is associated with one of the energies in this tower, and the eigenvectors are related to these overlap or amplitude factors that you see in front here. So there's been a huge amount of work in the field actually
to understand how this actually works. There are a number of different issues which I'm glossing over here. One is how you actually construct these matrices of operators in some sensible way. The second is to understand what the uncertainty is in the solution of this generalized eigenvalue problem: how do the eigenvalues relate to the energies that you're interested in, and with what correction factor. And then the final piece of work, which has only recently happened, is to understand how to use these overlaps, which previously we just sort of threw away and didn't worry about, to help you identify states in the lattice calculation. Okay, so very often you hear a lot of people, in fact I've already been doing it, talking about how we need big computers and everything takes a long time, so this slide is just an attempt to show you what's happening. This is the computational cost of a lattice calculation, in particular of the part where we're calculating the determinant, from a couple of slides ago. Of course it depends on the number of these stochastic samples that you're going to calculate, but that's actually not the problem. The problem is over here, when you start to think about how it scales with some of the things that you control. So you control, for example, the size of the box in which you're willing to do your simulation, but the cost scales like L to the power five, so the size of the box with a fifth power. It also scales with the lattice spacing; again that's not unexpected, but it scales rather badly, with this seventh power of a inverse here, where a is the lattice spacing, remember. So as you decrease the lattice spacing, which is moving towards the continuum theory, the cost is growing by a rather enormous factor. And finally the cost also grows with, so m pi here,
the pion mass is a proxy for the quark mass in the determinant, and you see again that as you go towards very light values of the quark mass, which means physical pion masses, this scales with a very uncomfortable factor of six upstairs. Some of this scaling is certainly physics, so m pi for example scales like the square root of the quark mass, but some of it is algorithmic and some of it is geometric. The combination is essentially where the 80 percent of the computational time in these lattice calculations is going. This has a name: it was called the Berlin wall, mostly because it was discussed at a lattice conference in Berlin, so it's a little bit dull in that sense, but the wall was the wall that you hit in this computational cost as you try to scale down to light pions, large boxes and small lattice spacings. [Audience question:] Why is it more than a to the minus four, the naive scaling? It's algorithmic: there's an additional factor, again partly because of the non-locality in the determinant, so that affects sort of everything; the way you handle the determinant brings additional factors of a and L into this. [Audience question:] Excuse me, why is the combination of the parameters dimensionful? Ah, sorry, the c here is some parameter that will have dimensions too; I should have said that. Okay, so this is the Berlin wall, essentially. I'll show you in a little while that the wall has come down, and in fact we at least understand how to beat some of these uncomfortable factors in this equation. However, it doesn't actually stop at that point: once you do all of this work up here, then you have to do the calculation of the correlation functions, which again requires you to invert this fermion matrix, as you saw a couple of slides ago,
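To get a feel for how brutal these powers are, here is the Berlin-wall cost formula from the slide turned into a few lines of arithmetic; the overall constant c and the reference parameter values below are illustrative numbers only, not from the talk:

```python
# Rough cost model from the slide: cost ~ c * N * L^5 * a^-7 * m_pi^-6.
# Reference values (3 fm box, 0.1 fm spacing, 400 MeV pion) are illustrative.
def relative_cost(L, a, m_pi, L0=3.0, a0=0.1, m0=400.0):
    """Cost relative to a reference ensemble; c and N cancel in the ratio."""
    return (L / L0)**5 * (a0 / a)**7 * (m0 / m_pi)**6

# Halving the lattice spacing alone multiplies the cost by 2^7:
print(relative_cost(3.0, 0.05, 400.0))        # 128.0
# Dropping to the physical pion mass (~135 MeV) costs another big factor:
print(round(relative_cost(3.0, 0.1, 135.0)))  # 677
```

Combining both at once would multiply those factors together, which is exactly the wall being described.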
and so you have some things that you need to worry about. You need a box that's big enough for whatever it is you're trying to calculate, larger than at least one fermi, much larger usually these days, and you need to have a lattice spacing which is much smaller than the size of the box that you're simulating. A typical sort of volume might be 32 to the fourth, which is already over a million sites, and of course your fermion fields have Dirac components and colours, and then the combination, finally, in this Dirac matrix is easily this sort of size. This is the matrix that you then have to invert on each of the configurations that you generated at this cost up here. So the whole thing is an enormous, hugely intensive computational cost, but it's also highly parallelizable, so it's very suitable for these sorts of machines. This is Titan in the US, where a lot of the calculations have been done; it scales very well on these large machines. And just to prove it's been around for a long time, here's a snapshot of lattice QCD from the early days, from the very first numerical simulations: everything was quenched, and you can see the boxes were relatively small, so we're up here, and then the machines are somehow keeping track of all of this as well, so the boxes get bigger as the machines get bigger. Here you see that the box now is about six fermi, with a rather large number of sites, and this is at the stage of what they call the fifth generation, IBM's Blue Gene machine. More recently, this is Summit, in the US; this is an almost exascale machine, so it runs at a few hundred petaflops, where a petaflop is 10 to the 15 floating-point operations per second. At least last year this was the fastest supercomputer in the world. It's also one of the greenest supercomputers, so that's also quite nice. No, no, seriously, this is a big thing now: these machines have to be not just fast but green
as well; they have to be energy efficient, because they use an enormous amount of power. And then this one here in the box doesn't quite exist yet; this is coming in 2021, the first exascale machine, which will be doing 10 to the 18 floating-point operations per second. To go from here, about 200 petaflops, to over a thousand petaflops in three years is the ambition, and it's really rather significant to try to imagine how to make that transition in these sorts of numerical calculations. [Audience question:] Is the final goal to compute some matrix elements, or what is it for lattice people? So maybe I'll say more at the end, but we've moved long beyond just reproducing what experiment does; that's already, I would say, proven. Matrix elements are already calculated, and they were part of, for example, that plot that I showed you for the CKM matrix elements, but there are many other similar quantities, for example contributions to g-2, the anomalous magnetic moment of the muon, where the matrix elements are rather complicated objects: they have a light-by-light structure in them, and they're extremely difficult to calculate; they have vacuum polarization diagrams as well. So this is a very difficult problem, and I'd say we're at the early stages of trying to understand how to make these calculations. The bigger computers help, of course, but as you saw maybe in the earlier slides, you also have to be quite clever about the algorithms, because if you just wait for these big machines then you won't really make very much progress; you have to improve the algorithms as well. Okay, so this is a long list of some lattice collaborations, which is not important really, but just to show you that this is the Berlin wall coming down, although maybe not so obviously.
So here's the physical point: this is the pion mass in MeV and this is the lattice spacing, so you want to be down here, at the physical pion mass at zero lattice spacing, and the lattice calculations are all up here. Of course they are at finite lattice spacing, so they're still at finite a, not close to zero yet, but they are at physical pion masses. I'm not sure if it's so clear, but there are certainly some points which are very close to the physical pion mass, there are some points which are at the physical pion mass, and there are some points which are actually even lower, because again you can dial this number up and down in your simulation if you want to. Okay, so what can we actually do? This is a little bit towards your question. You can think about lattice calculations in maybe three different boxes, and here are two of them at least. Below strong decay thresholds, everything behaves nicely: you have bound states, you can use well-tried, well-tested methods, everything works very nicely. Typically you have high statistics, highly improved actions, everything is very precise; you get rid of all of the systematic uncertainties, you take continuum extrapolations, you take volume extrapolations, you work at physical pion masses, you sort of take care of everything. So these really are, as I said, the gold-plated things, even to the extent these days that you can include QED, non-compact QED, in the simulation as well. Above thresholds, life is a little bit more complicated, of course. Here we need many more sophisticated and different operators than we've had in the past, and there have been some developments there. Typically this is a little bit like seat-of-the-pants stuff: you're trying things out, you're trying to understand new methods and how they work, so that you can understand resonance physics.
Typically there is no continuum extrapolation, everything is done at non-physical pion masses, and of course you try to make improvements as you learn how to do these calculations, but they are rather different from what is going on below threshold. And then there is the piece where we have the sign problem, for finite-density calculations, where Monte Carlo really is invalid, and where, at least at the moment, there are no good ideas for how to make progress. A second rather interesting area is how you understand scattering at finite temperature. At finite temperature we have the Matsubara formulation: you identify the temporal extent of the box with the inverse temperature, so you change the temperature by squeezing the box, essentially, and understanding how you do scattering in that scenario is again terra incognita. New ideas have proved absolutely crucial here. The progress in threshold and resonance physics has not come because the computers got bigger or faster; it has come because we have much better algorithms, tuned to allow these sorts of calculations. Distillation is a beautiful example: it was invented in Dublin, it had its tenth birthday last year, so we celebrated. It gave access to a whole range of exotic physics that had not previously been accessible, and it opened up this whole field of scattering physics in lattice calculations.

I was asked what the relevant temperatures in QCD actually are, in Kelvin. The phase transition is at about 157 MeV, and in these calculations we can go from essentially zero temperature, or close to it, all the way up past the phase transition to maybe about twice Tc; one MeV is roughly 10^10 Kelvin, so the transition sits around 10^12 Kelvin. These are cosmological temperatures, yes, although most of the calculations I am talking about are at low temperature, well below the transition.

Okay, so let me show you some spectroscopy. These are the familiar things you would want to talk about: mesons and baryons built from quarks and antiquarks in the usual way. But QCD is not restricted to these combinations. You can build hadronic observables in many different ways: glueballs, from gluon fields only; combinations of quarks and antiquarks with excited gluonic content, which are the hybrids; tetraquark objects; and molecular objects, where you have tightly bound structures inside some sort of looser molecular arrangement. Now, this is a little technical, but just to give you a sense of the things you have to think about in these lattice calculations. We work in a finite box with a finite grid spacing; sorry, the graphics have come out a little funny, but this is the picture you should have in mind. If you want to reach the continuum, you need to understand how you change the lattice spacing, going from a rather coarse lattice to finer and finer ones; but you also want to take the infinite-volume limit, where you keep the lattice spacing the same and increase the box size. Those are the two relevant limits. One can sit at finite lattice spacing and use, for example, Symanzik improvement procedures, introducing irrelevant operators to reduce the size of the discretization uncertainty, and that is a well-trodden path in lattice calculations; but ultimately you want to demonstrate that you can take the continuum limit, which of course means repeating the calculation many times. It is necessary for the precision physics I talked about. Okay, so maybe more interestingly, you would like to simulate at the physically relevant quark masses, and this comes at a computational cost. I alluded to the light-quark cost: the light quarks appear in the fermion determinant, and you have to invert that matrix; its condition number scales inversely with the light quark mass, so as you reduce the light quark mass the condition number of the matrix you are trying to invert gets larger and larger, and it quickly becomes a very difficult problem. That is an algorithmic issue with light-quark simulations. Heavy quarks, on the other hand, have discretization uncertainties proportional to the lattice spacing times the quark mass. That is true in the light-quark sector too, but a times a small number is not so bad, whereas a times a big number, for a bottom quark, is a problem and can quickly swamp your calculation. So you have to understand the heavy-quark systematics, which is often done through effective-field-theory methods like non-relativistic QCD, or even static approximations, for example.
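The light-quark cost just described can be seen in miniature. This is my own toy stand-in, not a lattice code: I solve A x = b by conjugate gradient for a matrix of the form (-Laplacian + m^2), whose condition number grows as the mass m shrinks, just as the true Dirac-operator inversions become harder at lighter quark mass.

```python
# Toy illustration: why light quarks are expensive.  The iteration count of
# conjugate gradient grows with the condition number of the matrix, which
# blows up as the mass term shrinks.
import numpy as np

def cg_iterations(A, b, tol=1e-8, max_iter=10000):
    """Plain conjugate gradient; returns the iteration count to convergence."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            return k
        p = r + (rr_new / rr) * p
        rr = rr_new
    return max_iter

n = 200
lap = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D lattice Laplacian
b = np.random.default_rng(0).standard_normal(n)

for m in [1.0, 0.3, 0.1, 0.03]:
    A = lap + m**2 * np.eye(n)
    print(f"m = {m:5.2f}: {cg_iterations(A, b):4d} CG iterations")
```

The real problem is worse: the Dirac matrix is huge and sparse, and modern codes attack this growth with preconditioning and multigrid methods rather than plain CG.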
The last relevant point: I have already talked about working in Euclidean time, but Lorentz symmetry is of course broken at finite lattice spacing. That means the relevant symmetry group for classifying, for example, hadronic objects in terms of irreducible representations is not the continuum group O(3) but the symmetry group of the cube, O_h, and we have to define operators and identify states according to that cubic symmetry. Then there is a piece of work you have to do to understand how to go from the lattice classification back to the continuum one. The problem, essentially, is that in the continuum you have an infinite tower of spin values, corresponding to the irreducible representations of the rotation group, while the cubic group has only five irreducible representations; all of the spin values, all of the J values, are mapped onto those five irreducible representations, and you have to understand how to disentangle the degeneracies so that you can relate a state on the lattice to a state in the continuum. It can be tricky. And, as I already said, we work in Euclidean time so that we have a positive-definite weight and can do Monte Carlo calculations; this also allows you to write the correlation function as a sum of exponentials, but it means you lose direct access to scattering matrix elements in a Euclidean field theory. This is of course well known, but there is now an indirect approach through the work of Lüscher and others who followed. A question: this is only if you are gauge fixing, right? But this is a gauge-invariant theory, so you don't need to gauge fix.
No, you only need to gauge fix if, for example, you are interested in the gluon propagator; then you have to gauge fix, and there you have to be careful about Gribov copies. In fact, one of my thesis projects was on an algorithm specifically to avoid these Gribov copies in gauge-fixed lattice calculations. If you are doing that, then yes, for gluon propagators and things like that, you have to gauge fix. But typically everything here is gauge invariant, and when it is gauge invariant you don't need to worry about that. The question was pressed: if you sum over all possible link configurations, there is a huge redundancy, since a given configuration of links may be gauge equivalent to another, so aren't you over-counting the same copies? Shouldn't you sample only over gauge-inequivalent configurations, dividing out the gauge group? Well, I am not quite sure it is the same problem, because in the Monte Carlo calculation you are not summing over all gauge link configurations: don't forget the sum is weighted by a probability distribution, and you are sampling from that weighted set of link variables. So I think it is probably not quite the same, but I am not sure.
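The weighted sampling and the gauge-invariance point can both be made concrete in a toy model. This is my own sketch, not anyone's production code: Metropolis sampling of two-dimensional compact U(1) lattice gauge theory, where the links are angles. At the end a random gauge transformation is applied, and every plaquette, a gauge-invariant observable, is unchanged, which is why no gauge fixing is needed for such quantities.

```python
# Toy sketch: Metropolis sampling of 2D compact U(1) lattice gauge theory.
# Links are angles theta on each bond; the action is
#   S = -beta * sum over plaquettes of cos(theta_P).
# A random gauge transformation leaves every plaquette angle unchanged.
import numpy as np

rng = np.random.default_rng(1)
L, beta = 8, 1.0
theta = np.zeros((2, L, L))        # theta[mu, x, y]; mu = 0 (x-dir), 1 (y-dir)

def plaq(th):
    """Plaquette angles theta_P at every site (periodic boundaries)."""
    return (th[0] + np.roll(th[1], -1, axis=0)
            - np.roll(th[0], -1, axis=1) - th[1])

def sweep(th):
    """One Metropolis sweep over all links (recomputes the full action:
    wasteful but simple and correct for this tiny toy)."""
    for mu in range(2):
        for x in range(L):
            for y in range(L):
                old = th[mu, x, y]
                s_old = -beta * np.cos(plaq(th)).sum()
                th[mu, x, y] = old + rng.uniform(-1.0, 1.0)
                s_new = -beta * np.cos(plaq(th)).sum()
                if rng.random() >= np.exp(min(0.0, s_old - s_new)):
                    th[mu, x, y] = old        # reject the proposal
    return th

for _ in range(20):
    theta = sweep(theta)

# Gauge transformation: theta[mu, x] -> theta[mu, x] + g(x) - g(x + mu_hat)
g = rng.uniform(0, 2 * np.pi, (L, L))
gauged = theta.copy()
gauged[0] += g - np.roll(g, -1, axis=0)
gauged[1] += g - np.roll(g, -1, axis=1)

print("avg plaquette:", np.cos(plaq(theta)).mean())
assert np.allclose(np.cos(plaq(theta)), np.cos(plaq(gauged)))
```

The sampler never divides out the gauge orbit; it simply draws configurations with weight e^{-S}, and since both S and the observable are gauge invariant, the redundancy cancels in expectation values.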
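Going back to the cubic-group classification from a moment ago: the statement that every continuum spin J falls into just five irreducible representations can be checked with standard character theory. This little script is my own illustration; it uses the character table of the octahedral rotation group O and the SO(3) character chi_J(theta) = sin((2J+1)theta/2)/sin(theta/2).

```python
# Sketch: how continuum spin-J multiplets split across the five irreducible
# representations of the octahedral rotation group O (A1, A2, E, T1, T2).
import numpy as np

# (class size, rotation angle) for the classes E, 8C3, 3C2, 6C4, 6C2'
classes = [(1, 0.0), (8, 2*np.pi/3), (3, np.pi), (6, np.pi/2), (6, np.pi)]
irreps = {                         # character table of O, same class order
    "A1": [1, 1, 1, 1, 1],
    "A2": [1, 1, 1, -1, -1],
    "E":  [2, -1, 2, 0, 0],
    "T1": [3, 0, -1, 1, -1],
    "T2": [3, 0, -1, -1, 1],
}

def chi_J(J, theta):
    """Character of the spin-J representation of SO(3) at rotation angle theta."""
    if theta == 0.0:
        return 2*J + 1
    return np.sin((2*J + 1) * theta / 2) / np.sin(theta / 2)

def subduce(J):
    """Multiplicity of each O irrep in the subduced spin-J representation."""
    out = {}
    for name, chars in irreps.items():
        n = sum(g * c * chi_J(J, th) for (g, th), c in zip(classes, chars)) / 24
        out[name] = int(round(n))
    return out

for J in range(5):
    print(J, subduce(J))
```

This reproduces the familiar pattern: J = 0 sits in A1, J = 1 in T1, J = 2 splits into E and T2, and so on, which is exactly the degeneracy-disentangling problem described above (parity doubles the count to the ten irreps of the full O_h).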
Okay, so Gregory mentioned glueballs, so I'm glad I left this in. This is an early success of all of these lattice ideas: here are lots of different operators in pure Yang-Mills theory, and you see that you can indeed extract a very nice spectrum of states. This is from a very long time ago, actually even a quenched calculation, so it is quite ancient history, but still somehow relevant. The next slide I won't dwell on because it hasn't come out very well, so let me show you this one instead. This is more recent work, where we determine what an exotic meson might look like, with c cbar content, so charmonium states. Here is a regular c cbar, and here is a sort of cartoon of a hybrid, where you have some excited gluonic content between the q and the qbar. What you see here is a rather busy plot. First of all, the black lines are what we know experimentally; you see that some things are known experimentally, but actually not very much. The rest are the results of lattice calculations: the green boxes are simple q qbar-type objects, and the states coloured in red or blue are those we identify as hybrids. If you know your QCD nomenclature, you'll see that there are regular quark-model states with their J^PC values, but also an explicitly exotic channel, 1^-+, which is not allowed in quark models but is allowed by QCD; you can calculate what a state would look like in that exotic channel. So there is a plethora of these things which you can calculate on the lattice and which await experimental input, to understand whether they are real, what their values are, and indeed what their structure is. There are also some faded lines here indicating the thresholds for strong decay of charmonium into D Dbar-type states, but in this sort of naive approach we have treated everything as a bound state.

So here are some current challenges; I am nearly done. Let me say it again: everything here assumes a q qbar pair in some bound-state configuration; you might put some exotic content between the q and the qbar, but I don't worry about the fact that I am sitting above decay thresholds, and I don't allow that to affect the calculation. Of course that is not actually reasonable: more or less all states are resonances, and what we need to understand is how to calculate this spectroscopy in the presence of, or allowing for, resonances and scattering states. I mentioned previously that in the Euclidean theory you essentially lose direct access to the scattering matrix elements. The solution was found by Lüscher, who wrote it down in 1991; there was earlier work by others in non-relativistic quantum mechanics, for example, but Lüscher was the first to formulate it for a QCD application. What he showed is that you can turn the fact that you live in a finite box, with a discrete spectrum of energy levels, to your advantage: you can relate the information you extract in the lattice calculation, essentially the energy levels at some particular box size L, or overall momentum of the system, to the phase shift in infinite volume. The relationship goes through a function phi, a generalized zeta function, which he wrote down. Practically, it means you calculate lots of energy levels, at lots of different values of L, and the formula translates each energy level into a value of the phase shift; the more points you have in the calculation, the more points you have on the phase-shift curve, and the more cleanly you map it out. That's the basic idea. You might wonder why it took so long to actually do this. From 1991, nobody did anything for a very long time: people tried in the early 90s and it didn't work at all, and then nothing happened until about ten years ago, when the distillation algorithm opened the door by allowing the determination of these energy levels in a very precise and rather beautiful way, which meant you could finally implement the Lüscher method. That has been a real growth area.

I picked two particular examples which are interesting, at least to a lattice person. This is an isoscalar in pi-pi scattering; it is a little tricky to see, so I apologize. These are two different quark masses; remember that we have complete freedom to dial the quark mass up and down. This is the heavier quark mass and this is the lighter one, and you see rather different behaviour for this particular state depending on the quark mass: this is the sigma, and it evolves from a bound state at the heavier mass to a resonance at lighter masses. So you can understand some of the resonance physics just by changing the quark mass, in a very nice way. These are some of the Wick contractions you have to do, but that is less interesting perhaps. And here is the rho resonance, something everybody has hopefully seen before, a very well-determined thing experimentally. Again you see the effect of different quark masses: you can change the quark mass and watch the rho resonance really appear in the calculation. Of course this is interesting to see when you know the answer; it is even more interesting, when you don't really know what is happening, to see how the resonance behaviour depends on the quark mass. I was asked whether, when you change the quark masses, the condensate value stays constant or also moves. Yes, it also moves, although I am not sure what role it would play here; for the rho, and even more for the proton, it will play a role. It definitely changes, but I am not sure it has been tracked in any systematic way. That's a good point, actually.

Okay, so let me finish with things we might do, and then finally with things that are more speculative. Here is something I think it is reasonable to imagine doing, given the slide I just showed you on scattering. This has nothing to do with a lattice calculation; this is the set of experimentally measured and/or expected masses in the charmonium system, which it is assumed is reasonably well understood through potential models, for example. The different colours: the yellow boxes are things that are established, predicted by theory and measured; the white boxes are states predicted in quark-model calculations but not discovered yet; and everything else is new, undiscovered, unexpected, and to some extent possibly unwanted. These states were discovered in experiments from about 2003 onwards, by Belle, then BES, then LHCb. They sit above the strong decay thresholds, and it is completely unclear what they are. One might imagine they could be molecular states; even tetraquark states are postulated. Interestingly, most of them are also extremely narrow, which suggests they are not a loose conglomeration of diquarks, for example, but have some much more tightly bound structure, which is not what you might expect above the decay thresholds. At the moment we simply don't understand what these states are, and this is a perfect place for a lattice calculation to make some progress.

And here is a finite-temperature calculation. What we looked at was the bottomonium spectrum, the b bbar spectrum, as you dial up the temperature. This is T over Tc, and in each panel, moving from left to right and top to bottom, the temperature increases from about 0.4 Tc up to Tc and then to higher and higher values, until T over Tc is about 2. The main takeaway is that the peak, which is the ground state, is shrinking and broadening, commensurate with melting and suppression of the state in the medium, in the quark-gluon plasma. What we don't understand at all is how such a state might scatter: if this is a resonance rather than a bound state, how does it behave in the plasma? These are completely open, and very interesting, questions; there are a lot of subtleties about recombination effects after melting and how they feed into scattering calculations, which are only beginning to be discussed in a lattice setup.

Since I am standing between you and coffee, let me just finish with some perspective. I split this along the lines of computing, because lattice people are always expected to talk about computers, so I thought I would say something about the two sorts of computing I see on the horizon. We are sitting at an amazing point where computing is really radically changing, both classically and in terms of quantum computing. At the classical level, the exascale computing we will see within two years is absolutely not business as usual: we cannot just take the code we have, put it on the big machine, and press go. It will not work.
Nothing will work: we need new algorithms, the workflows will be different, everything is going to be very different on these machines. This is of course a huge challenge, but it brings with it many exciting possibilities: more precise determinations of matrix elements, matrix elements that we simply cannot calculate at the moment, and, for example, the spectral reconstruction problem I showed you in the finite-temperature context a while ago, which is currently limited because it is an ill-posed problem: you are trying to go from an object where you have a limited number of discrete points to a continuous object, the spectral function, for these finite-temperature states. All of this makes it, I think, a really exciting time just from the classical perspective. And since everybody always mentions these words these days: lattice people are of course also investigating how we might use machine learning and AI, in convergence with all of this HPC business, to improve what we are trying to do. Quantum computing is a further horizon, but nevertheless an increasing prospect for numerical simulations. I don't think we are there yet, but IBM now has a machine with some 53 qubits, Google has 70 or so, and these machines are getting bigger and bigger. There are many really interesting problems to understand; in particular, quantum error correction is a huge problem that I don't think is fully under control yet at all, and reproducibility is a problem. But the possibilities are also very interesting, because the sign problem, for example, may be a very nice problem to reimagine in a quantum-computing scenario. In particular, we might make some progress on the Grassmann integration: the determinant came from the Grassmann integration that we did analytically, and if we could instead evaluate those Grassmann integrals numerically, on a quantum device, there is a possibility that we could sidestep the determinant calculation and thereby understand a little more about the sign problem, and indeed about how we calculate these determinants at all. This is very speculative, I should say, but it is a real and very interesting question for lattice calculations. There has also been some work on alternative regularizations of the path integral using zeta functions. And a very likely model is that you have a quantum accelerator inside some sort of classical system: you send a bit of your calculation off to the quantum computer, you wait for it to come back, and then you carry on with your classical workflow. But as I said, these are perspectives; I am not quite sure what is going to happen, and it is very interesting.

Okay, last thing: Samson, happy birthday. This is a birthday card which, because I am so useless, I actually forgot to bring with me; it is sitting at home on my desk, but I did manage to get it scanned, so you can see what it looks like. This is the front of the card, and it has been signed by all your colleagues in the School of Maths. Happy birthday, Samson. A question: you said everything is Euclidean, and it is difficult to go back to Minkowski? So, typically you don't; formally, at least, one can understand how to do it, but numerically one cannot. You can also argue that you don't need to, because you satisfy the positivity constraints. But yes, there are some interesting real-time questions that this Euclidean formulation rules out, at least in a direct way.
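The ill-posedness of the spectral reconstruction mentioned above has a simple numerical signature. This is my own small illustration: a Euclidean correlator is related to the spectral function by G(tau) = integral of exp(-omega tau) rho(omega) d omega, so recovering rho from a handful of G(tau) points means inverting a discretized Laplace kernel, whose condition number is astronomically large.

```python
# Why spectral reconstruction (and Euclidean-to-real-time continuation) is
# ill-posed: the Laplace kernel relating a correlator to its spectral function
# is catastrophically ill-conditioned once discretized.
import numpy as np

tau   = np.linspace(0.1, 3.2, 32)        # 32 Euclidean time slices (toy units)
omega = np.linspace(0.1, 10.0, 32)       # discretized frequency grid
K = np.exp(-np.outer(tau, omega))        # K[i, j] = exp(-omega_j * tau_i)

print(f"condition number of the kernel: {np.linalg.cond(K):.2e}")
# Even machine-precision noise in G(tau) is amplified by roughly this factor,
# which is why naive inversion fails and regularized or Bayesian methods
# (maximum entropy, Backus-Gilbert and relatives) are used instead.
```

The singular values of this kernel decay geometrically, so only a few effective degrees of freedom of rho(omega) are actually constrained by the data; everything beyond that must come from a prior or a regulator.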
Absolutely, yes. I didn't mention it here, but there is for sure a synergy with perturbative calculations: they are used extensively in lattice calculations as well, both to calculate the coefficients of improvement terms, for example, and for renormalization factors in matrix elements. I was also asked about the sociology of the field: does everyone do the same calculations with different methods, how much discussion is there about the results, is there agreement? Well, since there are no other lattice people here, I can say it is a deeply self-critical field, where nothing is taken for granted. For a start, there are many different collaborations, and they work on different formulations; in the very early slides I had Wilson, staggered, overlap fermions and so on, and these are different groups taking different approaches to the same physics, who then of course check whether they get the same answer. But there are also more fundamental theoretical arguments about whether aspects of, for example, the staggered fermion formulation can really be shown formally to recover QCD in the continuum limit. So there are long-running arguments, I would say, about these sorts of implementations, including how you implement chiral symmetry and how you treat heavy-quark physics and effective field theories. The field is actively discussing all of these issues amongst itself; very self-critical, actually, I would say. Will it keep developing until it is eventually competing with experiment? Well, it already does: experiment has to verify, yes, and that is already happening, Samson. Look, it is very nice if you can make predictions and then find the states experimentally, of course. But there is also an interplay between experiment and theory, where experiments measure branching ratios and lattice QCD provides hadronic matrix elements, and it is the combination of those two things that gives you the CKM matrix elements: we cannot access the branching fractions that they can measure, and they cannot calculate the hadronic matrix elements, so that is an example of a nice synergy. I don't think it is one or the other. And this is the anglicized version of how you would say it in Irish: breithlá sona, Samson. Happy birthday, Samson.