So, first of all, I would like to thank the organizers for the kind invitation to this nice conference. In this talk, I would like to make an overview of the status of QCD simulations, in particular regarding the properties of strong interactions in extreme conditions. We know that strong interactions are described by QCD, a theory which cannot be treated by perturbation theory in the low-energy regime, and one possibility to deal with this is to compute the theory numerically. So, a few words about lattice QCD. As you know, the starting point for computing QCD numerically is the path integral formulation of the theory: the idea is to rewrite the quantum field theory in terms of a path integral, where expectation values are written in terms of functional integrals, and then to discretize the path integral in Euclidean space-time on a finite lattice. In this way we are left with an ordinary integral over a finite, although very large, number of variables, which can be estimated by Monte Carlo importance sampling.
So, let me describe the lattice discretization in a bit more detail. In the case of QCD, and of non-Abelian gauge theories in general, the standard way of putting the theory on a lattice is the one proposed by Wilson in 1974, in which the gauge degrees of freedom live on the links of the lattice as parallel transports, that is, as elements of the gauge group. The gauge action is written in terms of plaquette variables, so in terms of traces of products of parallel transports along closed loops. Instead, the fermionic action is written in terms of a bilinear form of the fermion fields, and this form, which is called the fermion matrix, is a matrix built in terms of the parallel transports, that is, of the gauge links. After integrating out the fermion fields, what is left is an integral over the gauge configurations, weighted by the gauge action and by the determinant of the fermion matrix. So, as long as this determinant is real and positive, we can give the partition function a probabilistic interpretation, so we can think of sampling the gauge configurations according to this weight by a Monte Carlo algorithm. And in doing this, the most challenging part from the numerical point of view is to take care of the fermion determinant: since the first simulations, a lot of progress has been made on the Monte Carlo algorithms which include the effects of dynamical fermions.
The algorithm which is commonly adopted today is the hybrid Monte Carlo algorithm, in which one introduces a set of auxiliary fields. The first fields are the so-called pseudofermion fields, which are used to bosonize the determinant, so that we can treat it numerically. The second auxiliary fields are the so-called conjugate momenta, which live in the algebra of the gauge group. After rewriting the partition function in this way, the typical algorithm samples the conjugate momenta of the gauge fields and the pseudofermion fields as Gaussian variables, they are simply Gaussian variables. Then the gauge fields are evolved with an algorithm which resembles the usual molecular dynamics algorithms which are used also in condensed matter physics. So, we have a molecular dynamics evolution along a trajectory, and at the end of the trajectory one performs a Metropolis step to ensure that the algorithm is exact. Now, the most expensive part of this algorithm comes during the molecular dynamics evolution, because the force which drives this dynamical evolution, that is, which is used to compute the time derivative of the momenta, involves the computation of the inverse of the fermion matrix, actually of the fermion matrix M times M dagger. So, this inversion is the most expensive part, and as we will discuss, this is strongly related to the physical properties of QCD. So, let me go to the computational complexity, taking for granted that the most expensive task is to invert this matrix M. What are the characteristics of this matrix? Well, it is a sparse matrix, because it usually connects only nearest neighbors on the lattice, but it is a huge matrix, of dimension (L/a) to the fourth times (L/a) to the fourth, with one row and one column for every site of the lattice. Here L is the spatial size of the lattice, and a is the lattice spacing, so L/a is the number of lattice spacings in one direction, and we live in four dimensions.
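The steps just described (Gaussian momenta, molecular dynamics along a trajectory, final Metropolis test) can be sketched in a few lines. This is a minimal toy, not the production setup of the talk: it uses a one-dimensional free lattice scalar field instead of gauge links and pseudofermions, and all parameters (lattice size 16, step size, trajectory length) are illustrative choices.

```python
import math
import random

random.seed(7)

def action(phi, m2=1.0):
    """Euclidean action of a 1-d periodic lattice scalar: kinetic + mass term."""
    n = len(phi)
    s = 0.0
    for i in range(n):
        dphi = phi[(i + 1) % n] - phi[i]
        s += 0.5 * dphi * dphi + 0.5 * m2 * phi[i] * phi[i]
    return s

def force(phi, m2=1.0):
    """Force -dS/dphi_i for the action above."""
    n = len(phi)
    return [phi[(i + 1) % n] + phi[(i - 1) % n] - (2.0 + m2) * phi[i]
            for i in range(n)]

def hmc_step(phi, n_md=10, dt=0.1):
    """One HMC trajectory: Gaussian momenta, leapfrog, Metropolis accept."""
    p = [random.gauss(0.0, 1.0) for _ in phi]
    h_old = action(phi) + 0.5 * sum(x * x for x in p)
    q = list(phi)
    # leapfrog integration: half kick, alternating drifts and kicks, half kick
    p = [pi + 0.5 * dt * fi for pi, fi in zip(p, force(q))]
    for step in range(n_md):
        q = [qi + dt * pi for qi, pi in zip(q, p)]
        fac = dt if step < n_md - 1 else 0.5 * dt
        p = [pi + fac * fi for pi, fi in zip(p, force(q))]
    h_new = action(q) + 0.5 * sum(x * x for x in p)
    # Metropolis step makes the algorithm exact despite the integration error
    if random.random() < math.exp(min(0.0, h_old - h_new)):
        return q, True
    return phi, False

phi = [0.0] * 16
accepted = 0
for _ in range(200):
    phi, acc = hmc_step(phi)
    accepted += acc
print("acceptance rate:", accepted / 200)
```

The leapfrog integrator used here is symplectic and reversible, which is exactly what makes the final Metropolis accept/reject step correct.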
So, we have this kind of dimension, plus color and flavor indices. So, let me tell you the typical values of L and a that we would like to use in a realistic simulation of QCD. Well, the spatial size must be large enough, in particular larger than the largest physical length that we have on our lattice. In our case it is the inverse of the pion mass, which is the lightest state in strong interactions, which means much larger than one fermi in size. The lattice spacing, in order to have control over the lattice artifacts, must be small enough, in particular smaller than the shortest physical length. So, it must be much smaller than the inverse of lambda QCD at least, which is around 200 MeV, but depending on the physics that we would like to study it must be even shorter: for instance, if we want to study heavy quark physics, we need a lattice spacing at that scale. So, given these two constraints, we end up with a number of lattice sites in each direction which should be at least of order 100, which means a matrix of size 10 to the 8 times 10 to the 8. Now, for the inversion of this matrix, the most important characteristic is its condition number, I mean the ratio between the highest and the smallest eigenvalue. This matrix is basically the discretization of the Dirac operator plus the mass term in the fermion action. The highest eigenvalue is of order 10, given that the parallel transports are SU(3) matrices connecting different sites of the lattice and there are of order 10 nearest-neighbor sites on the lattice. What about the smallest eigenvalue? Well, there is one property of QCD, namely spontaneous chiral symmetry breaking, which means that the Dirac operator has a number of eigenvalues which are very close to zero. So, that means that the smallest eigenvalue is basically dictated by the quark mass, because we can have zero eigenvalues for the Dirac term.
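As a back-of-the-envelope check of the sizes quoted above, for L/a = 100 one can count entries directly. A Wilson-type nearest-neighbour stencil is assumed here; the 12 internal degrees of freedom per site are the 4 Dirac spin components times 3 colors.

```python
# Rough size of the fermion matrix for a lattice with L/a = 100.
sites = 100 ** 4                   # number of lattice sites in four dimensions
dof = 4 * 3                        # Dirac spin components x colors per site
dim = sites * dof                  # dimension of the fermion matrix
dense_entries = dim ** 2           # a dense matrix would be hopeless to store
stencil = 1 + 2 * 4                # a site couples to itself and 8 nearest neighbours
sparse_entries = sites * stencil * dof ** 2
print(f"matrix dimension : {dim:.1e}")
print(f"dense entries    : {dense_entries:.1e}")
print(f"sparse nonzeros  : {sparse_entries:.1e}")
```

So sparsity is what makes the problem tractable at all: a few times 10 to the 11 nonzeros instead of 10 to the 18 dense entries.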
And this is related to chiral symmetry breaking. So, the smallest eigenvalue is of order (a m_f) squared, and another property of QCD is that it is almost chiral, meaning that the mass of the two lightest quarks is much less than the typical scale of the theory, which is lambda QCD: the ratio is of the order of 10 to the minus 2. So, that means that if we want a lattice spacing which is much smaller than the inverse of lambda QCD, we end up with a times m_f which is at most of the order of 10 to the minus 3. So, we end up with a condition number which is huge for this matrix, and this is the origin of the computational difficulty. So, what can we do to fight against this? We can proceed in two ways. The first is to devise new algorithms, which permit to perform this inversion and the whole molecular dynamics better. So, I report here an estimate, which was done in 2001, of the numerical effort which is needed to produce a certain number of decorrelated gauge field configurations, which are used to sample the path integral. The estimate is the following: here the various parameters are the lattice sizes, both spatial and temporal, the quark masses and the lattice spacing, and the cost is given in teraflop-years, meaning a machine of one teraflop used for one year. And if you use these numbers, you find that to produce 100 well-decorrelated configurations with the physical quark masses, in 2001 one would have needed of the order of 10 to the 26 floating point operations. Now, there have been a number of improvements on the algorithmic side over the years; here is a short list. The molecular dynamics is based on symplectic integrators, and one can use multiple time-step integrators; one can use preconditioning for the matrix; and one can do what is called deflation, I mean computing exactly the lowest eigenvectors and factorizing them out.
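The way the condition number drives the cost of the inversion can be seen in a toy example. A Krylov solver like conjugate gradient needs a number of iterations that grows roughly like the square root of the condition number; here I use small diagonal SPD matrices as a crude stand-in for M dagger M with a shrinking lowest eigenvalue (light quarks), which is an illustrative setup, not the actual lattice operator.

```python
import numpy as np

def cg(A, b, tol=1e-8, max_iter=10000):
    """Plain conjugate gradient solver; returns (solution, iteration count)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, k
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_iter

rng = np.random.default_rng(0)
n = 500
b = rng.normal(size=n)
counts = []
for lam_min in (1e-1, 1e-2, 1e-3):
    # SPD test matrix with eigenvalues spread over [lam_min, 1]
    A = np.diag(np.linspace(lam_min, 1.0, n))
    x, iters = cg(A, b)
    counts.append(iters)
    print(f"condition number = {1.0 / lam_min:8.0f}   CG iterations = {iters}")
```

Preconditioning and deflation, mentioned above, both work by effectively reducing this condition number before the solver ever sees it.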
So, after these improvements, the situation 10 years later, and here I report an estimate shown at the Lattice conference 11 years later, was that the cost for the same number of decorrelated configurations had gone down to 10 to the 22 floating point operations. So, that means a gain of the order of four orders of magnitude. Now, the other possibility is to fight by building new machines. So, lattice QCD has been, right from the beginning, a laboratory for the construction of high-performance architectures. After Wilson proposed this way to discretize QCD on a lattice, the first numerical simulations were done by Creutz, who started lattice QCD simulations without dynamical fermions, which is easier, on a machine which had a power of 10 megaflops, around 1979. Since then, lattice QCD has been a laboratory for the construction of new machines. Here I just recall the series of APE machines, which were built by INFN; this project was pushed by Nicola Cabibbo. That was a pioneering period for the construction of new high-performance architectures, because it was a time when physicists would sit down and think about what was the best architecture to do what they wanted. One of the features of the APE processor was the ability to do complex algebra directly, something which was unthinkable for the industrial processors of that time. So, this was one example, and then also the series of Blue Gene machines, which were developed by IBM, basically followed this tradition, coming from lattice QCD simulations, of building new machines. So, an interesting question is: who wins? Is it better to think of better machines or better algorithms? I focus here on the period going from 2001 to 2012, because if you do the same exercise for different periods the answer can be different. But in this period there has been a factor 10 to the 4 improvement in algorithms, which is what I discussed. And what about machines?
If you compare the most powerful machines on Earth, as given by the Top500 list that you find online, between these two years you see that there is a factor 10 to the 3 improvement. So, basically, in one decade we had an improvement of seven orders of magnitude, which comes from both sides, slightly more from the algorithmic improvements, but that can depend on the exact period that you choose. So, both things have been essential for the progress of the field. Nowadays we can say that we have reached the possibility of performing realistic simulations for several aspects of strong interactions, and here I show, for instance, the progress in the determination of hadron masses: nowadays we can reach a precision in the determination of hadron masses, at least for the lowest-lying hadrons, which is at the level of 1%. In the following I will focus on some particular aspects of lattice QCD simulations which regard QCD under extreme conditions, meaning high temperature and finite baryon density, aspects which are fundamental for various fields, going from astrophysics and cosmology to the physics of heavy-ion experiments. So, when we want to study QCD at finite temperature, all we need is to compute the functional integral in the Euclidean formulation with a compactified time, and of course the extent of the compactified time gives the inverse of the temperature. Sample averages give us access to equilibrium properties, so we can compute the properties in the different phases of the theory and study the nature of the phase transitions. Of course, what we cannot do, at least not trivially, is the computation of out-of-equilibrium properties and of transport properties.
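The statement above about compactified time can be written compactly (standard Euclidean conventions; here N_t is the number of temporal lattice sites of spacing a):

```latex
Z(T) \;=\; \mathrm{Tr}\, e^{-\hat H / T}
\;=\; \int \mathcal{D}U\, \mathcal{D}\bar\psi\, \mathcal{D}\psi\;
      e^{-S_E[U,\bar\psi,\psi]},
\qquad
\frac{1}{T} \;=\; a\,N_t ,
```

with periodic boundary conditions in Euclidean time for the gauge fields and antiperiodic ones for the fermions.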
So, when we go to finite temperature, of course one of the most important questions that we would like to answer is whether QCD deconfines, so whether the color degrees of freedom liberate at some stage, and whether other essential properties of QCD at zero temperature change, like chiral symmetry breaking. So, there are clear evidences that deconfinement indeed takes place. This is a nice picture taken from a study of some years ago, which was done in the theory without dynamical fermions, but similar results are obtained with dynamical fermions, and this is a study of the static potential between a static quark and a static anti-quark. You see that in the low-temperature regime you clearly have a linearly rising potential, meaning that there is confinement, while in the high-temperature regime the potential flattens and the confining properties of the theory are lost. There are also other ways to see that there is indeed this transition, by looking at the thermodynamical properties of the theory. This is the behavior of the energy density of strongly interacting matter as a function of temperature: there is a sharp jump at a temperature which is of the order of 200 MeV, and the energy density goes rapidly to a value which is close to the Stefan-Boltzmann limit for an ensemble of free quarks and gluons. The same jump is seen also in the quark number fluctuations, so these are the susceptibilities of the numbers of up and down quarks, and you see that there is also here a sharp jump, which means that these degrees of freedom are getting liberated. And at the same time, more or less at the same temperature, the chiral condensate, which is an order parameter for chiral symmetry breaking, goes down. So, more or less at the same temperature we have both the restoration of chiral symmetry and deconfinement.
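The Stefan-Boltzmann limit mentioned above is a simple counting exercise: each bosonic degree of freedom contributes pi^2/30 per unit T^4 to the energy density, and each fermionic one 7/8 of that. A quick sketch of this counting for gluons plus N_f massless quark flavors:

```python
import math

def sb_energy_density_over_T4(nf):
    """Stefan-Boltzmann epsilon/T^4 for free gluons plus nf massless quark flavors."""
    g_gluons = 2 * 8                  # 2 polarizations x 8 color states
    g_quarks = 2 * 2 * 3 * nf         # spin x (quark/antiquark) x color x flavor
    g_eff = g_gluons + 7.0 / 8.0 * g_quarks   # fermions carry the 7/8 factor
    return math.pi ** 2 / 30.0 * g_eff

for nf in (0, 2, 3):
    print(f"N_f = {nf}:  epsilon/T^4 = {sb_energy_density_over_T4(nf):.2f}")
```

The large jump of epsilon/T^4 between the pure gauge value and the three-flavor value is one way to see why the energy density is such a sensitive probe of the liberation of quark degrees of freedom.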
Now, what is the temperature of the transition? There is now agreement between different lattice groups: if we take the temperature from the chiral condensate, so from the point where the chiral condensate drops, we get a temperature which is of the order of 150 MeV. Now, what about the nature of the transition? In QCD, when we have finite quark masses, we don't have exact symmetries. We have exact symmetries of the theory in the absence of dynamical fermions, and also when we have exactly massless fermions, in which case we have chiral symmetry; but for physical quark masses there is no exact symmetry, so there is no reason to expect a true transition. And indeed this is the case: at the physical point, simulations with quarks with physical masses show that there is no discontinuity and no divergence appearing in the thermodynamical quantities, so that we simply have what we usually name a crossover, an analytic change between two different regimes. Well, of course, in numerical simulations we can also change the masses of the quarks as we want, and we can change the number of flavors, so we can study how the transition changes as a function of the quark masses and the number of flavors. This information is summarized in this plot, which is known as the Columbia plot. In this plot, along this axis you have the up and down quark masses going from zero to infinity, and here you have the strange quark mass going from zero to infinity, so this is basically the two-flavor theory, with just two flavors of quarks. And you see that there are first-order transitions appearing in the limit of very high quark masses, which is close to the pure gauge theory, and for light quark masses: for instance, in the Nf equal 3 theory, going to the chiral limit, one expects, also from universality arguments, that there is a first-order transition, and this is indeed the case in lattice simulations, even if
there are issues concerning the continuum limit of this prediction. Now, the QCD phase diagram is not just temperature: we would like to explore the theory also introducing new parameters, like a baryon chemical potential, which is the relevant parameter to describe heavy-ion collisions or astrophysical objects like neutron stars, and we can also introduce other parameters, like external background fields, magnetic background fields and so on. So, up to now I discussed the finite-temperature behavior of the theory, where we have a phase transition which is not a true phase transition, with a pseudo-critical temperature of the order of 150 MeV. Now, there are various questions that we would like to answer: for instance, how the location of the critical temperature changes as a function of these parameters, as a function of the baryon chemical potential for instance, and what the properties of the different phases are. In particular, there is one possibility, which is predicted by effective models of QCD, that in the low-temperature and high-baryon-chemical-potential region the transition becomes first order; in this case one would expect a critical endpoint of this first-order line, given that the transition at zero chemical potential is a simple crossover. Now, introducing a baryon chemical potential is one of the problems that are currently unsolved in QCD, and the reason is that when we try to compute the grand canonical partition function at non-zero baryon chemical potential, the fermion matrix, so the Dirac operator, gets modified, because you have to introduce the chemical potential in the action, and the Dirac operator changes in such a way that the determinant of the fermion matrix becomes complex. The Dirac operator is anti-Hermitian at zero chemical potential, and it is no more anti-Hermitian when you introduce a non-zero baryon chemical potential. That means that standard Monte Carlo simulations are not feasible, and there are a number of
ways to deal with this problem approximately; by approximately I mean that these methods are effective and viable only for small values of the chemical potential. They are listed here. The first is the Taylor expansion: essentially, we can try to reconstruct the properties of the theory for small values of mu, the chemical potential, by computing the Taylor expansion of the partition function as a function of mu, in terms of expectation values at mu equal to zero. Another possibility is to use what is called reweighting: one can sample the partition function with the weight which is given by taking the modulus of the determinant, and then include the phase in the physical observables. And then another possibility is to use imaginary values of the chemical potential: if you take mu_B as a purely imaginary quantity, then the determinant is positive again, and you can think of making an analytic continuation of your results from imaginary to real chemical potential. Well, all these methods work well only in a limited range of mu over T and do not solve the problem completely, and we are still missing a way to solve the problem completely. There are several approaches which are being tried, like performing complex Langevin simulations, or adopting the density of states method, or performing simulations on a Lefschetz thimble, or trying to rewrite the whole partition function in terms of new variables, so that in terms of these new variables the weight is positive again. None of these methods is yet fully operative for lattice QCD: they work in some simplified models, but we still don't have results for the full theory. So, what I can show you as reliable results for the phase diagram are just results which regard the physics at zero or small baryon chemical potential, and one of the questions that can be answered today is what the behavior of the pseudo-critical temperature is as a function of the baryon chemical potential, for small values of the chemical potential. So, in this case you can
perform a Taylor expansion of this quantity and take into account just the first term of the expansion, which is quadratic in mu, because the theory at zero chemical potential is invariant under charge conjugation, so the partition function is an even function of the chemical potential. And this coefficient here is what is called the curvature of the critical line: it gives information about, let me go back a couple of slides, the curvature of this line at mu equal to zero, so how this line starts to bend down as you increase the chemical potential. So, for instance, I will show you some results obtained by the method of imaginary chemical potential. You can think of rotating the chemical potential to the imaginary axis: if here mu is the real chemical potential, you take an imaginary mu, and the expectation, if the theory is analytic around zero chemical potential, is that the critical temperature will move in this way, with just a change of sign from here to here. So you obtain the value of this curvature parameter from the simulations at imaginary values. In practice, what you do is to repeat your simulations at non-zero values of the imaginary chemical potential. So, for instance, this is the value of the chiral condensate at zero and at non-zero imaginary chemical potential; here we have the temperature axis, and you see that the transition moves to higher temperatures; and this is the susceptibility of the chiral condensate. You can determine the transition either from the inflection point of the chiral condensate or from the peak of the susceptibility of the chiral condensate, and in both cases you see that the critical temperature goes up. So, if you take the critical temperatures obtained from the different quantities as a function of the chemical potential squared, you can fit the behavior for small values of the chemical potential and extract a value of the curvature, I mean of the linear dependence in mu squared. So, this is the result that we obtain, and here the band is
the one to three standard deviation range obtained from our computation, and this is compared with the determinations of what is called the freeze-out point in the phase diagram, I mean the freeze-out temperature, which is the last point of equilibration for hadronic matter coming out of the fireball which is produced in heavy-ion collisions. So, this is obtained from our paper; in this plot, instead, I report different values of the curvature from different lattice collaborations. You can see that, apart from some results from a few years ago, where maybe systematic effects were not well under control, there is now good agreement between the different lattice determinations of this curvature, which is here compared also with determinations of the curvature of the freeze-out curve, so from experiments. Now, unfortunately, this good control over systematic effects that we have today for the physics at small baryon chemical potential is not attainable at the moment for the determination of the critical endpoint: there are different determinations of the critical endpoint from different collaborations, but no clear convergence of results and no control over the systematic effects. So, let me switch to the last subject, given the time we have. One last issue is theta dependence in QCD. We know that the gauge field configurations which are relevant for the path integral are divided into homotopy classes, which are characterized by a winding number, which is the topological charge. So, the QCD action can be modified by introducing a parameter which is coupled to the topological charge, the so-called theta parameter, which gives us a new version of QCD which is still renormalizable and presents an explicit breaking of CP symmetry. In this case the measure of the Euclidean path integral is also complex, so direct simulations at non-zero theta are not possible, but, as in the case of the finite
chemical potential, we can think of making an expansion of the free energy of the theory in powers of this theta parameter, and the coefficients can be computed by simulations at theta equal to zero, so by standard simulations of QCD. For instance, the first term in the expansion is the so-called topological susceptibility, the second moment of the topological charge distribution at theta equal to zero, which determines the leading coefficient of this expansion. It is a fully non-perturbative property of QCD, so numerical simulations are the best way to compute it. So, what are the predictions about theta dependence? We can make analytic predictions in some regimes. For instance, we know that there are classical solutions having non-zero winding number, which are the instantons; one can perform a perturbative expansion around these solutions and obtain an effective weight for instanton solutions, which is the following. It is based on a one-loop expansion, and here g is the coupling at the scale of the radius of the instanton. This expansion is expected to break down for large instantons, and indeed it does so in the QCD vacuum; but when we are at high temperature, instantons of large size are suppressed by thermal fluctuations, so that the perturbative expansion is expected to work well, and we expect to have instantons with an effective action which is the following. That means that instantons are strongly suppressed as the temperature increases, and we can treat theta dependence in terms of an ensemble of instantons which are not interacting among themselves, because they are very dilute. This is the so-called dilute instanton gas approximation, which gives us a very definite prediction for the behavior of the topological susceptibility as a function of temperature: it is suppressed as a power of the temperature, with an exponent which is close to minus 8. Instead, when we are at zero temperature, a reliable estimate comes from chiral
perturbation theory, which gives us an expectation for the susceptibility of around (80 MeV) to the fourth. Now, what is the interest in QCD at non-zero theta? If we look at experiment, a non-zero theta breaks CP symmetry explicitly, but we know from experiments, in particular from experiments regarding the electric dipole moment of the neutron, that there are very stringent limits on this amount of CP violation, which set a limit of theta less than about 10 to the minus 10. So the question is: why do we bother about theta dependence? Well, there are several reasons. First, theta dependence enters phenomenology anyway: for instance, in the Witten-Veneziano mechanism, the topological susceptibility in the quenched limit is related to the eta prime meson mass. But there is also the question of why theta is equal to zero, and this is one of the open questions in the Standard Model, which points maybe to physics beyond the Standard Model. One possibility would be to have a zero up-quark mass, which is a possibility that is ruled out. There is another mechanism, which invokes the existence of a new scalar field, the axion, whose properties are largely fixed by theta dependence; and by the way, axions are also a popular dark matter candidate, so their phenomenology is particularly important. So, what is the idea of the QCD axion, very shortly? There are several high-energy models which predict a QCD axion, extensions of the Standard Model which basically end up with the same low-energy effective Lagrangian, in which we have this scalar field which is the Goldstone boson of some U(1) symmetry which is spontaneously broken. This is the Peccei-Quinn symmetry, and the axion field is the Goldstone boson of this spontaneously broken symmetry. It is a pseudo-Goldstone boson, because there is also an explicit breaking: the axion couples also to G G-dual, the topological charge. So it is like having a Mexican-hat potential with a tilt, which is due to the coupling of the axion field to the topological charge density.
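The tilted-potential picture just described is often summarized, at leading order, as follows (a sketch only; normalization conventions for f_A differ between papers):

```latex
V(a) \;\simeq\; \chi(T)\left[1 - \cos\!\left(\frac{a}{f_A}\right)\right],
\qquad
\theta_{\rm eff} \;=\; \frac{\langle a \rangle}{f_A} \;=\; 0,
\qquad
m_a^2(T) \;=\; \left.\frac{\partial^2 V}{\partial a^2}\right|_{a=0}
\;=\; \frac{\chi(T)}{f_A^2}\,.
```

So the minimum of the potential relaxes the effective theta angle to zero, and the curvature at that minimum, i.e. the axion mass, is controlled by the topological susceptibility chi(T).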
So, what happens is that the axion potential acquires a minimum, and this minimum corresponds to a zero effective theta, so it explains why theta is zero in QCD. It also gives the axion a mass: the mass of the axion is related to the curvature of the potential around the minimum, and this curvature is related to the theta dependence of QCD. So, in particular, we can write a relation between the axion mass squared and the topological susceptibility divided by the square of this coupling constant f_A, which appears in this low-energy Lagrangian. Basically, one can fix the axion parameters during the evolution of the universe by knowing the evolution of the topological susceptibility, and hence of the axion mass, as a function of temperature. The main source of the axions that we can think to have today, the main mechanism, is known as misalignment: at the time when the spontaneous breaking of the Peccei-Quinn symmetry takes place, the axion field is not aligned with the minimum of the potential, and that gives you some energy which is spent in the production of axions. So, one can write an evolution equation, assuming a zero-mode approximation for the axion field, which is that of a damped harmonic oscillator, where the damping term comes from the expansion of the universe, so it is related to the Hubble constant, and the harmonic force is given by the mass of the axion. So, depending on the values of the two quantities, the oscillator can be overdamped or can oscillate. And since the Hubble constant goes down during the evolution of the universe, while the mass of the axion is expected to go up as the universe cools, because it is related to the topological susceptibility, which is suppressed at very high temperatures, there is a point where the two quantities cross and this oscillator starts to have oscillating solutions. From that time on, you can write an adiabatic invariant which fixes the number of axions today. So, essentially, the amount of axions which
is due to this mechanism is fixed by this crossing point, which in turn is fixed by the behavior of the topological susceptibility in QCD. So, by requiring that the amount of axions does not exceed the matter that we see today, we can set a bound on this coupling constant, which is an unknown constant of the model, and on the other hand that fixes a value for the mass of the axion today, which is important for the experiments which try to detect axions today. Now, to my last few slides. We can study the topological properties of QCD by lattice simulations, but there are a number of issues; let me go to the most important issue, which is the reason why I am discussing this here. One problem which is met by the standard algorithms that sample the functional integral is that topology is a continuum property of the gauge fields. As you go towards the continuum, standard algorithms start to have difficulties in changing the topological sector, that is, in changing the winding number of the gauge fields, and this becomes more and more difficult as you approach the continuum, because one should go through gauge configurations which are discontinuous, and this is suppressed by the functional integral measure. So, let me skip these slides. What happens is the following: this is the topological charge evolution during three different Monte Carlo simulations, for three decreasing values of the lattice spacing, as a function of the Monte Carlo time. You see that for a lattice spacing around 0.1 fermi you have a good evolution, while as you go below 0.05 fermi the algorithm gets stuck: it is not ergodic anymore, it is not able to explore the different topological sectors, so you cannot sample the topological charge distribution p of q, which is what is needed. Now, at finite temperature a small lattice spacing means a high temperature, because the temperature is the inverse of the compactified temporal extent; so we have a difficulty in computing the topological susceptibility at high values of the temperature, which is
what we would like to do in order to have relevant information for axion cosmology. So, this is the result that we have obtained in our study, which is surprising, because we obtained an exponent for the decrease of the topological susceptibility which is half the prediction of the dilute instanton gas approximation. However, we are able to go up only to, say, 500 or 600 MeV, and we would need to go up to a few GeV to say something really relevant for axion cosmology. If we believe that this exponent for the decrease of the topological susceptibility is true, we get a value of the axion mass today which is around 10 micro-eV, while the dilute instanton gas approximation would predict a value which is around 100 micro-eV. This difference is fundamental for experiments, because it means detectable or not detectable by the experiments which are available today; and in this case we need new algorithms to go to higher temperatures, and there are several possibilities which are being explored. So, I come to my conclusions. I have shown you that numerical simulations of QCD on a lattice represent a numerical challenge which has been approached successfully for several aspects. We have had progress both in the development of algorithms and of new machines, so that we have now reached a mature stage where we can make precise QCD predictions about several aspects, like hadron masses. But there are still some hot issues which need progress, and in this case progress does not mean just improvement of the algorithms, it means a breakthrough: we need some really new algorithm to deal with the freezing of topological modes in the continuum limit, or to be able to compute QCD at finite baryon chemical potential. So, we need a breakthrough in the algorithms or in the computational approach, and of course one possibility for the future is also quantum computing, about which we will have a few talks in this conference. So that's all, thank you.