Thank you. So, good morning everyone. This is the last lecture. I'll be here through the coffee break, but at the end of the coffee break, unfortunately, I have to leave. So make sure that if you have questions at the end you come up to me, because that's the last opportunity; but of course you do have my email address, so feel free to contact me, to send me mail, if you have questions.

Let me start today's lecture from where we've been: we discussed a bit to which extent QCD calculations can be accurate, and what tools we have developed to better understand the structure of the final states. I gave some examples of processes for which we have next-to-leading-order results; precision there is in the range of 10%. For jets, we discussed yesterday the top cross-section a bit, and I told you that we are down at the level of 5%; nowadays there is a next-to-next-to-leading-order calculation available. The question is how far we can really go. Of course, one can in principle arbitrarily improve on the theory side by just doing more and more advanced calculations — two loops, three loops, and so on. It's a very complicated programme, because it's not enough to have a calculation of, say, a three-loop matrix element. That part is something that anybody can do, in the sense that it's you sitting down and doing a calculation, if you can. But at that level of precision you also need the parton distributions, the PDFs, and the PDFs are not something that you can settle yourself: you need very accurate data, and you need very accurate predictions for the specific observables that have to be compared against those data. So it's not obvious; it's very hard to extrapolate what the ultimate precision will be in the future. Certainly things will continue improving.

So one interesting thing is to try to see if there are other ways in which we can achieve precision by playing with what we already have, and one possibility that I would like to introduce you to is the idea of looking at cross-section ratios at different energies. Namely, we look at a process at, say, 7 TeV collision energy, and we look at exactly the same process at 14 TeV. Now, this is something that four or five years ago we would never have considered, simply because the LHC was supposed to start at 14 TeV and to run all of its life at 14 TeV. For whatever reason — the dipoles could not cope, as they were, with the higher energy — it had to start at lower energy. So we do have a large data sample at 7 and 8 TeV, and now we're moving to 13 TeV. This provides opportunities, and people started thinking about how to exploit them.

So what is suggested here is to look at the ratio. We take a process X, and E1 and E2 are two different beam energies, and we look at the ratio of the cross-section for process X at one energy over the other. What happens when we take this ratio is that typically several sources of theoretical systematic uncertainty cancel. For example, the mass of the top is the mass of the top: it doesn't matter what the energy is. So even though we don't know the mass of the top very precisely — and therefore the top cross-section has some residual 3% uncertainty coming from not knowing the top mass — what we do know is that the mass of the top will be the same at 13 TeV and at 8 TeV.
So when we look at the systematic effect of the top mass on the cross-section ratio — which means that we take the ratio and calculate it for different values of the top mass — we have to take exactly the same top mass in the numerator and in the denominator. This ratio will not be exactly independent of the top mass, because if the top is heavier the ratio will be slightly larger — at the higher energy the penalty for a heavier top is less severe — but it will certainly be much, much more stable. Likewise alpha_s: if we have an uncertainty on the value of alpha_s, then yes, we have it, but it is fully correlated. There are PDF correlations, and the expectation is that they will largely cancel. Also scale uncertainties, namely the uncertainties due to the intrinsic lack of higher-order corrections, should be correlated, because if we're looking at exactly the same process, then at the partonic level we're dealing with exactly the same object, and the scale variations, which arise from the presence of logs coming out of the renormalization procedure, will be fully correlated. It is legitimate to take the same scale at one energy as we take at the other, because again, to the extent that we are dealing with exactly the same process with exactly the same kinematics — say, the same pT if we're looking at jets — the loop corrections that give rise to the renormalization will be the same. And likewise experimentally, of course, in taking ratios several of the experimental systematics will cancel. So the expectation is that these ratios can be predicted theoretically with a precision well below one percent, and hopefully experimentally one can achieve a similar precision.

In addition to that, one can look at double ratios: we take two processes, we look at the cross-section ratio of one process relative to the other at a given energy, and then we take the ratio of this ratio at two different energies. One reason to do that is that in this way there is absolutely no luminosity uncertainty. What is the luminosity uncertainty? Luminosity is pretty much the measure of the number of collisions that have taken place. If I want to know a cross-section, I count the number of events out of the number of collisions, and that tells me the probability of that process. But this requires knowing exactly how many collisions have taken place, and it is very hard, experimentally and from the point of view of the accelerator, to determine very precisely the number of collisions — the intensity of a bunch, say — because we want to know it to one or two percent, and that's very hard to do. Of course the luminosity is exactly the same independently of which process I'm looking at. So if I measure the Z cross-section and the W cross-section by counting events with Z boson decays and W boson decays, and I take the ratio of these numbers of events, the absolute luminosity drops out. That's one reason why we do this. The other reason is that the luminosities are not correlated at different energies, because those are independent runs, and therefore they have independent systematic uncertainties. The typical uncertainty at the LHC on the determination of the luminosity, of the absolute event rate, is of the order of 2.5 percent; so that by itself sets the optimal limit, the asymptotic limit, on the precision of any absolute experimental measurement.
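To make the cancellation mechanism concrete, here is a toy numerical sketch; the cross-section values and the top-mass sensitivities in it are invented placeholders, not the real NNLO numbers behind the table discussed next.

```python
# Toy illustration of why a fully correlated systematic (here, the top mass)
# largely cancels in the ratio sigma(14 TeV) / sigma(8 TeV).
# All numbers below are placeholders, not real NNLO cross sections.

def sigma_ttbar(energy_tev, dm_top=0.0):
    """Toy ttbar cross section (pb) with an invented top-mass sensitivity."""
    base  = {8: 250.0, 14: 950.0}[energy_tev]    # placeholder central values
    slope = {8: -0.030, 14: -0.025}[energy_tev]  # fractional change per GeV of m_top
    return base * (1.0 + slope * dm_top)

for dm in (-1.0, 0.0, +1.0):   # shift m_top by the same amount at both energies
    s8, s14 = sigma_ttbar(8, dm), sigma_ttbar(14, dm)
    print(f"dm_top = {dm:+.1f} GeV:  sigma(8) = {s8:6.1f} pb,"
          f"  sigma(14) = {s14:6.1f} pb,  ratio = {s14 / s8:.4f}")

# Each cross section moves by ~3% per GeV, while the ratio moves only by the
# small *difference* of the two sensitivities (~0.5% per GeV in this toy).
```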
So let us see in practice what that means. This table contains, as you see, many numbers: there are several processes, and the table gives the ratios of given cross-sections, or ratios of cross-sections, at 14 TeV normalized to 8 TeV. This, for instance, is the double ratio: we take the ttbar cross-section normalized by the Z cross-section at a given energy, divided by that same ratio at the other energy. This is the value of that specific double ratio — it's just a number — and these are the uncertainties, in relative percent, coming from the PDFs, from the uncertainty on alpha_s, and, in this case, from the scales, i.e. the perturbative uncertainty due to not knowing the process beyond a given order in perturbation theory. As you see, all of these numbers are in the range of, or below, 1 percent. The ttbar cross-section itself — the absolute ttbar cross-section at 14 TeV divided by the ttbar cross-section at 8 TeV — is known to a fraction of a percent: it goes from minus 0.38 percent to plus 1.07 percent, so roughly plus or minus 0.7 percent, better than 1 percent. I remind you that the scale uncertainty on the absolute cross-section at a fixed energy is more like 3 percent, so that systematics is improved by a factor of four. The PDF uncertainty, which was about another 3–4 percent, is down to 0.8 percent, and so on. There are processes, for example Z and W production, where things are even more remarkable: we're talking about 0.1 percent — this is really per-mille accuracy: 3 per mille, 1 per mille, 4 per mille, 3 per mille in the PDF uncertainties.

Now, when we have a prediction which is so precise — and the prediction is very precise also because this ratio is indeed very constrained — we cannot expect to do a measurement and find this ratio 20–30 percent away from our prediction. Nevertheless, with this level of accuracy, by cancelling many systematic uncertainties, we are sensitive to possible slight deviations from what we expect. For example, the PDFs could indeed involve extrapolations that are not very well controlled, because by going to higher energy we change the range in x, and therefore this is a very important independent test of the PDFs. And this is shown in the table below. The PDF uncertainties above came from a given set of PDFs; down here we are comparing the results and the uncertainties of different sets. These labels correspond to — you remember what we saw on the first day — the different groups doing PDF fits. If you go through this table, you see that there are several examples in which predictions from different PDF sets differ at the level of 3 or 4 sigma, even in the case of the W and Z rates. So that means that once we have precise measurements at 13 TeV, we can look at the ratio of, say, the W cross-section at 13 or 14 TeV to that at 7 or 8 TeV, and we have a probe: we can tell whether a set of PDFs is right or wrong at the level of 3 sigma. It's an additional, useful, independent tool to improve our knowledge of the proton.

In addition to that, it could also be of interest to probe possible new physics. So let's look at these considerations. Let us assume that we're looking at a final state which receives both standard-model and beyond-the-standard-model contributions. This could be, for example, top production.
I mentioned yesterday at some point that we produce ttbar, but we could also be producing stop–antistop, the supersymmetric partner of the top; the stop decays to the top, so the final state of stop–antistop will contain a ttbar pair. So when we measure the ttbar cross-section, we might be including a contamination from other, new physics. What the experiments see is then the sum of the standard-model contribution plus the BSM contribution.

Now let us look at the ratio of what the experiments see at two different energies. Here I put 7 and 8, but you can just think of E1 and E2. So this is our ratio, R between 7 and 8 TeV, and we can plug in this expression, taking into account that both numerator and denominator contain a standard-model and a BSM contribution, and expand by assuming that the BSM piece is small — if it's big, then presumably things will be seen right away; we are working under the assumption that the BSM contribution is so small that one really needs a very sophisticated, very precise probe to see it. That justifies expanding this ratio in powers of the BSM contribution. What you can easily find is that this ratio is equal to the standard-model ratio times one plus something which is proportional, of course, to the ratio between the BSM and the standard-model contributions — that's obvious, the larger the BSM the larger the correction — and then there is a term, this delta_{7,8}, which is expanded here (and written out explicitly below), and which in words represents how much the ratio of BSM to standard model varies as a function of energy, because obviously that is what we are sensitive to. If the BSM contribution at 7 TeV were 10%, and the BSM contribution at 14 TeV were also 10%, then when we take the ratio we would not see a difference: if BSM equals, say, 0.1 times the standard model at both energies, then standard model plus BSM equals 1.1 times the standard model at E1, and the same 1.1 at E2, so the ratio is exactly the standard-model one and we see no deviation. So in order to see something, we need the BSM contribution to vary as a function of energy in a different way than the standard-model contribution, and that's why there is this additional term. The BSM contribution has to be there, and it has to evolve in a different way — that is the bottom line.

And why should it evolve differently? Well, there are many reasons. For example, the origin of this BSM contribution could be different. If we are looking at, say, the ttbar cross-section, suppose we have a contribution that comes from a resonance, a Z' that decays to ttbar, and suppose this Z' has a mass of, say, 500 GeV. That is larger than the threshold for the production of ttbar: in order to produce this resonance we need to put in at least 500 GeV — that's point number one. Number two, the Z' will come from a q qbar annihilation, while ttbar mostly comes from gluon-gluon fusion. As we go to higher energy it becomes much easier to produce a very heavy object, because there is more energy available, so the relative fraction of ttbar events that come from the Z' will be significantly larger at higher energy than it is at lower energy, simply because at higher energy it is easier to produce the Z'.
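Written out, the expansion just described takes the schematic form below (my notation; the slide's may differ slightly), valid when the BSM contribution is much smaller than the standard-model one:

```latex
R_{7,8}^{\rm obs}
  = \frac{\sigma^{\rm SM}_{8} + \sigma^{\rm BSM}_{8}}
         {\sigma^{\rm SM}_{7} + \sigma^{\rm BSM}_{7}}
  \;\simeq\; R_{7,8}^{\rm SM}\,\bigl(1 + \delta_{7,8}\bigr),
\qquad
\delta_{7,8} \;\equiv\; \frac{\sigma^{\rm BSM}_{8}}{\sigma^{\rm SM}_{8}}
                       - \frac{\sigma^{\rm BSM}_{7}}{\sigma^{\rm SM}_{7}} ,
```

so the deviation vanishes if the BSM fraction is the same at the two energies, exactly as stated above.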
That difference in evolution is what gives rise to the energy dependence. So here are some examples of how the parton luminosities for different initial states evolve — more precisely, the ratios of luminosities at different energies. The bottom line is the following: since these cross-section ratios between energies are predicted with a precision at the level of a fraction of a percent, there is sensitivity to contributions from BSM processes at the level of a few percent. Typically, if we had a three-percent contamination of the top cross-section, we would not see it in the measurement of the top cross-section itself, because we know there is a three-to-four percent uncertainty, so it would be hidden within that uncertainty. But that three-percent contamination could be visible if we look at the ratio between 14 TeV and 7 TeV, just because the ratio is much more robust, much more precisely determined.

One example of a possible application of these ideas came up during Run 1. At some point during Run 1 there were measurements of the WW cross-section at 7 TeV and of the WW cross-section at 8 TeV. This is the theoretical prediction, the red point with its uncertainty. These were the measurements at 7 TeV from the experiments: you see they are slightly high — only one sigma, but certainly high, all of them. And this was the first measurement reported by CMS at 8 TeV, and it starts out being high again. By looking at the relation between theory and data here and here, you would not conclude much, because, as we said, it's one sigma — these error bars are quite large. On the other hand, if we take the ratio between these two measurements, a lot of the systematics goes away in principle, and therefore the discrepancy between the ratio of the data and the ratio of the theory could be enhanced and become maybe three or four sigma. In this case, unfortunately, the experimental uncertainty is not coming from systematics; it's really coming from statistics, because at 7 TeV especially the luminosity was relatively low, so most of this uncertainty band is actually statistics, and about statistics there is nothing you can do — it is fully uncorrelated. On the other hand, asymptotically, when the statistics are good and we are just down to fighting against the systematics, this is a powerful technique. Incidentally, the ratio of cross-sections, 8 TeV over 7 TeV, for the WW final state has an uncertainty that, as you see, is at the level of 3 per mille; so in principle it really is a very solid prediction.

Now, just to make sure that you don't walk out of this room assuming that there is no physics there, this is the current situation with the WW cross-section. On one side there are now more precise experimental measurements at 8 TeV — both ATLAS and CMS have completed their measurements. On the other side, the major new ingredient was the completion of the next-to-next-to-leading-order calculation for the WW cross-section. For 7 TeV, you see, it adds about a 10% effect, and there is another roughly 10% effect at 8 TeV. This is the contribution to the WW cross-section coming from the Higgs channel: it's not immense, it's like 6 or 7%, but it's important. And this is the comparison with the data. So these are the data: 54.4 and 52.4, plus or minus 5 and 6, from ATLAS and CMS.
This was the original next-to-leading-order calculation, and you see the discrepancy is indeed almost 2 sigma; but if we go to next-to-next-to-leading order it's 50 against 55, 53 plus or minus 5, so it's down to being 1 sigma. If we look at the 8 TeV data, here there is a bit of an internal inconsistency in the data themselves. ATLAS reports 70, which is certainly high, because even at next-to-next-to-leading order we have 60; so it's 70 plus or minus 6 against 60, which is 1.5 to 2 sigma. CMS reports 60. One has to add the gluon-gluon contribution, which is not — sorry, I apologize: this measurement here by ATLAS includes the Higgs, so we have to add this number, the Higgs contribution, to the pure next-to-next-to-leading-order WW cross-section. The theoretical prediction is then 60 plus 4, that is 64, which is compatible with this 70 within one sigma — a sigma and a half, perhaps. And here CMS does not include the Higgs, the Higgs is subtracted: the measurement is 60 and the theory is 60, so this is perfectly compatible. So there is a slight discrepancy, at the level of a couple of sigma, between ATLAS and CMS, and we have to wait. We will wait for these measurements at 14 TeV, and in particular, if there is something weird, we will look at the variations.

So the question is: why, at 8 TeV, did one experiment decide to keep the Higgs in while the other decided to leave it out? It's just a choice. The measurements at 7 TeV preceded the established confirmation of the existence of the Higgs, so the experiments were just looking at WW pairs; they could not do the subtraction, because they didn't know that the Higgs was there. That's why. And then, because of historical heritage, this analysis got done in the same way at 8 TeV by ATLAS. The CMS measurements came out a few months ago, so at that point there had been time to refine the tools to do the subtraction of the Higgs. So there is no specific reason. Well, you know, this is the Higgs going to WW and then to leptons, so it's a contribution that is there, and in principle, on an event-by-event basis, you don't know whether it's a Higgs or whether it's a WW pair, so you have to subtract it by running the Monte Carlo. If you wish, the key difference between including it and removing it is that including it means not doing anything. Usually "including" means that you do something; no, here including means you are not doing anything, because you just look at the data, you count events, and that's the cross-section. No Higgs means more work, because you have to run the Monte Carlo through your experiment, see what the acceptance is, and then decide to take those events away. So it's less clean: it's a measurement which is not just an experimental measurement, it also includes a theoretical bias, and there it's a matter of philosophy, what you prefer — to the extent that the modelling of Higgs production is a safe thing. One also has to take into account that the contribution is not immense, and you see it's quoted as being — sorry, this is the theoretical cross-section; I cannot remember how large the experimental systematics coming from the removal of the Higgs is, but anyway.

What's the benefit one derives from removing the Higgs? The benefit is that, theoretically, the calculation of the Higgs contribution and the calculation of the pure WW are two different calculations — and that is true only up to a point, because at some point there start being interferences.
So if you just try to remove every contamination, it boils down to pure WW; in theory it's a more straightforward theoretical calculation. But it's a matter of philosophy, and as always the only thing that is really important is that exactly what was done be explained. They could quote the cross-section divided by two, and we could argue about whether you want the cross-section divided by two: if you tell me that you divided by two, fine, I will divide my calculation by two, and I can still live with that. What's important is that things get documented very precisely, so that everybody can reproduce them. Incidentally, this plot here shows, as a function of energy, the theoretical cross-section at leading order, at next-to-leading order — which as you see is a large correction — and finally at next-to-next-to-leading order, which is a relatively small addition, and which therefore suggests that the next order will probably not be immense.

All right, this is another way of looking at it — trying to produce some precise measurements and precise theoretical predictions. This is now a ratio at fixed energy; these are data from, I believe it's written here, 7 TeV, yes. This is the ratio of the W plus jets and the Z plus jets cross-sections, as a function of multiplicity, or as a function of the pT of, for example, the second-leading jet. You see the scale here: for example the absolute rate, how many W plus jets events there are divided by Z plus jets. Experimentally the measurement is very precise as long as there is statistics, and you see it agrees with theory — at least with this theory — at the level of, literally, 3–4 percent; and here too you see it's excellent agreement. Of course, occasionally there is some theoretical prediction that deviates a lot, and then people go back, look at it and figure out exactly what the issue is; here, in fact, something was found as the origin of this departure and it has been fixed, so next year we come back and we have better theoretical calculations. Again, before you look at these plots and say "I've made a discovery, because this is a 2-sigma deviation from the standard model", you have to start doing some validation: there is a whole process of convincing yourself that the anomaly you've seen is indeed physics that cannot be understood within the standard model. Here, if you just take a different calculation you get perfect agreement, and that is an indication that perhaps there is something in your code that has an issue.

Okay, so let me go back to the top quark, which I quickly started discussing yesterday. The top quark mass measured directly by the experiments is now down to the level of — I don't know if you can read it — this is the world combination from March 2014, the last time the average of the Tevatron and LHC data was taken: 173.34 plus or minus 0.27 plus or minus 0.71, so plus or minus 800 MeV more or less; it's below the GeV.
Since then there have been new results from the Tevatron experiments, which are again very accurate, with uncertainty below one GeV, but somehow high by a couple of sigma relative to the world average; and in parallel there is a measurement from the LHC, again very precise — the single most precise measurement of the top mass from the LHC — which is a sigma and a half lower than the world average. So if you add them to the world average, at the end we still stay at the same level; however, these two measurements are presumably something like three and a half sigma away from each other, so it's a bit bothersome. These things happen — that's why we have statistical and systematic uncertainties, chi-square distributions, probabilities and all of that.

Now, we discussed the relation between the mass and the cross-section, and I reminded you yesterday that if we're really talking about going below one GeV, we absolutely need control of the combined theoretical and experimental systematics on the cross-section to better than three percent — two and a half, two percent. So let us now spend some time on the issues that arise when we directly measure the top mass in hadron collisions. I had prepared this couple of slides to introduce the topic; this one contains the expression for the width of the top (written out below), and many of these considerations were in fact explored in great detail yesterday during Michael Peskin's lecture.

There is one thing, however, that I would like to draw your attention to in this equation. The way I wrote this equation, with a G_Fermi in front, is a bit bizarre — it's mostly historical — because when we actually do the calculation we just start from the Feynman diagram in which the t goes to b and W, and there is a weak coupling g in this diagram; if we calculate this diagram, square it and integrate over the phase space, the result we get is proportional to g squared, so there is no G_Fermi any longer, and that is exactly the equation that Michael wrote down on the blackboard yesterday. The one thing that I want you to go back and think about is what happens if I take the W mass equal to zero in this expression. If you think of the W mass as just a number that you plug in to get the rate, what's wrong with taking M_W to zero? You see that if we take M_W to zero this is not defined: this blows up, the width becomes infinite. And when I say zero, you could say "well, it's not exactly zero" — fine, take it to be 1 GeV, take it to be half a GeV: all of a sudden this number becomes huge, immense, much bigger than the top mass itself. So that means there is a problem there. What I would like you to think through — because it will help you understand a bit better the issue of symmetry breaking, the Goldstone bosons and everything — is whether there is a way in which the limit M_W to zero can be defined, and whether there is a unique way of defining that limit. I can see at least two different ways in which you can define M_W to zero, and in principle they could give different results. And in general, what are the implications of M_W to zero? Can I take M_W to zero without doing anything to m_top, for example, or to m_b? So you sit down, you look at this, you think it through, and you will learn something hopefully useful. Unfortunately I will not be here to discuss it with you, but I'm sure Michael will be happy to, in case it's needed.
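For reference, here is the expression being referred to, written in a common convention (neglecting the b mass and setting |V_tb| = 1, and leaving out the QCD correction factor discussed next); the second relation, connecting G_F to the weak coupling g, is what makes the M_W → 0 behaviour transparent:

```latex
\Gamma_t \;=\; \frac{G_F\, m_t^3}{8\sqrt{2}\,\pi}
\left(1-\frac{M_W^2}{m_t^2}\right)^{\!2}
\left(1+\frac{2M_W^2}{m_t^2}\right),
\qquad
G_F \;=\; \frac{g^2}{4\sqrt{2}\,M_W^2}\,.
```

Substituting the second relation into the first shows that, at fixed g, the width scales like g^2 m_t^3 / M_W^2, which is the quantity that blows up as M_W → 0 — the puzzle posed above.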
So, the other thing that I wanted to do with this expression for the width is just to plug in numbers and give you the result. This extra piece, the 1 minus 2 alpha_s over 3 pi and so on, is just the one-loop QCD correction — we emit gluons, we exchange gluons — and it is a correction of the order of a few percent. If you put in numbers, what you get is about 1.34 GeV. So the top is a rather wide object — not as wide as the W and the Z, but certainly wide — and it's so wide that it decays very quickly; it decays so quickly that it doesn't have time to hadronize. There is some time required for a hadron to form. The top is very heavy, so if you could form a top hadron, a top meson, you would have the top sitting pretty much still and the light quark, say a ubar, going around it at a large radius, because it's very light: the size would be similar to the size of a pion, but the system would be dominated by the top sitting at rest in the centre — like the solar system, with the sun in the middle and the earth going around — and the radius of the object would be determined by quantum mechanics, by the uncertainty principle, by the inverse of the light-quark mass. For this system to come together there has to be at least the time for the light quark to take a tour around the centre, around the top, and that is too long compared to the lifetime of the top itself. So the top decays before hadronizing, and that means that, to first approximation, there are no top hadrons. I will come back to this issue later.

Now, if you want to measure the mass of a quark other than the top — any of the other quarks, say the b quark or the charm quark — we know how to do it. For example, we can look at the B meson: we look at exclusive decays of the B meson — B goes to pi pi, B goes to KK, B goes to J/psi K — we fully reconstruct the final state of two, three, four particles, and the mass squared is the square of the sum of the momenta; that's the mass of the hadron. Another thing we can do is take e+e− and go to a resonance, a quark–antiquark resonance, a quarkonium state, for example the Upsilon, and the mass of the Upsilon is simply the square root of s of the e+e− collision. Once we have the measurement of the mass of the hadron, then of course going to the mass of the quark requires solving the binding problem, because the mass of the hadron is the mass of the constituents plus the binding energy and the kinetic energy, and that has to be subtracted if we want to get down to the mass of the quark. But we know how to do that: there are potential models, there is lattice QCD, so from lambda_QCD — alpha_s, if you wish — and the masses of the hadrons we get the mass of the b quark. That's what is done in the literature, and that's where our knowledge of the quark masses comes from. If the top hadronized, we could do exactly the same: we produce a top quark, the top quark is joined by an antiquark to form a top meson, we measure — at least on paper — its invariant mass as the square of the sum of all the decay-product momenta, and then from a lattice calculation we get the mass of the top quark from the mass of the top hadron. That doesn't happen, because the top decays first, and then the following happens. If we are looking at a hadron collision, this diagram represents the production process: q qbar goes to a gluon, which then goes to ttbar — one of the possible processes.
The top is coloured. It decays to a b and a W; the W could decay leptonically, it could decay to a quark–antiquark pair, it doesn't really matter. What is important is that the W is a colourless object, and therefore the invariant mass of the sum of the W decay products is indeed a good representation of the W. But you see that it is impossible to add to this collection, uniquely and without ambiguity, the particles coming from the decay of the b quark, because the b quark is a quark and the b quark does hadronize; and in order for the b quark to hadronize it has to pick up an antiquark from somewhere, because by itself it cannot go anywhere. If we go back to the diagrams that we drew a couple of days ago — for example, suppose we have a gluon emitted from the initial state, splitting into a quark–antiquark pair — you see that, the way the colour flows, this antiquark emitted from the initial state is colour-connected to the b quark coming from the top decay. So the b quark has to go and pull in this antiquark in order to form a B meson, and when this B meson decays we take the particles coming from its decay, and the best we can do is to put all of these together in the square of the sum of the momenta to get an experimental measurement of some mass. But this is not the mass of the top, because there is energy coming in from this antiquark, and on an event-by-event basis that energy will be a different number: there is no way we can just isolate the decay products of the top.

Now, a priori this is just a nuisance; it's not really a deep fundamental problem, because if I had an absolutely perfect description of the hadronization process — in other words, if I were guaranteed that my description of how this b picks up an antiquark from the rest of the event is right — then fine, it just means that this m squared will not be a peak but a distribution, a distribution that accounts for the fact that occasionally there is a bit more or a bit less energy, so that instead of a sharp resonance corresponding to a fixed mass there is something with a broader shoulder, representing the fact that occasionally I pull in energy from the rest of the system. If I know how to parameterize that shape as a function of the top mass, I can extract it.

So here is a bit of a cartoon of what happens. This is the hard process; then we attach the shower evolution, as we did the other day: the b quark starts emitting gluons, and of course by the time we get to the end of the b jet it has lost its colour — but it leaves the colour behind. Then we split the gluons and form the colour-singlet clusters, and as you see there is a whole set of clusters here which all belong to the top; but there will always be one quark, somehow, that has to go and look for a partner outside, and that forms a cluster — a singlet cluster — that, when it decays, decays to possibly several hadrons whose paternity is absolutely ambiguous. So what should we do? Should we keep them inside the mass determination or not? As I said, this automatically introduces a model dependence, which you would rather not have, because this is the kind of model dependence that we don't really control from first principles — this is non-perturbative QCD — so you really have to be confident that your hadronization modelling is safe. And this is just repeating the same thing in other ways. OK, so that is an issue, and you could argue that one day we will be able to put together a collection of observables
that we can test against the Monte Carlos, so that, once we've done that, we really have a guarantee that the Monte Carlo describes the hadronization process correctly. At this level we have uncertainties of the order of a few hundred MeV: what we do is take different models, use these different models to describe precisely how that line shape gets deformed, do the exercise of extracting the top mass from the data, and see that the systematics is at the level of a few hundred MeV. So it's fine; it's consistent with the uncertainties that are being quoted.

[Question from the audience: I guess the extracted mass of the top somehow depends on the size of the cone?] Yes — he is asking the following: depending on how I define the cone, I will get more or fewer particles inside it. Yes, of course; that is one more reason why there is a broadening. I am not picking up exactly the right particles — I cannot be guaranteed that I collect every single particle that comes from the top decay — and because I'm using a fixed cone I will get a different number event by event, so I get a distribution, and then I parameterize it. If I can generate events with my Monte Carlo in which I say the top mass is 170 GeV, and I use a cone of, say, 0.4 to define my jets, then I get a distribution of m squared — of the square root of m squared — which is this one; this was for 170. If I take 171 I get, from the Monte Carlo, the distribution which is this one, and if I take 172 I get this one. So I generate templates of distributions and I compare them against the data; the data will fall some place — maybe they fall here — and then I conclude that the mass is 171. (A toy sketch of this template logic is given at the end of this passage.) You know, experiments don't measure masses directly anyway: when a particle goes through a calorimeter it interacts, it deposits energy, the energy becomes light, the light is collected by a fibre and goes to a photomultiplier, the photomultiplier counts electrons, and we turn the number of electrons that have been seen into GeV through a process of calibration. It's like a scale: you put something on it and it tells you 1.5 kilos. A measurement is a complicated object, so this is just part of calibrating and defining templates that connect what is happening — and the way what is happening is measured — to the units that we consider interesting.

So let's say we have a Monte Carlo that describes the whole process absolutely perfectly; then we can pull out the mass of the top. What is the mass of the top in the Monte Carlo, however? It's the number that we put in; it's the mass of the top quark that we use when we do the calculation of the process. And the issue is: what does that mass correspond to? There is a debate, which some of you may have heard of, as to whether it's a pole mass, a running mass — what is that object? Because, as you know, masses, like couplings in field theory, run, so we have to define the mass in a very specific way as soon as we are dealing with radiative corrections, if we want to be accurate. The pole mass is defined, in general, as the mass in the propagator; for a stable particle the pole mass is exactly its weight, if you wish: when the particle is stable, it is sitting there and its mass corresponds to the pole mass. For particles which are unstable it is not so well defined. Alternatively, we can define the mass as the Lagrangian mass parameter, typically evaluated at the scale of the mass itself; that's what we usually call the MS-bar, or running, mass, and that number differs from what you would call the pole mass.
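As an aside, here is a toy sketch of the template logic described a little earlier — generate the reconstructed-mass distribution for several assumed top masses and see which one the data prefer. Everything in it (the smearing, the offset, the event counts) is invented purely for illustration and has nothing to do with the real detector-level templates:

```python
import numpy as np

# Toy version of the template method: pseudo-data are compared to
# 'Monte Carlo' templates generated for a range of assumed top masses.
bins = np.linspace(140.0, 200.0, 31)            # reconstructed-mass bins (GeV)

def toy_template(m_top, n_events, seed):
    """Placeholder template: a smeared peak; the offset mimics energy pulled in or out."""
    rng = np.random.default_rng(seed)
    sample = rng.normal(m_top - 2.0, 8.0, n_events)
    hist, _ = np.histogram(sample, bins=bins, density=True)
    return hist

data = toy_template(171.0, n_events=50000, seed=42)   # pretend these are data

masses = np.arange(168.0, 175.0, 0.5)
chi2 = [np.sum((data - toy_template(m, 200000, seed=0)) ** 2) for m in masses]
best = masses[int(np.argmin(chi2))]
print(f"template with the best match: m_top = {best} GeV")   # lands near the 171 put in
```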
In the case of the top, the difference between the pole mass and the MS-bar mass is about 10 GeV, so it's not a small number. The difference is of order alpha_s times the mass, because it comes from one-loop QCD corrections: alpha_s is about 0.1, the mass is about 170 — actually it is alpha_s divided by pi times the mass — so it's a bit less than 10 GeV; a significant number.

To make you appreciate why this is relevant for the top quark in hadron collisions, I want to show you a simple example taken from electromagnetism: the determination of the mass of the muon in an idealized experimental context that I'm about to illustrate. So let's take the muon; the muon goes to an electron and two neutrinos — or, say, the muon goes to electron and photon, if it did, since this is a conceptual exercise. The pole mass of the muon is the mass we would reconstruct if we took a muon in vacuum, let it decay, and could measure the momenta of the electron, the neutrino and the anti-neutrino: this expression, the square of the sum of the momenta, is the muon mass squared. Now let's take a muon instead and put it in interaction with a field: we take the muon and put it around a proton, so we bind the muon to a proton; we have something like a hydrogen atom with a muon instead of the electron. What we are dealing with now is a system whose total energy is the mass of the proton plus the mass of the muon plus the kinetic and potential energy — the kinetic energy, because the muon is moving, and the potential energy, because there is a field. You can easily calculate the contribution of kinetic plus potential energy using, for example, the virial theorem, and what you get is minus m_mu alpha squared over two, where alpha is the electromagnetic coupling. So we can still write the total energy as the mass of the proton plus the mass of the muon "star", where this m star is m_mu times (1 minus alpha squared over two): we are absorbing into the mass of the muon the effect of the field, the potential energy.

Now, why is this mass m star interesting? The reason is that if I now measure the muon mass by looking at the muon decay from this system, that is exactly what I measure. I wait long enough until the muon decays, I go and measure the energy of the electron, the energy of the neutrino, the energy of the anti-neutrino, I do the sum, I take the square — and I am not going to get the muon mass, I am going to get the m star mass, which differs by order alpha squared. The reason I get a different mass is that the moment the muon decays, its charge is transferred to the electron; the electron at the beginning is sitting where the muon was sitting, so it is embedded in an electromagnetic field. Now, the electron has a lot of energy — all the energy that came from the decay — so it cannot stay bound around the proton, and it goes out to infinity, where we do the measurement; but to get to infinity it has to work its way out through the potential, and if you calculate how much energy it loses as it climbs out of the field of the proton, that is exactly of order m_mu alpha squared, and it matches precisely this alpha squared over two correction. So if I measure the mass of the muon when the muon is bound to a proton, I don't get the muon mass; I get a different mass, m star.
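Putting that bookkeeping into formulas (hydrogen-like system, reduced mass approximated by m_mu, lowest order in alpha):

```latex
E_{\rm tot} \;=\; m_p + m_\mu + \langle T + V\rangle,
\qquad
\langle T+V\rangle \;\overset{\text{virial}}{=}\; \tfrac12\langle V\rangle
\;=\; -\tfrac12\, m_\mu\, \alpha^2
\;\;\Longrightarrow\;\;
E_{\rm tot} \;=\; m_p + m_\mu^{*},
\quad
m_\mu^{*} \;=\; m_\mu\!\left(1-\frac{\alpha^2}{2}\right).
```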
And now you see: the tops whose mass I am trying to measure are not objects sitting alone in vacuum. They are always embedded in something, because they are created in the context of a proton-proton collision and they interact with all of the gluons that are around, and so on. That means that I am certainly bound to measure something which — even if I could define the pole mass of the top — is slightly different from it. This of course gives rise to a lot of discussion about whether the pole mass itself is well defined, how much of this energy I have to absorb, and whether I can find the equivalent of m star. The m star that I introduced for the muon was well defined: it included the potential energy and it was a perfect reflection of the kinematics of the muon decay products. Can I do exactly the same for the top? Can I define a mass of the top that already includes some part of the radiation field, of the energy? This is what the other masses, other than the pole mass, do: they reabsorb all of these effects into the mass of the top. The problem with the pole mass is that it absorbs these effects all the way down, integrating over gluon wavelengths which are infinite — zero-energy gluons — and that is a problem, because we don't know how to deal with infinite-wavelength gluons, because of confinement. So we need somehow to incorporate in the definition of the top mass the effects that are controllable from the point of view of perturbation theory — contributions to the energy that we can estimate — and the others, which are of non-perturbative origin, we will have to deal with in a separate way.

So the bottom line is that there is an intrinsic uncertainty on the mass of the top — on what exactly the mass is that we are observing — of the order of alpha_s times lambda_QCD (a combination which happens to come out comparable to the width of the top, but what enters is really lambda_QCD). The crucial point is whether that alpha_s is alpha_s calculated at the scale of lambda_QCD, so a number of order one, or alpha_s calculated at the top mass, which is about 10%, a small number. With alpha_s of order one at the low scale, this is a correction in the range of lambda_QCD, of the order of a few hundred MeV. So my conclusion is that, while there is a problem of principle, this problem is confined to scales and uncertainties below 500 MeV, and in fact even less.

For those of you who are working on similar issues, I just want to remind you — if you haven't seen it — that a couple of months ago a very important new calculation was completed by Steinhauser and collaborators: they calculated the four-loop contribution to the conversion between the pole mass and the MS-bar mass. One can systematically calculate, in perturbation theory, how to convert between pole and MS-bar. In the case of the b mass this perturbative expansion almost doesn't converge — each additional term is as large as the previous one — which is an indication that the pole mass is, in fact, not a well-defined quantity there. In the case of the top, however, this series, even now with the four-loop calculation, is still converging, and the uncertainty on the overall conversion is at the level of a couple of hundred MeV, which indicates that at least down to the 200 MeV level there are no issues.
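Schematically, the conversion series being discussed has the form below; only the well-known one-loop coefficient is written explicitly, and the higher coefficients — including the four-loop one from the calculation just mentioned — are left symbolic:

```latex
m_t^{\rm pole} \;=\; \overline{m}_t(\overline{m}_t)
\left[\,1 \;+\; \frac{4}{3}\,\frac{\alpha_s}{\pi}
        \;+\; c_2\!\left(\frac{\alpha_s}{\pi}\right)^{2}
        \;+\; c_3\!\left(\frac{\alpha_s}{\pi}\right)^{3}
        \;+\; c_4\!\left(\frac{\alpha_s}{\pi}\right)^{4} \;+\; \dots\right].
```

The leading term, (4/3)(alpha_s/pi) times the mass, is a shift of roughly 7–8 GeV for the top, which is the "a bit less than 10 GeV" difference quoted earlier.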
One small remark: there are measurements coming out from the experiments — occasionally we see them, at the Tevatron and now even at the LHC — of the difference in mass between the top and the anti-top. Now, we know that the mass of a particle is equal to the mass of its anti-particle; that's just CPT, it's fundamental, so — unless of course something fundamentally beyond the standard model is at work — we are not going to see any difference. That is certainly true for the Tevatron, which is p pbar: being p pbar, there is indeed full symmetry between top and anti-top. However, when we do the measurement of the top versus the anti-top mass at the LHC, it's not so obvious, because the initial state is proton-proton and the final state is quark–antiquark, so there is an asymmetry; it's like W plus versus W minus — we don't have charge symmetry in the initial state. So when we measure the mass of the top or the mass of the anti-top in pp collisions, in principle we are sensitive to fine details of the hadronization process. When the top quark decays it goes to a b, and the b has to pick up an antiquark from the rest of the event in order to form a hadron; if it were an anti-top it would go to a bbar, and the bbar has to pick up a quark in order to become a hadron. And the numbers of quarks and antiquarks available in the collision are different, just because the initial state is proton-proton: there are clearly more quarks than antiquarks. So in the process of hadronization a slight asymmetry could develop between the way the b jet from the top decay hadronizes and the way the bbar jet hadronizes, and therefore, if we do separate measurements of the top mass and the tbar mass, we might get slightly different numbers. That would not be an indication of CPT violation; it would simply be an indication of an intrinsic asymmetry in the problem. And it is interesting, because if that difference is measurable, then — since it is directly connected with the hadronization process — it would be a wonderful probe: we should be able to predict it; when we run our shower Monte Carlo we should be able to predict what the difference is. Now, all of the calculations I myself have done tell me that that number is very small: I was not able to detect a difference between the top and anti-top masses to better than the statistical precision of my calculations, and we are talking about certainly below 100 MeV, 50 MeV perhaps. But it's still interesting, and the measurement is now at the level of 270 MeV for the difference — 270 plus or minus 200 plus or minus 100 — and you see it's dominated by statistics; even the systematics is dominated by statistics. So certainly we will go down to 100 MeV, and in the future we should be able to push this test down to the level of maybe 50 MeV or perhaps better; I think that will be a very interesting exercise.

A couple of remarks. QCD effects depend on how long the top actually lives. I said the top decays and therefore it doesn't hadronize; but the longer or shorter it is around, the more or fewer opportunities it has to come together, because it needs to interact with the environment to hadronize — the top, or rather the b, needs this antiquark to come in. One could imagine that, depending on how long the top lives, this interaction with the environment changes: if the top lives very long and only then decays, the rest of the event will have had time to develop more shower, so there will be more soft gluons and more soft quarks, and therefore when the b has to hadronize it finds a given sample of objects with which it can hadronize.
If it decays very early — which it might — then of course the rest of the event is still undergoing the phase of very hard radiation, and the whole dynamics is different. So whether this is something that has to be taken into account in the modelling is something one would have to explore. The other thing is that occasionally — with a given probability which can be calculated and which is not negligible — the top lives long enough. If you look at the lifetime of a muon: in experiments where muons are kept in a storage ring, for example the experiments measuring the g minus 2 of the muon, the muons go on and on for a long time, and occasionally, with a probability of e to the minus t over tau, tau being the lifetime of the muon, you still find muons after ten, a hundred lifetimes. For the top, it's really a matter of surviving twice as long as its lifetime, because its lifetime is one over 1.3 GeV, while the hadronization time is of the order of one over, say, five, six, seven hundred MeV; so if the top lives twice as long, it will hadronize. So there will be tops actually hadronizing and forming top hadrons at the LHC; of course it's hard to tag them on an event-by-event basis, and the question is whether the fact that they hadronize interferes with our modelling of the top decays in the context of reconstructing the top mass. If the top lives long enough we can have a top hadron, and if both the top and the anti-top live long enough, maybe we even have time to form a toponium state, a t tbar bound state. In the t tbar bound state the decay will again be driven by the decay of the top, but there is some probability that it is actually the toponium that decays, going to gamma gamma or to something else. These are tiny probabilities, but very interesting new phenomena, with peculiar characteristics, may take place, and perhaps in the future they will become new, alternative probes of top dynamics.

So let me switch gears now and say a few things about jets at very high energy. There is interest in this because, with the LHC now going up to 13 TeV, it will be exploring jets all the way up to 4 or 5 TeV in transverse energy. I discussed briefly yesterday the interest in studying the internal structure of jets, because by looking at the internal structure we could hope to understand whether it's a quark jet, or whether it's maybe a W decaying to hadrons; and of course the higher the energy, the more everything gets confused, because the jet gets squeezed and the multiplicity gets larger. So here are just some examples of some quantities for jets at 1 TeV, 5 TeV, and I also put 10 TeV — of course that is not accessible now, but it will become accessible if we have a higher-energy collider one day. Here, for example, is the particle multiplicity; this counts everything, including neutrals — not just the charged particles — assuming a stable pi zero so that we don't double-count through the photons. And we're looking at a jet with a cone of one, so it's broad enough that it contains the top decay products even at only 1 TeV. So you see the difference: as we go up in energy, for the jet coming from a W going to two jets the multiplicity distribution remains exactly the same, and that is because the W is a W — whether it has 1 TeV or 10 TeV, its decay is exactly the same; of course it will be more boosted, more collimated, but the number of particles is not going to change,
because in its own rest frame it doesn't care how fast it's going — this is Galileo's relativity, not even Einstein's. On the contrary, if we look for example at the top, or just at generic inclusive jets, you see how much broader the multiplicity distribution becomes; that's because there are many more gluons being emitted. And as we go to very high energy, you see that while at low energy the shape of the top jet is very different from that of the b jet — because there is intrinsically much more multiplicity, there is 170 GeV of energy that gets released no matter what — as we go to higher and higher energy they become more and more similar. That's because at that point the mass of the top is negligible relative to the 10 TeV: the additional number of tracks coming from the top decay is only a fraction of a huge total — look at the multiplicity here, 200, 150 — so that is a lot.

That was the shape of the multiplicity; in other words, I used jets with a cone of one. Then we want to look at how this multiplicity, how these tracks, are distributed within the jet. So what I do now is count the number of tracks within a cone of radius little r — and, sorry, it's not the fraction, this is the absolute number of tracks contained within that smaller cone. Again you see that in the case of a W, as we go to very high energy, they are pretty much all within a cone of r equal to 0.01 — really a tiny, tiny angle — and that contains pretty much all of the decay products, while for inclusive jets and for tops they are much more spread out. So just by looking at this observable, the distinction between a jet coming from a W decaying to q qbar and an inclusive jet is somehow immediate; one doesn't need to do very sophisticated things.

Now, that was the number of particles; of course it matters how much energy each particle carries, so another interesting observable is the energy shape: the fraction of energy contained within a little cone (a minimal sketch of this kind of observable is given at the end of this passage). Again, look at the W: it saturates pretty much all of the energy of the jet — I used a jet of radius 1, it doesn't matter — all of the energy is within 0.0-something. The details of the shapes one can sit down and look at; they are qualitatively reasonable. And this is the mass distribution. Of course the mass distribution for the W should be exactly the W mass; well, it's not, in this case, because I'm working with a full proton-proton event going to a W plus a jet, so there is initial-state radiation, there are other particles going around, and to the extent that I work with a cone of radius 1 there will be additional particles coming in — that's why it develops a tail — but clearly the peak is exactly at the W mass.

So what is typically done in the experimental analyses is to put all of these ingredients together. Perhaps, case by case, there may not seem to be great differences between a very high-pT top and a very high-pT b jet or inclusive jet, but even slight differences, when we pile them up — when we take a product of likelihoods of different shapes — allow us to build discriminants, which turn out to be useful in reducing backgrounds or isolating signals. This is a very interesting game, and it has become a branch of LHC phenomenology in the last few years.
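As a concrete illustration of the kind of substructure observable just mentioned, here is a minimal sketch of the "fraction of the jet energy within a subcone of radius r" computation. The constituents below are invented, and a real analysis would start from a proper jet algorithm rather than this bare eta-phi cone:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance in the eta-phi plane, with the phi difference wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def energy_fraction_in_subcone(constituents, jet_eta, jet_phi, r):
    """Fraction of the jet energy carried by constituents with delta R < r from the jet axis."""
    total  = sum(e for e, _, _ in constituents)
    inside = sum(e for e, eta, phi in constituents
                 if delta_r(eta, phi, jet_eta, jet_phi) < r)
    return inside / total if total > 0.0 else 0.0

# Invented constituents (energy in GeV, eta, phi) of a narrow, boosted jet along eta = phi = 0:
constituents = [
    (400.0,  0.01,  0.02), (350.0, -0.02,  0.00), (120.0,  0.05, -0.04),
    ( 60.0,  0.15,  0.10), ( 30.0,  0.40, -0.30), ( 15.0,  0.80,  0.60),
]
for r in (0.01, 0.05, 0.1, 0.4, 1.0):
    print(f"r = {r:4.2f}:  energy fraction = {energy_fraction_in_subcone(constituents, 0.0, 0.0, r):.2f}")
```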
Now, I have a few more discussion points here, but since I don't want to go on much longer and I want to leave at least 10–15 minutes for possible discussion at the end, let me come to one topic that I think is interesting. We are talking about high-E_T jets, and there is another interesting thing that happens when we deal with multi-TeV jets: the radiation of W and Z bosons starts to become a non-negligible effect.

Let me start from the end, in fact, and look at this table — I don't know how visible it is. What I have here are cross-sections for the production of several bosons at 13 TeV: this is the W cross-section, the Z cross-section, the WW cross-section, 4 W's, 5 W's, and so on. Now let's look at the probability of emitting an extra W: the ratio of the WW over the W cross-section is about 0.6 per mille, 3 W's over 2 W's is 2 per mille, 4 W's over 3 W's is 5 per mille — so we are really talking about a few per mille. There is an alpha_weak over pi, and adding a massive boson of course costs in terms of how much energy we have to put into the system. So the bottom line is that we lose roughly a factor of 10^-3 every time we add an extra W. You see that this number becomes larger down the table, and this is exactly the same reasoning as in the case of the jets: if we have one W and we want to create two, we have to double the energy required; if we already have three W's and want to go to four, we only have to add 30% extra energy. So the penalty in terms of couplings is more or less the same, but the extra energy costs relatively less, and that's why it becomes more likely. The other interesting thing is that if we have several W's, emitting one more means that the extra W can couple to each one of those W's or Z's, and the coupling of W's among themselves is quite strong; so there is an advantage for W's coupling to W's rather than W's coupling to quarks.

We see that, for example, in this line here: this is the ratio between the cross-section to produce a W and the cross-section to produce a Z, and this number is about 3. If we put in the leptonic branching ratios — say Z goes to e+e−, that's about 3%; W goes to electron plus neutrino, that's about 11% — there is another factor of about 3.5, and 3 times 3.5 is about a factor of 10. So the cross-section for a W that goes to an electron, divided by that for a Z that goes to e+e−, is about 10 (a quick numerical check of this is given after this passage). This number 10 is easy to remember, and it's doubly easy because it is not specific to the LHC at 13 TeV: it has been true over the whole history of hadron colliders. That number was measured as being, say, 10 — 10 plus or minus 1, plus or minus 0.5 — but as an order of magnitude it is a constant: it was 10 when the W and the Z were discovered at the SppbarS, the super proton–antiproton collider at CERN, it was 10 at the Tevatron, and it is 10 at the LHC at 7, 8 and 13 TeV. This ratio of 3 simply reflects the strength of the couplings of the W and the Z to the initial-state quarks: the weak couplings of the Z to u ubar and d dbar are slightly smaller than the coupling of the W to u dbar, and that's what we see here. As we go to having more and more W's and Z's around — for example, if we look at the ratio between WW and WZ, so we add an extra W versus an extra Z — you see that from 3 we drop to 2; if we compare 3 W's to which we add a W with 3 W's to which we add a Z, this ratio is now 1. And it's 1 because at this point, when adding a W or a Z, we are more likely to attach it to the final-state gauge bosons than to the initial state. So what we are seeing, with these ratios going to 1, is that the W and the Z are becoming pretty much the same object.
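Just to check the arithmetic of that famous factor of ten — the numbers below are the rough round values used in the lecture, not precise PDG figures:

```python
# sigma(W)/sigma(Z) ~ 3 at hadron colliders (couplings to the initial-state quarks),
# and the leptonic branching ratios contribute roughly another factor of 3.5.
sigma_ratio_w_over_z = 3.0     # rough value quoted in the lecture
br_w_to_enu = 0.11             # W -> e nu, about 11%
br_z_to_ee  = 0.034            # Z -> e+ e-, about 3%

visible_ratio = sigma_ratio_w_over_z * br_w_to_enu / br_z_to_ee
print(f"W(-> e nu) over Z(-> e+ e-) event ratio ~ {visible_ratio:.0f}")   # ~ 10
```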
Coming back to those multi-boson ratios: in a sense, as we go to higher W multiplicities we are restoring the SU(2) symmetry — we are just talking about the couplings of W's and Z's among themselves, and the quarks are pretty much idle in this process; it's much more likely to add extra bosons to the final state than to the initial state. So this evolution up here reflects the coupling to the quarks, and down here is the, quote unquote, restoration of the SU(2) symmetry among the gauge bosons.

Anyway, we started by saying it's 10^-3. Now let's see what happens when we go to very high-energy jets: final states in which there are very high-energy jets can emit W's with much larger probabilities. In fact, let's look at this figure. This is the ratio of, for example, jet-jet plus W over jet-jet, as a function of the minimum E_T of the jets: in other words, we take final states with two jets, we require the leading jet to have E_T larger than a given threshold — 1 TeV, 2 TeV, 3 TeV, and so on — and in these events, where we have two jets of which at least one is above, say, 3000 GeV, I go and count the number of W's that I find. What I find is a number which, already at 3 TeV, is of the order of 10%, and it goes up to 15% at 5 TeV. So the LHC will be probing jets with transverse energies up to 4 or 5 TeV, and in one out of eight of these events there will be a W. We go from the 10^-3 probability of emitting a W in a generic event to 10^-1 — an increase by a factor of 100 — in events in which there are high-energy jets. And that's because we're already putting so much energy into the system, these 5 or 10 TeV, that adding a little W doesn't cost much in terms of energy, and that develops a very large logarithm which contributes to increasing the rate. Of course these W's will be relatively soft, because for them also to have very high energy would cost a lot; so typically the configurations are two jets back to back, and then a rather wimpy, soft W or Z being emitted.

Of course this is interesting because it can contaminate searches for BSM physics, and it is interesting phenomenology one could use. For example, if for whatever reason above 5 TeV all of the jets were gluons, then there would be no W's produced at all: the rate we just discussed is there because there is the right amount of quark jets. If we look at a resonance — say we have an 8 TeV resonance that decays to 4 TeV jets — we see the jets, but we don't know whether they are gluons or quarks, because it's a new-physics resonance and it could be coupling to quarks or to gluons. If we see that, systematically, in all of the two-jet events coming from the decay of this resonance, no W is being emitted and no Z is being emitted, it might mean that in fact they are gluons. If we do see W's and Z's being emitted, it will happen with a probability of the order of 10–15% for standard-model couplings — maybe more if the couplings are stronger than in the standard model — and one can use the rate of W versus Z to understand more precisely the nature of these particles coming from the decay of the new resonance. All of these things, you see, were completely inconceivable until a few years ago, simply because there was not enough energy — even at 8 TeV there was not enough energy for these phenomena to start showing up — and these are the things that we now have to deal with in the context of very high-energy colliders.
Typically the right way of looking at these phenomena is as opportunities rather than as problems. They are problems in the sense that there are more things to take into account, more things to calculate, more effects to account for if one wants to make precise predictions; but at the same time, the diversity that emerges from these phenomena is an opportunity for further exploration. So, I have a few more things, but I think I'd rather stop here. I thank you very much for your attention. We still have about ten minutes for discussion, so I really hope we will be able to take any final questions before I go. Thank you.