Good afternoon everyone. The title that's been assigned to me is QCD for (future) hadron colliders. I put "future" in brackets because, of course, QCD is QCD: at least during the first part of my lectures I will be dealing with current, today's hadron colliders. That's where we will learn the subject, and towards the end, of course, I will present some results that are specific to the future. QCD stands for quantum chromodynamics, the theory of strong interactions. Contrary to, say, electron-positron collisions, where in principle there is a large class of phenomena that can be completely analyzed and understood in terms of quantum electrodynamics, everything that happens during proton-proton collisions, hadron collisions, requires some understanding of the proton structure and of the interactions between quarks and gluons. These interactions are described by the gauge theory of quantum chromodynamics. It has an SU(3) symmetry group, the charge is called color, and the objects on which the interactions act are the quarks, the matter fields, sitting in the fundamental representation; the force carriers, the gluons, are of course in the adjoint representation of SU(3). The two key features of QCD are confinement, which is related to the linear growth of the potential as two color charges get separated from each other, so there is a constant force at large distance; and asymptotic freedom, which says that when, on the contrary, we bring two color charges closer and closer, the coupling constant decreases. It decreases to the point that we can treat, at least as an approximation, the partons, quarks and gluons, as free particles: we don't have to deal with the hadronic degrees of freedom any longer. Now, in particle physics QCD is a bit what electromagnetism and quantum mechanics are for chemistry. We find it everywhere.
Of course we need it if we want to describe the hadrons, their spectrum, their transitions, their decays. If we want to understand even the electroweak properties of quarks, say the decay of heavy quarks, for example, we do need QCD. We need it to understand the proton structure, and to understand final states created in e+e− collisions: the process starts as purely electromagnetic or electroweak, but the moment we create quarks in the final state, QCD automatically takes over and is needed. So wherever we look in particle physics, QCD enters. Now, this is not a course about QCD; it's a course on QCD in the context of hadron interactions. So there are a few things that I need to take for granted, and I've been assured by the organizers that I can afford to do that. I take for granted that you know what quarks and gluons are, so that you more or less understood everything I said up to now. You know what mesons and baryons are, and asymptotic freedom. You have some familiarity with Feynman diagrams and Feynman rules. I will not be using much of them, but unavoidably some results will arise from them; I will certainly draw Feynman diagrams, and you should know more or less what those diagrams represent in terms of equations. I also assume that you have some basic knowledge of what happens in hadron collisions, of the interest of the physics of hadron collisions, of why we built the LHC: namely, that in hadron collisions the objects we study are, for example, jets, heavy quarks, top quarks; we produce and study W bosons, Z bosons, the Higgs boson. I will not say anything about the Higgs boson because that will be the topic of Professor Peskin's lectures. And then, of course, we use hadron colliders, being the highest-energy accelerators we have in the laboratory, to search for phenomena beyond the Standard Model: supersymmetry, dark matter, etc.
Now, the outline of my lectures is the following. The first two lectures will be mostly introductory. In the first one I will focus on what I call the understanding of the initial-state evolution, which means what happens as the two protons come closer and closer to each other, before the hard process. I will discuss factorization and PDFs, the parton distribution functions, namely the functions that describe the densities, the probabilities to find quarks and gluons in a given configuration inside the proton. And I will discuss a class of observables related to the Drell-Yan process, for which the only thing we need to know is indeed the initial-state evolution and the PDFs. Tomorrow I will focus on what happens in the final state: once the hard process has created particles, say quarks and gluons, in the final state, how does the system evolve towards the mesons and baryons, the stable particles which are experimentally detected? In the third lecture I will start illustrating a few selected phenomenological applications: for example the physics of top quarks, how we measure top quarks in hadron collisions, various results on jet physics, and on electroweak physics in particular. So a review and an interpretation of the data that are coming out of the LHC. And in the last lecture I will slowly move towards higher and higher energies, which is the domain of future hadron colliders. Everything okay so far? Does everybody agree this is a reasonable outline? I am not assuming someone who has never heard of the LHC or has absolutely no idea what hadron collisions are. Anyway, from experience at this school I know that there is a very diverse composition in the audience, so unavoidably some of you will find what I say too elementary, some of you perhaps too difficult; I will try to keep a balance.
And anyway, if you give me feedback during the lecture, at the end of the lecture, or in private conversations, I can try to implement the feedback in the next few lectures; I'm also willing to change the outline completely if there is a desire to do so. Okay, so let me start with the real subject. The factorization "theorem", quote-unquote, is the fundamental starting point for the description of all processes taking place in hadron collisions in which we probe physics at very short distances. The vast majority of phenomena that take place when protons collide with each other in fact cannot be understood in the framework of the factorization theorem, and cannot be understood in the framework of perturbative QCD: for example, we have glancing collisions between protons at large separation, where the protons act as particles themselves. We are not exposing the quark and gluon degrees of freedom, and the total cross-section is simply a reflection of the long-distance dynamics of QCD, which is non-perturbative and therefore not understood in simple terms. So everything I will say focuses on the perturbative elements, and is therefore of specific interest for processes at short distances: the production, for example, of very heavy particles like the top quark, the W and Z bosons, the Higgs, or jets. In this context, factorization is a statement about a generic cross-section. Here X is some observable built out of observable objects: it could be the invariant mass of a pair of particles, it could be the energy of the particles, it could be some angular distribution, for example between particles. The cross-section dσ/dX is given by a convolution of three objects. We start from a description of the initial state: these functions f_j, f_k, one for each of the two protons, describe the density of partons of type j.
This could be a quark of a given type, up quark, down quark, or it could be a gluon, carrying a momentum fraction x, the longitudinal momentum fraction of the proton, at a scale Q. What "at the scale Q" means I will describe in a couple of slides from now. For the time being, the idea is that we are starting our description from objects which give us knowledge about how much momentum an individual parton carries inside the proton. We are working in a frame in which the protons are ultra-relativistic, so that the transverse motion inside the proton is completely negligible compared to the longitudinal momentum. These f's are called the parton distribution functions, and they should be seen as the sum over all possible histories of the initial state that lead, at a given scale, to a parton of type j carrying momentum fraction x. In other words, if I select, say, a quark with momentum fraction 0.1, 10% of the proton momentum, I can obtain that in many different ways, because I still have 90% of the proton momentum to account for, and that 90% can be shared between the other quarks and the gluons in many different ways. I don't care what the rest is doing, who is carrying how much momentum; I'm just interested in knowing the probability, or how many quarks there are, carrying 10% of the proton momentum. So it's a sum over initial-state histories. Having isolated one parton from each of the two protons, I then look at all of the possible hard interactions, interactions based on point-like forces, which could be QCD, for example, but could also be electroweak interactions, or possibly interactions from new physics.
So I look at all possible interactions described by a partonic cross-section, σ̂, a cross-section between quarks and gluons, not between protons, leading again to a partonic final state, which I call X̂. And then I look at all of the possible evolutions of this final state: the evolution from the system X̂, defined in terms of partons, to a system made of hadrons, which reproduces my observable X. And I integrate over all of these possible final-state histories X̂. This object is typically called a fragmentation function: if I want to look at a particle carrying a specific momentum, I can obtain it by generating, in the scattering, a particle with a much larger momentum, which then loses energy by emitting radiation and at the end is left with the required amount of energy. In that sense, it's a fragmentation process. And I sum over all of the possible processes in between that can give rise to this final state X. This is a convolution in the sense that, as you see, these f's don't carry any index referring to the hard process or the final state that I'm interested in. They are intended to be universal functions. I don't say: I take this f if I want to create two jets, and a different one if I want to create a tt̄ pair; I'm going to use exactly the same f. The f is associated with a parton of a given type, and that f will not change. Likewise, the evolution of the final state is independent of where we came from, because it only depends on the states coming out of the hard process: it doesn't matter whether this pair of objects was produced by a pair of gluons or by a quark-antiquark pair. What counts is the evolution of the final states.
And the part that carries the information about the hard process we're interested in is the parton-level cross-section. Now, it is absolutely not obvious that one can factorize a quantum-mechanical process in such a way, because in quantum mechanics, as you know, we cannot say "up to this point something happens, and then something else happens, and then something else": there are always long-range interactions that connect the different time scales of the process. The only things that count in quantum mechanics are an initial state at time t = −∞ and a final state at t = +∞; in between, everything mixes. In terms of Feynman diagrams, what that means is that I can certainly imagine a diagram in which I attach a gluon coming from the initial state and connect it to the final state. The moment I do that, there is no way I can factorize the two, because the gluon introduces a coupling between them. So factorization is a statement about the fact that all of these correlations, tying the evolution of the initial state to the evolution of the final state, can be considered sub-leading; they are not dominant. In fact, the factorization theorem tells us that we can factorize up to so-called power corrections: corrections that scale like a quantity of the order of, say, a GeV squared in the numerator, divided by ŝ, the squared energy of the hard process. So terms which are very much suppressed. I will now spend a few minutes, in fact a good fraction of a lecture, trying to give you, certainly not a mathematical proof of that statement, but an intuitive picture that hopefully will help you see why this factorization statement is correct.
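To fix notation, the factorization statement just described can be written schematically as follows (my own shorthand, not the lecturer's slide: f_j, f_k are the PDFs, σ̂ the partonic cross-section, F the final-state evolution, and Λ a hadronic scale of order a GeV):

```latex
\frac{d\sigma}{dX} \;=\; \sum_{j,k} \int dx_1\, dx_2\;
    f_j(x_1, Q)\, f_k(x_2, Q)\;
    d\hat{\sigma}_{jk}\!\left(x_1, x_2; \hat{X}\right) \otimes
    F\!\left(\hat{X} \to X\right)
  \;+\; \mathcal{O}\!\left(\frac{\Lambda^2}{\hat{s}}\right)
```

The last term is the power correction mentioned above: it vanishes quadratically as the hard scale grows.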
And along the way we will learn a few more things about those PDFs, and hopefully you will get a more physical sense of what happens when two protons collide. So let me start by looking at a single proton, and let's take the naive picture of a proton as an ensemble of three quarks bound together. Let's ask ourselves the following: what is the contribution to the proton structure, to holding these quarks together inside the proton, that comes from the exchange of hard gluons, namely gluons with a virtuality larger than some given large scale? Diagrammatically, this means the following. I have these three quarks. To stay together, of course, they have to exchange something, and what they exchange is gluons; it's really the gluons that hold the system together, like the photons holding the proton and the electron together in the hydrogen atom. A gluon gets exchanged and then given back, because otherwise the system would inherit some transverse momentum from the quark. So the typical situation we're dealing with is a loop diagram in which gluons of a given virtuality, with momentum q, are exchanged. If we're interested in the effect of gluons above a given scale, capital Q, we just calculate this diagram. There is a loop, so we have an integral in d⁴q. There is a denominator which is at least q to the sixth power: two powers from this propagator, two from this one, and one power each from the lines on the sides. It could be more, but there is at least that. And we're integrating in a range where q is large, so the integral is finite and goes like 1/Q². Now, this is true because underlying it there is the assumption of asymptotic freedom: as q becomes large, which means short distances, the theory is weakly coupled. If the theory became strongly coupled at large q, of course, we wouldn't be able to extrapolate this integral up to large momentum.
Now, if we want to interpret this in terms of, say, a probability, the probability that very hard gluons contribute to holding the proton together, we need a dimensionless number, and the only dimensional object we have in this context is lambda QCD, or the mass of the proton. So we get a number of order m_proton² divided by Q². That means that when Q is very large, the contribution coming from hard gluons is actually very small; it's negligible. It's not the hard gluons that are holding the proton together. The processes by which the proton is held together have a virtuality comparable to the proton mass, because that's what makes this number of order one. Now, a gluon with a virtuality of the order of the proton mass has a lifetime of the order of 1/m_p; that's just the uncertainty principle. If we look in the laboratory frame, where the proton has a very large boost like at the LHC, this lifetime gets enhanced by a relativistic factor, so the time scale is of the order of γ/m_p. This is the time scale for the exchange of the gluons that hold the proton together. Now suppose we probe the inside of the proton with a hard probe, meaning an object with a very short lifetime. It could be, for example, a virtual photon: we can create a photon very far off shell, and it will have a very short lifetime because of the uncertainty principle. By knowing how far off shell the photon is, we know the time scale of the interactions it can have with the inside of the proton. So if we have a hard probe with a virtuality Q much bigger than the proton mass hitting the proton, then on the time scale 1/Q, which is the time scale on which things have to be settled, there is no time for the quarks to negotiate a coherent response.
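To put rough numbers on these estimates, here is a small back-of-the-envelope sketch (my own illustrative code, not from the lecture; m_p ≈ 0.938 GeV, ħ ≈ 6.58×10⁻²⁵ GeV·s):

```python
# Order-of-magnitude estimates for the argument above (illustrative only).
M_P = 0.938       # proton mass in GeV
HBAR = 6.582e-25  # hbar in GeV * s

def hard_gluon_probability(q):
    """Suppression factor ~ (m_p / Q)^2 for gluons of virtuality Q (in GeV)."""
    return (M_P / q) ** 2

def lifetime_seconds(q):
    """Lifetime ~ 1/Q of a fluctuation of virtuality Q, via the uncertainty principle."""
    return HBAR / q

for q in (10.0, 100.0, 1000.0):
    print(f"Q = {q:6.0f} GeV: P_hard ~ {hard_gluon_probability(q):.1e}, "
          f"tau ~ {lifetime_seconds(q):.1e} s")
```

At Q = 100 GeV the suppression is already below 10⁻⁴, which is the sense in which the proton's coherent response to a hard probe is negligible.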
The struck quark receives no feedback from the other quarks and acts as a free particle. In other words, we have this probe coming in; it lives for a time 1/Q, which is very short relative to the time it takes for the quarks to communicate, to exchange the gluons that hold the proton together. So the struck quark has to react, but it doesn't have time to send off gluons to the other quarks; it has to respond by itself. It would take too long to negotiate a coherent response of the proton as a whole to the arrival of this very short-lived probe. In fact, such a coherent response may happen, but, going back to the earlier estimate, it will be power suppressed: it's only at the level of m_p²/Q², the proton mass squared divided by Q squared, that we have a probability of the proton reacting as a whole, and when Q becomes very large that probability is very small. So one does indeed have the possibility of elastic scattering, with the proton reacting as a whole to a very high-energy probe, but it is very much suppressed; that's not the dominant way in which the proton reacts. The dominant way is that the single quark struck by the external probe has to take care of its future life by itself, without talking to the rest of the proton. So this is the situation: here is our external probe, and here is the single quark having to react. Now, in the history of any particle there is, of course, a continuous emission and reabsorption of radiation. That's true of the electron, emitting and reabsorbing virtual photons, and it's true of quarks, emitting and reabsorbing virtual gluons.
So if we want to look at the so-called radiative corrections, the higher-order effects in the interaction of our quark with the external probe, we can consider, for instance, the emission of gluons with a virtuality even bigger than that of the external probe. These virtual excitations of the quark live even shorter than the time of interaction with the probe. Now, you see, these fluctuations don't really do much. First of all, since this is a very high q², even higher than that of the external probe, it is calculable in perturbation theory: it may be complicated, but it's not unfeasible, because it's perturbative. Number one. Number two, they don't really affect the probability of finding a quark carrying a given momentum, because before and after the emission and reabsorption of these objects the quark carries exactly the same energy: it emits and reabsorbs, so it's still in the same state. At best this gives, say, an overall renormalization, but it doesn't have great physical consequences from this perspective. If we look instead at the part of the spectrum of emitted and reabsorbed gluons with virtuality smaller than that of the external probe, the situation changes. Such a gluon is very soft: it is emitted, it lives for some time, and then it has to be reabsorbed by the quark. But if during this time, which is relatively long because the gluon is not far off shell, the external probe arrives, then once again the quark has to react in a very short time, and it has no way to communicate to the gluon: hey, look, I'm being hit, I have to go away. That means the gluon will find itself without a quark to get reattached to, so it is lost. Now, this situation is a bit more problematic than the previous one.
On one side because, as you see, the momentum of the quark is now different: it emitted a gluon, gave away some energy, and was supposed to reabsorb it, but in the meantime it was kicked away. So when it got kicked it had a different energy, and the gluon has been liberated; it is now an actual gluon, well, a gluon is never truly physical, but it is no longer virtual, it is free, and it must itself find a way of coming out of this collision. Number one. Number two, we are now talking about momenta which are smaller than some large scale. Here, in principle, we could go all the way down to q = 0, with the emission of gluons of very long lifetime, and then we get into a regime where, at large distances and long time scales, the theory is not perturbative: it becomes strongly interacting, there is confinement, so we cannot calculate the effect of these gluons in perturbative QCD. And this is the key to the understanding of universality and factorization. It is true that we cannot calculate these emissions; nevertheless, precisely because they are associated with long lifetimes, they took place a long time before the hard process itself. Having been emitted a long time before the hard collision, the emission itself cannot depend on the details of what the hard collision is. The gluons were just emitted beforehand; they didn't know that at some point this photon would come in and disappear in a snapshot. So the properties of the emission of these gluons are independent of the hard process, and this is where universality comes from. We may not be able to calculate it, but it is a property of the distribution of quarks inside the proton, namely the distribution of quarks at a small virtuality, at a small scale μ, and it is a property which we can infer from measurements, from experiments, precisely because it is universal.
So it all boils down to the existence of different time scales, and to the fact that when we look at very short time scales we are not sensitive to what happens on very long time scales. It's a bit like what we do in molecular physics with the Born-Oppenheimer approximation: in a molecule we have different time scales, related to the motion of the atoms within the molecule and the motion of the electrons within the atoms. The time scales are completely different, so we can decouple them and deal with the two problems separately. Now, looking at the picture I just gave more critically, one immediately finds that these little f's cannot be absolutely universal, because I said: I have this external probe with a scale Q; above Q something happens, below Q something else happens. So where I put that Q could in principle lead to different things happening. Let me be more explicit. Take this external probe, and let μ be the virtuality of this emitted gluon. If we gave it enough time, the gluon would be reabsorbed by the quark. In particular, if we come in with a probe of virtuality Q smaller than μ, there is time for the gluon to be reabsorbed, so we find the quark carrying exactly the same momentum as before. If we instead come in with a probe of higher virtuality, we intercept the quark before the gluon is reabsorbed, and we find the quark with a momentum fraction x different from the initial one: x = x_initial × y, where y is a number between 0 and 1, so some energy has been lost. So, as you see, there is a dependence on Q: the larger Q is, the more gluons will not have had time to be reabsorbed. So the parton distribution functions must depend on Q: I have to specify at which Q I'm looking at the quarks or the gluons inside the proton to know how many to expect.
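This Q dependence is exactly what the evolution equation, built step by step in what follows, quantifies. For orientation, its standard schematic form is (my own summary; P(y) is the splitting probability introduced below, and conventions for the prefactor vary):

```latex
\frac{\partial f(x, Q^2)}{\partial \log Q^2}
  = \frac{\alpha_s}{2\pi} \int_x^1 \frac{dy}{y}\, P(y)\,
    f\!\left(\frac{x}{y}, Q^2\right)
```

The right-hand side counts quarks at larger momentum fractions that drop down to x by emitting a gluon that is never reabsorbed.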
If I go to higher and higher Q, phenomena like these will happen: a gluon that has been emitted will not get a chance to be reabsorbed. Now, this process of energy being lost and gained can be described almost as a chemical-equilibrium process, because energy and particles are conserved: if we lose energy through a gluon, we can certainly account for it. Therefore we can write an equation which tells us that, given a density of, say, quarks at a given momentum fraction and a given scale, the density at the same momentum fraction at a different scale is obtained by adding up all of the quarks that had a momentum fraction larger than x at the scale μ, so we integrate the initial fraction between x and 1, times the probability that they emit a gluon taking away an amount of energy such that, at the end of the day, and this is the delta function, x equals y times x_initial. So if we're looking at quarks with a given momentum fraction and we go to higher q², we have to include quarks that had a larger momentum fraction and lost some energy; that energy was not recuperated, because the emitted gluon didn't have time to rejoin; and then we have a contribution to f. And then we integrate over all of the q² between μ and Q. So this is just a detailed-balance equation, if you want, as in chemistry. Now, that equation, exactly as it was on the previous slide, without modification, I repeat here to continue the argument. You see this scale μ: what I'm really interested in is f at the scale capital Q, and to build the argument I introduced this intermediate scale μ. But of course, if I had chosen a different scale μ, I would have to end up with exactly the same function f at the scale Q, because again it's a matter of balance: it doesn't matter what scale μ I used to build my argument; what counts is the way in which energy flows in and out. And that statement amounts to saying that if I take
the total derivative of f at the scale Q with respect to μ², it has to be equal to zero: f(Q) does not depend on μ. And you see, μ appears in several places on the right-hand side. What happens is that the partial derivative of f with respect to μ², plus the partial derivative with respect to μ² of the integral, which amounts simply to dropping the integral over q², because the derivative hits the lower limit of integration, gives this relation: the derivative of f with respect to μ² is equal to the convolution of f itself with some function P, which describes the probability of the quark emitting the gluon. So far we didn't have to do any calculation; we just wrote things down as they are. What requires a bit of calculation is extracting from QCD what this P is. One thing we can easily guess from dimensional analysis: the f's are numbers, y is a fraction, so a number, so there has to be a μ² in the denominator; P has to go like 1/q². There has to be an αs in front, αs being the coupling constant squared, because this describes the emission of one gluon. And then there is something else which depends neither on αs nor on q²: that is what we call P(x), usually called the DGLAP kernels, or splitting functions, after the people who introduced them, in different contexts, over several years. So the DGLAP equation is phrased in terms of the logarithmic derivative of a parton density with respect to the scale, as a convolution of the parton density with the splitting function. Now, there are other processes that can contribute. We just described the probability that a quark emits a gluon and proceeds towards the collision, but of course the quark could itself have come from a gluon: the moment I generate gluons inside the
proton, a gluon can split into a qq̄ pair there, and when the photon comes in, it will hit one of those quarks. One can redo the exercise, and that means there will be a contribution to the DGLAP evolution equation of the quark density that is proportional to the gluon density: the gluon density times the probability that the gluon splits and gives rise to a quark. Likewise, I may have an external probe which is not electromagnetic but is itself strongly interacting, say a gluon. The gluon couples to gluons as well as to quarks, and therefore I will have an evolution equation for the gluon density that includes gluons in the initial state, or quarks emitting a gluon. In this way one builds this system of evolution equations, one and two. The explicit forms of these splitting functions, these DGLAP kernels, at leading order, at order αs, appear in these equations here. Deriving them is, of course, what requires a bit of algebra; we're not going to do it, although for quarks and gluons it's a relatively straightforward calculation. I have a few slides on the origin of the logs, but I just leave them there for the more advanced among you to look at; it's a bit technical, and I want to move right away to a small example of how this works. And now we're going to do something absolutely fantastic: we are going, from first principles, to calculate how much charm quark there is inside the proton. Now, the density of quarks inside the proton, as I said before, is a purely non-perturbative object, because we have to go down to very low scales to start extracting this information, and that would require techniques like the lattice. Incidentally, lattice QCD so far has not been able to extract this information on the PDFs: even though we would know how to formally write down the equations and the procedure to extract the up-quark density or the gluon density inside the proton,
the calculation is too complex for anyone to have done it from first principles. On the other hand, we do have quarks in nature which are heavier than the scale at which strong interactions become non-perturbative: already the charm quark has a mass of about 1.3 GeV, above 1 GeV. Of course, if we look at the proton as a static object, there is no way we can find a charm quark inside: there is no charm charge associated with the proton, so if there is a charm quark there must be an anti-charm quark as well, and the sum of the two masses is already almost 3 GeV, so there is no way we can fit this charm inside the proton. But if we start looking at the proton at very short distances, we are going to resolve all of the virtual fluctuations, and therefore, on very short time scales, we will find a charm quark: on short time scales, as you know, the uncertainty principle allows us to probe states in which energy is apparently not conserved, to the extent that the available time is very short. So there is a chance that we can calculate, from first principles, in a perturbative way, the number of charm quarks we see inside the proton when we probe it at a given scale. And the way we do it is by applying exactly this DGLAP equation. You see, in order to get a charm, the charm is not there to start with; it will come from a gluon splitting into a cc̄ pair. We go to short distances, we have a virtual excitation, and if we go to even shorter distances the gluon, among all of the possible things it can do, will split into a cc̄ pair. So we can describe the evolution of the charm-quark density with respect to t, where t is the log of μ², the evolution variable of the DGLAP equation. It's exactly the DGLAP equation we wrote before; now it's the part proportional to the gluon density times the splitting function that describes the probability that
the gluon splits into a quark carrying a given momentum fraction of the gluon itself. Now, to proceed, if we want to do this explicitly, we need to know the gluon density, and the gluon density we can approximate with, say, one over x, because the gluon is radiated as a bremsstrahlung process. It's a bit like in electrodynamics, an electron radiating photons, where the spectrum of photons goes like one over the energy; so it's a reasonable approximation to have the gluon density behave like one over x. As for the splitting function, you read it off a previous slide; it's particularly simple, x squared plus (1 minus x) squared. So you put these two things together, you plug them into the equation, you do the integral (and you can do it analytically, because it's a very simple integral), and the result is that dc by d log mu squared is proportional to A over x, which is the gluon density again, times alpha_s over 6 pi. So this is the differential equation. Now, if we neglect the fact that the gluon density itself depends on q squared, we just have a number here, so we can easily integrate this, and the t of course will give rise to a log. So the charm quark density carrying momentum fraction x at scale q is equal to alpha_s over 6 pi times the gluon density times the log of q squared over the charm mass squared. You see, this is a differential equation, it requires a boundary condition, and the boundary condition is that when q squared is below the charm mass squared there is no charm: the virtuality has to be at least as large as the charm mass in order for us to have this virtual c-cbar excitation. That is the boundary condition, that is what tells us that below a given scale there is no way we can possibly see charm inside the proton; above that scale, so at shorter and shorter distances, it will come up, and it will come up in exactly this way. So the charm is proportional to the gluon times a log times alpha_s over 6 pi. It's a very simple calculation,
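As a quick numerical cross-check of this small-x result, one can integrate the gluon-splitting term of the DGLAP equation directly for a toy 1/x gluon and compare it with alpha_s over 6 pi times g(x). This is only a sketch under stated assumptions: a fixed, illustrative alpha_s and an arbitrary normalization A for the toy gluon, not a real evolution code.

```python
import math

alpha_s = 0.25   # illustrative fixed coupling near the charm threshold (assumption)
A = 1.0          # arbitrary normalization of the toy gluon g(x) = A/x

def P_qg(z):
    # gluon -> quark splitting function, T_R * (z^2 + (1-z)^2) with T_R = 1/2
    return 0.5 * (z * z + (1.0 - z) ** 2)

def gluon(x):
    # bremsstrahlung-like 1/x spectrum
    return A / x

def dc_dt(x, n=20000):
    # dc(x)/d log(mu^2) = alpha_s/(2 pi) * int_x^1 dz/z P_qg(z) g(x/z), midpoint rule
    dz = (1.0 - x) / n
    total = sum(P_qg(z) * gluon(x / z) / z
                for z in (x + (i + 0.5) * dz for i in range(n)))
    return alpha_s / (2.0 * math.pi) * total * dz

x = 1e-3
exact = dc_dt(x)
approx = alpha_s / (6.0 * math.pi) * gluon(x)  # the small-x formula from the lecture
print(exact, approx)
```

At small x the two agree to better than a percent; multiplying by the log of q squared over the charm mass squared, with the boundary condition of zero charm at the threshold, then reproduces the estimate on the slide.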
One can now use this simple formula, just for the sake of understanding how far off it is from reality, and compare it against actual numerical solutions of the DGLAP equations, using input from data and solving the evolution equations exactly, with all of the sophistication that we can afford. And this is what we get. In the case of the charm quark, the solid line is the real PDF, the real charm density coming from the numerical solution of the full system of evolution equations for all of the flavors and the gluon inside the proton, while the diamonds are our expression, alpha_s over 6 pi times g of x times the log. This is done for various scales, 10 GeV, 100 GeV, 1000 GeV, so all scales above the roughly 1.5 GeV threshold, and it's plotted as a function of x, from 10 to the minus 4 up to 10 to the minus 1. Look at how good the approximation is when we are at around 10 GeV. The approximation becomes a bit less good as we go to higher and higher q squared, simply because in the previous derivation we haven't taken into account the fact that the gluon density itself evolves, so there are higher-order terms: I told you, let's assume that the gluon density does not depend on q, so that we can easily solve the differential equation and simply multiply by the log, right?
If the gluon density depends on q, then of course when I integrate I get a function which is more complicated than the log, and that's why, as I go higher and higher in q squared, I start seeing deviations. But it's remarkable that close to threshold, at 10 GeV, the agreement is actually very good. And what I did for the charm I can do the same for the b, and for the b the agreement is even better, because as I go to higher scales it takes longer, somehow, before I start deviating. So it's quite exciting: within a factor of 2, with just a simple back-of-the-envelope calculation, we can learn something about the proton structure that otherwise is quite complicated, okay? Any... I should have said at the very beginning: interrupt at any time there is something you don't understand, say it, and I'll take a break for a second to see if... Yes, a question. Sorry? The question is: when q squared becomes large, alpha_s becomes small, so how come we get more charm, which is proportional to alpha_s, as we go to higher q? You get more charm because as you go to higher q the gluon density itself becomes stronger, because the gluon evolution is driven by the q squared evolution. Think about the picture we have: these are three values of q squared, plotted against x. You understand why there are more and more gluons as we go to higher q, right? If q becomes larger, then a gluon that split doesn't have time to reabsorb the emitted gluons, so we find another gluon floating around; and the higher q is, the more gluons have an opportunity to actually split, so the multiplicity of gluons becomes larger and larger as q becomes larger. And since from each gluon I have a probability of finding a c-cbar pair, the charm multiplicity has to grow as well, right? We will see, I have in fact plots that deal exactly
with this. Any more questions? Yes: why is the agreement best at small x? Because small x is the place where the gluon, for example, is best approximated by 1 over x. When x goes to 1, 1 over x would give gluon equal to 1, but we know that there are no gluons at x equal to 1, so in that region it is really the approximation of the gluon going like 1 over x which is not very accurate. The question was how come it's mostly at small x that there is good agreement, while at large x of course we are off by a factor of 2 perhaps. More questions? Where is the dependence on the proton? Which dependence? It doesn't depend on the proton mass? You're right, it doesn't depend on the proton mass; in fact this could be a neutron, it could be a pion. In this equation, charm equals alpha_s times gluon times the log, there is no knowledge of what the hadron is: it could be a pion, and it would still work. The actual number here, if you read it out, you know, 3.5 or whatever, depends on this being a proton, because I put in a very specific value for the gluon. The gluon in this equation is not the toy gluon 1 over x but the real gluon: I take the actual determination of the gluon inside a proton, from the evolution of the full system and the experimental data on the proton, I extract the gluon, and once I have the gluon I obtain the charm by multiplying the gluon times the log times alpha_s over 6 pi. So the proton is implicit in that, but this relation between the charm and the gluon is valid also for a pion, for a kaon, for anything. It would not be true for a D meson, of course, because in a D meson there is a valence charm as well, right? So that would be different. Any more questions?
Okay, so we continue, and now I have another set of slides which are a bit more technical; again, they're here for those of you who want to look at them, and I will briefly comment on them so you are guided in going through the slides. Here I introduce the concept of the moment of a parton density, which is the integral of the parton density between 0 and 1 weighted by x to the power n, and the reason why we do that is that the DGLAP evolution equation then turns from an integro-differential equation into a simple differential equation for the moments. It's a coupled system of differential equations for the moments, which in various approximations can be solved exactly and can give interesting results. These are just trivial algebraic steps that are outlined here, and there is a definition of what we call the valence, namely the difference between quark and anti-quark densities, and the sea, which is the sum of the density of a quark and the density of its anti-quark. There are valence sum rules: the net flavor of the proton, for a given flavor, is just the integral over all of the quarks minus anti-quarks carrying that flavor, and therefore it is expressed in terms of the first moments of the densities. So the valence sum rule is related to the first moment, and it gives constraints on the splitting functions, in particular on their regularization. There is a momentum sum rule: the second moment of the parton densities represents the amount of momentum carried by a given parton species, and we sum over all of the parton species. Now here, while in the valence sums quarks and anti-quarks cancel each other, because they have opposite flavor charges, in the momentum sum they of course add to each other, because both quarks and anti-quarks carry momentum. So from here we get again interesting constraints, and one can solve analytically for the PDF evolution in moment space, and out of this, for example, one can extract the usual asymptotic behavior of the gluon second moment, which is the fraction of
momentum carried by the gluon: asymptotically, at very very high q squared, it is equal to 4 C_F divided by 4 C_F plus the number of flavors, C_F being four thirds. This is the standard result that at very high q squared the amount of momentum carried by gluons is a constant, and again it's independent of the hadron; it's true for any other hadron. So let me show some examples of PDF evolution. Here in the upper left corner we are looking at valence up quarks; valence means the quarks that are there even at very small q squared, so at q equal to 3 GeV, where we are very close to looking at the proton at a scale comparable with the proton mass. You see that the distribution of up quarks peaks at about x of 0.2, which is reasonable, because we have two up quarks, we have one down quark, and there are gluons: the gluon carries about 50% of the proton momentum, so what's left has to be shared between up quarks and down quarks, and the average x of the up will be in the range of 0.2 to 0.3, as is seen here. As we go to higher q squared, 10 GeV, 100 GeV, 1000 GeV, this is how the distribution evolves, and as you see, what happens is that the peak becomes smaller and we develop a tail at smaller x. The reason is the following: if we take a quark that carries 20% of the proton momentum and we go to very high q squared, it will be emitting a lot of gluons, and the gluons it emits will not be reabsorbed, so they will be lost. As we go to higher q squared, it will be less likely to find the quark carrying a large fraction of the proton momentum, okay? The probability that a quark at very very high virtuality still carries all of its original energy is very small; it's a bit like an electron that undergoes a sudden acceleration: the probability that it retains all of its energy is very small, it will start radiating just because it's being accelerated, and the larger the acceleration, the more it will radiate and the smaller will be the
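That asymptotic second moment is easy to evaluate explicitly. A small sketch (the choice of n_f values is just illustrative; one uses whatever number of flavors is active at the scale considered):

```python
from fractions import Fraction

CF = Fraction(4, 3)  # Casimir of the fundamental representation of SU(3)

def gluon_momentum_fraction(nf):
    # asymptotic fraction of the hadron momentum carried by gluons:
    # 4 C_F / (4 C_F + n_f)
    return 4 * CF / (4 * CF + nf)

for nf in (3, 4, 5, 6):
    f = gluon_momentum_fraction(nf)
    print(nf, f, float(f))
```

For n_f = 5 this gives 16/31, about 52%, consistent with the statement that the gluon carries roughly half of the proton momentum.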
energy it retains. So this is what is described by this curve, where we see that from up here we move more and more strength down to smaller momentum. The other thing, it's marginal in this slide, but I've chosen an evolution, you see, in which the scale grows quadratically: we go from 3 to 10, from 10 to 100, so it's always roughly the square, and you see that the steps here, from the solid to the dashed to the dotted, are pretty much the same. As I grow quadratically in q, the steps in the evolution stay the same, and that's because the evolution is logarithmic. This here is the sea, these are the sea up quarks; in other words, these are the up quarks that come from the splitting of gluons: when a gluon splits into a q-qbar pair, u-ubar, this is what we call sea. So it will be to first approximation symmetric: certainly there will be as many u as there will be ubar, with a very similar distribution. And you see there is no peak here at large x, because these are quarks coming from the splitting of a gluon; the gluon itself was coming from quarks having emitted gluons, so it has a spectrum which is softer, and then it has to split again into q-qbar, so those quarks will carry a small momentum fraction, and they will all be sitting at small momentum. That's why the distribution looks like this, and again, the higher we go in q, the more sea quarks we get. In fact this sea distribution is very close to the gluon distribution; the gluon is exactly the same story: the smaller x is, and the larger q is, the more gluons we will find. Here we put together several of the different flavours at the fixed scale of 1 TeV, a high scale typical of the physics we do at the LHC. The solid line is the gluon, the dot-dashed is the valence up quark. You see at large x it's more likely to find valence up quarks than anything else, they are the most likely objects we have; as we go to small x it's the gluons, by far, which are the most abundant objects inside the proton. And down
here the dashed line is the sea u quark, and it has a distribution which is very similar to the gluon; you see there is a bit over a factor of 10 in rate, again because a sea quark comes from a gluon splitting, and the gluon splitting is a process that goes like alpha_s divided by 6 pi, as we just saw before with the gluon going to c-cbar, so it's a bit smaller than a factor of 10, and indeed we see it here. The dotted line is the charm: as we go to very high q squared, a q squared which is much larger than the mass of the charm quark, it doesn't really matter whether the quark has a mass of 0.3 GeV or 1.3 GeV, because the scales we're dealing with are so large. There is a logarithmic evolution, and the log of 1000 divided by 3 or the log of 1000 divided by 1 are pretty much the same number, and this is reflected in the fact that at very high q squared we find almost as much charm as there is sea u. Any questions? These four plots contain pretty much at least 90% of the features, the subtleties and the phenomenology of parton distribution functions; you look at these and you can find many of the relevant properties, so I want to make sure that you get it, okay? Now you may ask where these curves come from. They come of course from data: the factorization theorem tells us that we calculate the observable by convoluting a partonic cross-section, which is something we can do from first principles, with a PDF. So we take a given process in a given experiment at a given accelerator, we parameterize the PDF in a functional form (it could be a sum of polynomials, a complete set of, say, polynomials, you name it), we multiply by the hard matrix element, which is fixed from first principles, we compare against the data, and we tune and fit the parameters of the PDF parameterization so as to reproduce the data. After that exercise we extract the PDFs at a given scale, and then we use the DGLAP evolution equation to go to any scale of interest. And we come back, of course, to the issue of how well we know
them, because given that they come from the process of comparing a theoretical calculation against data, the data have systematic and statistical uncertainties, and the calculation will have some uncertainty because it's a perturbative calculation up to a given order; so the size and the impact of these uncertainties is something that we will discuss. Yes? The question is whether, going to q even larger than those I included there, we start seeing the heavier quarks, the b, the top perhaps, where relevant for a process. The answer is, in principle, yes, absolutely: for example, 1000 GeV is bigger than the top mass, so if I had pulled out the top from the numerical program that does the evolution I would find something which is different from zero. On the other hand, for the top, while for the charm it's enough to go to 3 or 4 times the charm mass before the PDF approximation is correct, one really has to go to very very high q. This idea that there is an Altarelli-Parisi, a DGLAP evolution with, as a boundary condition, the top quark density equal to zero at threshold, but from there on DGLAP works, is completely wrong, because at the threshold there are mass effects, not controlled by DGLAP, which are much more important than this simple boundary-condition rule. So in practice I really discourage everyone from using top PDFs: even though you get a number different from zero, it's extremely unreliable until you get to energies which are at least 10 to 20 times the top mass, so for the physics we do at colliders today, and even tomorrow, it's not really useful. Any more questions?
Okay, so a few words, in the 10-15 minutes I have left, on Drell-Yan processes. They are the simplest process in hadronic collisions that we can imagine: it's an electroweak process, the annihilation of a quark-antiquark pair into a Z boson or a W boson; historically it was just an off-shell photon, and by now we mostly refer to the Z, but certainly an off-shell photon is good as well. Now, properties and goals of a measurement: why do we consider Drell-Yan? Because it's a very clean final state; there are no hadrons, you see, once you create the Z or the W that goes to leptons in the final state, nothing else will happen, so we only have to understand what happens to the initial state. It's very clean, so experimentally it's very easy, well, less hard, to measure, just because of its cleanness. We can do precision tests of QCD: the cross section for Drell-Yan is the first process in hadronic collisions ever to have been calculated to next-to-next-to-leading order precision, so at the two-loop level, and it was calculated 25 years ago; to give you an idea, the next process to be calculated with this precision had to wait another ten-plus years before becoming known. Once we have W and Z bosons in the final state, we do physics with them, and that's why it's interesting. With the W we can measure the W mass: the most precise measurement of the W mass in the world comes from the creation and study of W's produced in hadronic collisions at the Tevatron. By looking at the asymmetries of the leptons from Z decays we can measure, for example, the weak mixing parameter sine squared theta_W, not yet as precisely, but there is the potential to improve on these measurements. Now, obviously, the production cross section depends on how many quarks and anti-quarks there are in the initial state, on their flavor and their distributions, so this is an excellent probe of the up and down and also of the heavier quark densities. And once we have these final states, you know, typically they manifest themselves with a peak, Z goes to e+ e-
at 90 GeV, and then we have a whole tail of a distribution, and at some point, you know, there could be another peak, and that would signal the existence of some new physics; so it's a very useful BSM probe as well. The kinematics is very simple, because it's a two-to-one process, a two-to-one process with a definite mass (let's forget, for the time being, about the intrinsic width of the Z boson; we don't need to be concerned with it). The initial-state quarks can carry any momentum, but it's two-to-one, so at leading order the Z, or the W, will be produced with zero transverse momentum; of course it will have some longitudinal momentum, okay? In the initial state I have two degrees of freedom, the longitudinal momenta carried by the quark and by the anti-quark; I have the constraint of the mass, so I only have one free parameter out of these, and that free parameter will be the longitudinal momentum of the Z, which will be free to move along the direction of the beam. These are the variables that we use: the rapidity is the variable that describes the longitudinal momentum, it's one half the log of the energy plus the longitudinal momentum divided by E minus p_z. There is an exercise that you can do here, connecting the variables x1 and x2, which are the momentum fractions of the initial-state quarks, to the rapidity and the mass of the final state. Using these, you can put it into the expression for the cross-section, and what we find is that the total cross-section for producing a W in p-p collisions will be given by a number, where this A_ij includes the information on the weak coupling, on the CKM, on the mixing between the quarks: if we want to produce a W it will be u-dbar or it will be c-sbar, so there is a Cabibbo angle, for example. So this describes the hard process; it's just a number, it's a two-to-one process, there is no dynamics in it, it's fixed, there are no angles, there is nothing. And then there is this so-called partonic luminosity,
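The exercise connecting x1 and x2 to the mass and rapidity can be checked numerically: with x1 x2 s = M squared and y = one half log(x1/x2), the momentum fractions are x_{1,2} = (M over root s) times exp(plus or minus y). A minimal round-trip sketch (7 TeV and the W mass are just illustrative numbers):

```python
import math

sqrt_s = 7000.0  # collider energy in GeV (illustrative choice)
M_W = 80.4       # W mass in GeV

def x_fractions(M, y):
    # x_{1,2} = (M / sqrt(s)) * exp(+-y)
    r = M / sqrt_s
    return r * math.exp(y), r * math.exp(-y)

def mass_and_rapidity(x1, x2):
    # invert: M = sqrt(x1 * x2 * s), y = (1/2) * log(x1 / x2)
    return math.sqrt(x1 * x2) * sqrt_s, 0.5 * math.log(x1 / x2)

x1, x2 = x_fractions(M_W, 1.5)   # a W produced at rapidity 1.5
M, y = mass_and_rapidity(x1, x2)
print(x1, x2, M, y)              # x1 ~ 0.05, x2 ~ 0.0026, and back to 80.4 and 1.5
```

Note how a forward W (large rapidity) probes one PDF at relatively large x and the other at small x, which is why the rapidity distribution is such a direct probe of the parton densities.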
which is the convolution of the partonic densities of the two initial states, integrated over all of the possible momenta, with the constraint, of course, that the product of the two momentum fractions gives rise to the mass squared of the boson. This partonic luminosity just gives the flux of q-qbar pairs which are in the right kinematical configuration to produce a W. The equation for the luminosity was given in the previous slide. If we now make an assumption about the behavior of the partonic density, we let it go like one over x; here we can also do the calculation with a more general form, so I put as an exponent one plus delta, with a small delta (this is what we did before with the charm, to use one over x; here I'm just being a bit more general, because the calculation is simple anyway). If we put in this parameterization for the partonic density and we calculate the luminosity, the convolution of the two, what we get is a log plus a power contribution in tau, where tau is the ratio between the W mass squared and the collider energy squared. When we put this back into the expression for the cross-section, we get that the cross-section has a constant and grows like the logarithm of s divided by the W mass squared, with possibly a power correction related to this delta. That says that if we take the W cross-section and we go to higher and higher energies, this cross-section becomes bigger and bigger. Normally, if we are talking about the production of some s-channel object of fixed mass, the cross-section goes like one divided by the mass squared; even out of dimensional analysis, that is the typical behavior you would expect, and here all of the dimensions are in fact in the one over M_W squared that is in front, in this sigma zero. But then there is a logarithmic growth in s that makes cross-sections grow, and cross-sections grow because as we go to higher energy the momentum fraction necessary to produce a fixed-mass
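For the delta = 0 case, f(x) = 1/x, the luminosity integral can be done in closed form, L(tau) = log(1/tau)/tau, and checked numerically. This is a toy sketch with an unnormalized parton density; the value of tau, a W at 7 TeV, is just an illustrative choice:

```python
import math

def f(x):
    # toy parton density, the delta = 0 case of the lecture's 1/x^(1+delta)
    return 1.0 / x

def lum(tau, n=200000):
    # L(tau) = int_tau^1 dx/x f(x) f(tau/x), evaluated with the midpoint rule
    dx = (1.0 - tau) / n
    return sum(f(x) * f(tau / x) / x
               for x in (tau + (i + 0.5) * dx for i in range(n))) * dx

tau = (80.4 / 7000.0) ** 2            # (M_W / sqrt(s))^2 at 7 TeV
numeric = lum(tau)
analytic = math.log(1.0 / tau) / tau  # the closed-form result
print(numeric, analytic)
```

Since tau times L(tau) equals log(1/tau) = log(s/M_W squared), folding this into the cross-section (up to the conventions and constant factors of the slide) is exactly the logarithmic growth with s described above.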
object becomes smaller and smaller, and when x becomes smaller the PDF becomes bigger, therefore the luminosity becomes bigger, and therefore the cross-sections become bigger, okay? A question? Yes: the question is whether there is a unitarity problem, because this is growing arbitrarily large as s goes to infinity. Yes, in this expression there is a unitarity problem. It's related to the fact that the parton density cannot go like one over x to the one plus delta, with delta positive, down to arbitrarily low x: at some point one gets into the regime which is called saturation; the density is so large that gluons and quarks start talking to each other, so instead of creating more, there are so many that they start fusing back together, and therefore the whole Altarelli-Parisi, DGLAP evolution has to change, okay? So indeed at some point this will not be accurate any longer, but we have to go to energies which are at least two or three orders of magnitude higher than what we have today before we get into that regime for W production, right? But good catch, good catch, you're right. So, these are some examples. We were talking about these parton luminosities, which are the objects that multiply the partonic cross sections, so it's crucial to be able to evaluate them in order to evaluate cross sections. I said before that we get the partons, the PDFs, from a comparison with data, and therefore there will be uncertainties. Many people do this exercise; they use, for example, different functional forms to fit the PDFs; they get results which in principle should be compatible, and then one of the big things is to compare the results of different people. This is the situation back, say, at the very beginning of LHC operations: these are luminosities for the q-qbar initial state and for the gluon-gluon initial state, and there are two, say, boxes for each, because, you see, all of these names, GJR, ABKM, HERA, correspond to different groups, different people doing these fits, and there are
different groups here and different groups there. If you look at the gluon-gluon, for example, these are luminosities as a function of tau, and tau maps into a mass; here we are talking about the LHC at 7 TeV: this is 120 GeV, which is around the Higgs mass, this is 240 GeV, and this is 350 GeV, the t-tbar threshold. The different bands correspond to the uncertainty that each group obtains, a central value plus or minus something. We see that if we take, for example, the red band here and the blue band here, their intrinsic uncertainties are comparable; however, they are sitting a sigma and a half from each other, okay? So if one wants to have a fair estimate of the overall worldwide uncertainty, one should just take the envelope of everything, and at a mass of the order of the Higgs mass we see that we have a plus or minus 5 or 6 percent uncertainty. If we take other sets, for example this HERA PDF, ABKM, GJR, look at what happens when we are talking about energies in the range of a TeV: for the gluon there is a huge spread, you see, this group predicts there, this one predicts here, this group predicts there, so there is almost 100 percent uncertainty. Where do these uncertainties, these differences, come from? They come from the fact that different groups, for occasionally, well, typically well-justified reasons, select different data sets as inputs for the fit of the PDFs. In principle one could put in all of the measurements that have ever been done at any hadronic collider, electron-proton collider, or with neutrino beams; but different sets of data have different theoretical systematics, for example because the theory is perhaps less well known; if one looks at fixed-target experiments on heavy nuclei, there are nuclear corrections, and the PDF in the proton is not exactly the same as the PDF in the nucleus. So people have different confidence in which sets they use, and as a result they come up with slightly different results. In terms of the uncertainties on Higgs production, for example, this
is the uncertainty, at next-to-next-to-leading order, coming from PDFs, and you see for the Higgs at 125 GeV there is a plus or minus, perhaps, 7 or 8 percent uncertainty. This is again in 2011. Now, several years later, making use also of the data coming from the LHC, all of the groups doing PDF fits, having incorporated the information that came from the LHC, have new sets, and if you look at this plot, which is the gluon-gluon luminosity as of a couple of months ago, in the region of 100 to 200 GeV, where the Higgs is, you see these three groups, which were quite different, are now fully consistent with each other, and the consistency, for the Higgs cross sections, is at the level of plus or minus 2 percent. So the numbers coincide within plus or minus 2 percent, and at 13 TeV, which is where the LHC is now taking data, we have again plus or minus 2 percent. So we went down from plus or minus 6, 7, 8 percent to plus or minus 2 percent just by making use of all of the information coming from the LHC, and how this information feeds directly into the PDFs is one of the things that I will discuss in the third lecture, where we will review the phenomenology, the data, and the interpretation of the data. For today I stop here, so that we still have some time for discussion.