So I was going a little slowly this morning since it was the first lecture, so I will speed up if you do not mind, because there is a fair bit of material to get through and I have promised the subsequent lecturers that all the background will be covered in these lectures. Basically, what I want to explain in this lecture is this: when the publicity departments at CERN or Fermilab or the Particle Data Group want to create a nice picture of how we trace our history all the way back to the big bang, and you have seen these kinds of pictures, very impressive, how do they actually do it? How do they know all these numbers, and who has given these to them? They are all correct, by the way. I am not sure about the colours, but the numbers are correct. So what we learn today is how to make this poster. The first part I have already done this morning: we ended by saying that the universe really has no large quantum numbers of any kind, only a little smattering of baryons and leptons, no electric charge that we can see, and the chemical potentials are therefore all very, very small. So now we are going to discuss the thermodynamics of an ideal gas, with a little excursion into field theory, not very much, because we want to keep things simple.
For an ideal gas you know that the distribution function is given by Fermi-Dirac or Bose-Einstein, depending on whether it is a fermion or a boson, for a gas in equilibrium. That of course begs the question: what is equilibrium? It will turn out that equilibrium is slightly subtle. There is chemical equilibrium and there is kinetic equilibrium, which are two distinct things, and we have to be careful what we are talking about when we say something is in equilibrium. But for the moment let us say that when you go to the early universe, the density is high enough, and the rate of interactions therefore high enough, that particles can get into equilibrium; we will quantify this later. If you are in equilibrium then of course we can just pick up a thermodynamics textbook, where you can read off that the number density involves the number of degrees of freedom. For the photon this is 2, left-handed and right-handed; there is no longitudinal state because it is massless. For a massive boson like the Z, which is spin one, there are three states, and that is the number of degrees of freedom. Then you integrate the distribution function over phase space, and you get that for massless particles the number density goes as T³; for photons in particular the coefficient is 2ζ(3)/π². How do you know that? Because you can actually work out the integrals I have written out there; that general integral is given in terms of zeta functions. Similarly for the energy density: you integrate the distribution function multiplied by the energy per particle over phase space and you get T⁴, again with some numerical factor and the number of degrees of freedom. And also the pressure, which, as I remarked this morning, also gravitates, so you have to keep track of it: that is the p²/3E factor, the kinetic energy term, times the distribution function, integrated over phase space. So that is for relativistic particles. For non-relativistic particles the distribution function goes from T³ to (mT)^(3/2) times e^(−m/T); it is damped by the so-called Boltzmann factor, and the number of particles in equilibrium is a very steeply dropping function of the temperature. It no longer matters whether you are a boson or a fermion. Now this question of equilibrium needs to be quantified. Kinetic equilibrium means the temperature of a particle is the same as the temperature of the photons. To keep life simple, let us say that the photons are always in equilibrium; in my next lecture I will quantify how we actually determine when the photons are in equilibrium and how a chemical potential can actually develop for photons, which goes back to a question that was asked this morning. But we start by saying the photons have a Planck distribution, as observed, with no chemical potential; and anything that annihilates into photons must then itself have a negligible chemical potential, because chemical potentials are additively conserved in the reactions. The basic criterion for equilibrium is that the scattering rate of the particle should exceed the Hubble rate. Why is that? Because the Hubble rate sets the time scale for any process: within a Hubble time, if you like, the universe doubles its size, and the inverse of the Hubble rate is like the size of the universe, so the mean free path should be shorter than the size of the universe. Think of it as you prefer, but the basic criterion is that the scattering rate should be greater than the Hubble rate. The Hubble rate goes as the square root of 8πGρ/3; in the early universe the curvature term is unimportant and Λ is unimportant, so we just look at this, and ρ goes as T⁴ for a radiation-dominated plasma. So the Hubble rate goes as T² times the square root of 8πG, which we often write as 1/M_P, the Planck mass; the Planck mass is 10^19 GeV.
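As an aside, the phase-space integrals quoted above are easy to check numerically. Here is a minimal Python sketch (not from the lecture; the trapezoidal `quad` helper and the constants, Boltzmann's constant in eV/K and hbar*c in eV·cm, are standard values I am supplying for illustration). It verifies the 2ζ(3) and π⁴/15 factors, the 3/4 and 7/8 fermion-to-boson ratios, and, as a sanity check, the familiar figure of about 411 CMB photons per cm³ at T = 2.725 K:

```python
# Numerical check of the relativistic equilibrium phase-space integrals.
# Stdlib only: the integrals are done with a simple trapezoidal rule on a
# cut-off grid (the integrands die off as e^-x, so x_max = 50 is plenty).
import math

def quad(f, a, b, n=100000):
    """Trapezoidal rule for a smooth integrand on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

# Massless case, with x = p/T; the -/+ sign selects Bose/Fermi statistics.
bose_n  = quad(lambda x: x**2 / (math.exp(x) - 1), 1e-8, 50)  # = 2*zeta(3)
fermi_n = quad(lambda x: x**2 / (math.exp(x) + 1), 1e-8, 50)  # = (3/4)*2*zeta(3)
bose_e  = quad(lambda x: x**3 / (math.exp(x) - 1), 1e-8, 50)  # = pi^4/15
fermi_e = quad(lambda x: x**3 / (math.exp(x) + 1), 1e-8, 50)  # = (7/8)*pi^4/15

print(bose_n / 2)        # zeta(3), approximately 1.2021
print(fermi_n / bose_n)  # 3/4: fermion/boson number density ratio
print(fermi_e / bose_e)  # 7/8: fermion/boson energy density ratio

# Photon number density n_gamma = (2*zeta(3)/pi^2) T^3 in natural units.
# At T = 2.725 K this gives roughly 411 photons per cm^3.
T_eV = 2.725 * 8.617e-5   # k_B * T in eV
hbarc_eVcm = 1.973e-5     # hbar * c in eV*cm
n_gamma = (bose_n / math.pi**2) * (T_eV / hbarc_eVcm)**3
print(n_gamma)
```

The same `quad` helper applied to x²·sqrt(x²+(m/T)²) exponentially suppressed integrands would reproduce the non-relativistic Boltzmann tail as well; only the massless limits are checked here.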
Right, so basically the Hubble rate goes as T²/M_P, times the square root of the number of degrees of freedom contributing to ρ; we will come back to that. Whereas the scattering rate goes as the number density times the cross section averaged over the velocity. That is the rule of thumb; in detail of course you have to solve a transport equation, but the idea is to get a feel for what kind of time scales and numbers are involved before we start handling the Boltzmann equation, which is in general quite complicated. So decoupling happens when these two rates are equal, and I will give you an example shortly to make it clear that this is all very straightforward. If the particle is relativistic at that point, that is, if its mass is less than this decoupling temperature, then it would also have been in chemical equilibrium, because it would have been annihilating into particles which have no chemical potential, and its abundance will just be given by the equilibrium value that we defined on the last slide; we can write it in terms of the number density of photons times some degrees of freedom times a statistical factor according to whether it is a boson or a fermion. Yes, the same issue will come up in any process in the early universe: you always have to compare the rate of whatever physics you are interested in, whether baryogenesis or neutrino decoupling or anything, to the Hubble rate. The Hubble rate sets the clock. If you cannot do what you have to do within a Hubble time, you are out of luck; you have to get on. As you go into the early universe the Hubble rate is increasing, so you have less and less time to do your business. This is why there is always a cutoff to any process, set by the Hubble rate. In fact an example I will give you towards the end is a calculation of how hot the universe could ever have been, and there will be a surprise there for you. So these decoupled particles now do not scatter any more, because their rate is smaller than the Hubble rate; their mean free path is longer than the size of the universe, or the time between scatterings is longer than the age of the universe, think of it as you prefer. But the number in a comoving volume is therefore conserved, and their distribution function is invariant under the expansion of the scale factor: because they are still relativistic, their phase-space distribution keeps the equilibrium form. This is true only as long as the particles remain relativistic. So if you are thinking ahead about something like neutrinos, which we know have a little mass: they were relativistic at the time when they decoupled, but they are no longer necessarily relativistic now. The answer is that the distribution function of the neutrinos becomes horrendously complicated; it is not Fermi-Dirac any more, and a lot of people do not realise this. There is a nice paper by Bernstein and Feinberg from 1982 or so which discusses this in detail. Now, subsequently this temperature T_i will continue to follow the photon temperature; but if something happens that changes the photon temperature and the temperature of all the coupled particles, it will not affect the temperature of the decoupled particles. In other words, the decoupled particles are inhabiting a separate universe: it is evolving along with ours, but it has no direct contact with ours, so think of it as a kind of parallel universe; whatever happens in our universe, the temperature in that universe will not change accordingly. This means, for example, that if electrons and positrons annihilate, it will heat up the photons but it will not heat up the decoupled particles, for example the neutrinos, so their temperature will drop below the photon temperature and correspondingly the number density of those particles relative to photons will decrease. So these are just words; I will
give you an example later, but keep in mind always that we should focus on what we can actually measure. I can talk about the temperature of some fiducial decoupled species, but I cannot actually measure it; I can only measure the photon temperature, so everything has to be done with reference to photons. In fact photons turn out to be very convenient: they dominate the radiation content of the universe, their temperature is easy to measure, it is a scalar, and it turns out to be a very good black body, so everything is very neat; otherwise you would have been in trouble. Now how do you actually calculate this? I am going to go back to a paper from 1953. This paper is really one of the best papers ever written in cosmology; it is not very well known, but I would strongly recommend looking it up if you want to see how the whole subject was founded. What they said is: let us break up the pressure into the pressure of interacting species and of decoupled species, and similarly for the energy density. Now the energy conservation equation tells you how, as the universe expands and cools down, work is done on the system; this is the equivalent of the energy conservation equation that I wrote down earlier today. I can then rewrite this for the decoupled and interacting species separately, using the fact that the number of decoupled particles in a comoving volume is conserved, so the rate of change of that quantity with temperature is zero. Therefore I have d log a / d log T, in other words (T/a) da/dT, going as minus one third of dρ_I/dT times T/(ρ_I + p_I); this follows from the equation here. If I do that, then I can suggestively combine it with the second law of thermodynamics to write down an adiabat. An adiabat is a connection between the scale factor and the temperature. The scale factor is not observable, never; we always have to have some proxy for it. In the morning we talked about the scale factor being a proxy for redshift: redshift is observable, you measure a spectral line shift and that tells you how much the scale factor has changed, not the absolute scale factor but only its ratio to something. Similarly, temperature is an observable, and what we therefore want to do is relate the scale factor to the temperature. So if this quantity is constant, then what we find is that aT is constant, and that is called an adiabatic invariant: in other words, if the energy density plus the pressure, divided by T⁴, is constant, then I have a relation between the scale factor and the temperature. So we have an adiabat, and that adiabat allows us to use the temperature as a proxy for the scale factor. Now we can try to relate epochs where the number of interacting species is different, in other words where that quantity I assumed to be conserved may not be conserved. How do I do that? I have to construct something that remains invariant in a comoving volume; if I can do that, then I can relate the temperature of a species which decoupled at a certain epoch to the photon temperature at a different epoch. That something is the specific entropy, and the total specific entropy refers to the sum over all the individual species; for each individual species it is the same integral that you saw earlier, except that now I am writing it for pressure plus energy density, and I integrate over phase space as I did before. So basically I can parametrize all of these in terms of the corresponding quantity for photons. For photons I know the specific entropy, the energy density and the number density; they go as T³, T⁴ and T³ respectively, and the corresponding
values for any other relativistic species are just given in terms of the photon ones times some number of degrees of freedom, with the same integrals involved depending on whether it is fermions or bosons. These are just details, do not worry about them, it is all in the notes; this is just to give you the idea that we know what we are doing, we are keeping track of everything as carefully as we can. So the number of interacting degrees of freedom is just the sum over the individual interacting species, in terms of the total entropy, and similarly I can define the total number of degrees of freedom which contributes to the energy density. The point being made here is that there are species which may have decoupled, which do not contribute to the number of interacting degrees of freedom, but which do contribute to the total energy density. For example the neutrinos we mentioned earlier: they may have decoupled, they no longer interact with the rest of the universe, but they are still around, still gravitating, still contributing to the energy density. So we have to keep track of two kinds of degrees of freedom: those that contribute to the entropy density and those that contribute to the total energy density. Entropy here is like a shorthand for saying they interact. So now we can do this calculation of what happens to the temperature of some particle i which decouples at some temperature T_d when later something happens to the photons. For temperatures below the decoupling temperature, the entropy in the decoupled particles and the entropy in the photons plus other interacting particles are separately conserved: the decoupled particles are carried along by the expansion of the universe, not interacting any more, but they still have entropy content within themselves, and that is given by the two expressions here, where I am just keeping track of the numbers of decoupled and interacting particles. Since the temperatures of the two sectors were the same at the point where they decoupled, I can simply work out, by solving these equations, how the temperature of the decoupled species changes with respect to that of the photons at subsequent times: it is simply given by the ratio of the number of interacting degrees of freedom at the time of decoupling to the number of degrees of freedom at the later temperature. So if I keep track of the number of degrees of freedom, I can do my bookkeeping. All this is discussed in a nice paper by Paolo Gondolo and Graciela Gelmini from the 1990s. Now the number of degrees of freedom specifying the total conserved entropy has to keep track of that reduction of the temperature of the decoupled species with respect to the temperature of the interacting species, which is the same as the temperature of the photons, because by definition they are in equilibrium with the photons. I am sorry I am going through all this rather quickly, but it is very simple algebra really; there are just too many symbols and subscripts because I am trying to distinguish between the different species, interacting and decoupled. The bottom line is that we have done what we needed to do, or rather Alpher, Follin and Herman did it: they constructed a quantity which is conserved in a comoving volume regardless of whether the number of species is changing or not. When the temperature drops below some mass threshold, those particles annihilate, and their entropy content is transferred into radiation and into the other interacting particles, but not into the non-interacting particles, so you have to keep track of that. In the standard model this is only relevant for neutrinos, but if you are trying to construct cosmologies for new theories of physics beyond the standard model, there often are non-interacting
particles, and you have to track their thermal history carefully. So the ratio of the decoupled particle density to the photon density is just related to its value at decoupling through the ratio of the numbers of degrees of freedom: basically the number of photons in a comoving volume jumps by a certain factor, and the number of decoupled particles relative to the number of photons is reduced by exactly the same factor. So bear with me for two more equations and I will give you an example to make all this clear. The total energy density I can similarly parametrize in terms of the energy densities of bosons and fermions, but now I have to take into account that their temperatures may not be the photon temperature; they may be reduced from the photon temperature. This factor allows me to keep track of the fact that there may be decoupled species around which are a little colder than the photons, and whose energy density is correspondingly slightly smaller. So the bottom line is that I now have a variant of the adiabat I gave you before. The adiabat aT = constant works as long as the number of degrees of freedom stays the same: nothing is changing, all particles are in equilibrium. What if the number of interacting species is changing? Then that adiabat is altered by this little factor here: d log a goes as minus d log T minus one third of d log g_s. We are doing all this because we want to be able to tell the time. All these thermodynamic statements are in terms of temperature, not time, so I have to relate the temperature to the time, because the Hubble parameter is a time derivative. Time is not an observable, at least not in practice; I can always cook up some description in terms of comoving observers carrying clocks, but in practice I cannot go and ask somebody what the time is. I can, however, measure a temperature, so I have to relate the time to the temperature, because the energy density is defined in terms of a temperature. So if I integrate that equation, I have to convert time to temperature, using the expression I just gave you which relates the scale factor to the temperature and the number of interacting degrees of freedom. I have written the √(8πG) in terms of 1/M_P, the Planck mass, because it is just easier to calculate in energy units. The time is then the integral of that expression, where g_ρ is the total number of degrees of freedom contributing to the total energy density; and there is this extra factor, which follows from having to translate between the scale factor and the temperature, and which keeps track of whether the number of interacting degrees of freedom has changed. This is an exact expression, but what we do is look at it in the regime where g is more or less constant and that derivative term is zero; then I can integrate it very simply, and I get that the time goes as one over the square of the temperature. Numerically, the time is about 1 second when the temperature is 1 MeV, with a factor 2.4 divided by the square root of g_ρ, which is roughly one. So this is very easy now. If you ask me what the temperature-time relationship was in the early universe, I will tell you: 1 second at 1 MeV. If I go to a GeV, then GeV over MeV is 10³, squared is 10⁶, so the time was 10⁻⁶ seconds. If I go to 100 GeV, then 100 GeV over 1 MeV is 10⁵, inverse squared gives 10⁻¹⁰ seconds. So now we know how those people in the drawing office at CERN and Fermilab make those charts; that is how they tell the time. The quark-hadron phase transition occurred about a microsecond after the big bang; the electroweak phase transition occurred at about 10⁻¹⁰
seconds, actually more like 10⁻¹¹, because you have to keep track of those factors. So we can basically work out all these things, and in particular if you tell me that there is some new physics, supersymmetry, extra dimensions, whatever, I can work out the thermal history for that as well. So here is the classic example of all this in the context of the standard model. The only particles that decouple while remaining relativistic are the neutrinos. The cross section for neutrino scattering goes as G_F² E²; they are relativistic, so I can trade E for T, and the number density goes as T³, so the total rate goes as T³ times G_F² T², that is, G_F² T⁵. This has to be compared with the expansion rate, which as I told you goes as T² over the Planck scale, H ~ T²/M_P. If I equate these two, then the T⁵ against the T² leaves a T³, so the decoupling temperature is T ~ (G_F² M_P)^(−1/3). That is some combination of the weak-scale coupling and the gravitational coupling which, coincidentally, gives an energy scale characteristic of nuclei, around an MeV; this will turn out to be very interesting when you study nucleosynthesis. At this time the number of neutrinos per flavour is just three quarters of the number of photons, because the temperatures are the same, they have still not decoupled, and the number of neutrino degrees of freedom is 2: there are left-handed neutrinos and right-handed antineutrinos, but no right-handed neutrinos or left-handed antineutrinos, because of the V−A structure of the weak interactions. Of course, if neutrinos did have a Dirac mass, the right-handed states, while not interacting through the usual W, could in principle be populated, if for example neutrinos had a magnetic moment; you will find papers in the literature about how you can populate those states, or just from scattering: whenever particles with mass scatter, there is a probability of order m_ν²/T² of populating the wrong helicity. So there are many studies of massive neutrinos which discuss those issues, but for us it is very simple: we have just two degrees of freedom per flavour. Then, as the temperature drops below the electron mass, about half an MeV, so just a little below the 1 MeV decoupling, the electrons and positrons annihilate, almost totally, as we will see later: they have electromagnetic couplings, and that is enough for them to basically wipe each other out, with almost nothing left. When they do that, they dump all their energy content into the photons, but the neutrinos do not know about it, because they have already decoupled. So the number of interacting degrees of freedom changes from photons plus e⁺ and e⁻, a total of 11/2 taking into account the 7/8 factor for fermions, down to just 2, just photons. The ratio of these is 4/11, and that is the cube of the ratio of the neutrino temperature to the photon temperature below the annihilation; it is therefore also the factor by which the ratio of the neutrino number density to the photon number density is reduced relative to its value above. In fact the net ratio n_ν/n_γ is 3/11, because of the extra 3/4 factor for fermions versus bosons. And so you see I can calculate the number of degrees of freedom characterizing the entropy density below the mass of the electron: I still have photons, but the neutrinos now have a lower temperature, T_ν/T_γ = (4/11)^(1/3), so cubed that gives me the factor 4/11, and I get some number like that; similarly for the energy density. So you can see that with some confidence I can work out the thermal history of the universe based on the particles that I
know about. So all I have to do, now that I have the prescription, is ask you what the matter content of the universe is: you tell me, according to your favourite Lagrangian, and I will tell you the thermal history. Basically I just count all the degrees of freedom, and then I keep track of when each species annihilates and dumps its energy into photons; and of course, since I know that field theory is not an ideal gas, I have to keep track of phase transitions. Before much was known about the phase transitions, this is the kind of diagram people used to draw, showing the number of degrees of freedom as a function of the temperature. At high temperatures the numbers of degrees of freedom characterizing the energy density and the entropy density are the same; in the standard model they only deviate from each other below the electron mass, for the reason I have given you. There is a big jump at the quark-hadron phase transition, because suddenly you have liberated all the quarks, and there are three colours, so there are a lot of quarks and gluons around which at lower temperature are all confined into bound states. That is the biggest jump that happens. Then you have smaller features, muon annihilation and things like that, and at higher temperatures the annihilation of b quarks and top quarks, the electroweak phase transition, and so forth. We did not really know how this transition occurs, so that curve was from some MIT bag model or something, and this is the kind of favoured value now; of course you now have precise values from the lattice. So this is a table of how the number of degrees of freedom changes with the temperature, with all the various mass thresholds shown. What is highlighted here is that there is a big jump in the number of degrees of freedom at about 150 MeV, because of the confinement of the free quarks and gluons at that point. And if you go up further, well, nothing much happens at the electroweak phase transition; in fact nothing happens at the electroweak phase transition, because, as you know, the longitudinal modes of the Ws and the Z are precisely what comes from the Higgs, the Higgs mechanism, so the number of degrees of freedom does not change: the Ws and Z were massless in the earlier universe, they become massive, and they just take on those extra degrees of freedom. So actually the weak interactions in principle become long range in the early universe, when the electroweak symmetry is restored. They really would be long range were it not for the fact that you are in a thermal plasma, so there is a screening length, but there is no intrinsic limit to the range of the W and Z. Start thinking about what that would mean, what would happen in the early universe if the weak interactions were long range; these are all things worth thinking about. The quark-gluon plasma just becomes an ideal gas, and a very dilute gas too, so there is not much going on there. In fact even the electroweak phase transition, according to our lattice friends, is actually very, very dull. If the Higgs had been very, very light, you would have a strongly first-order phase transition; however, for any Higgs mass more than about 50 GeV it is very boring, there is no phase transition at all, it is what they call a crossover, and effectively nothing happens: no entropy release, no change in the number of degrees of freedom, nothing happens at the electroweak phase transition. Of course the precise effect of electroweak symmetry breaking might still have cosmological consequences. For example, if you are thinking of the freeze-out of a heavy particle, which we will discuss later, and that particle gets some of its mass from the Higgs mechanism, then it may not have acquired its entire mass by the time freeze-out happens. These are things one needs to think about: the zero-temperature mass of a particle
today is not the same as its mass in the early universe; it depends on where it gets its mass from. Is it getting it from supersymmetry breaking? Is it getting it from the Higgs? One has to keep track of all these things. Yes? The question is what kind of observable allows us to track this history. Just the counting argument I gave you earlier: I start from what I know, I have photons and three species of neutrinos. Is it just theoretical? Yes, in the sense that it counts the entropy; but we know that photons exist, we know that they have a left-handed polarized state and a right-handed polarized state, so it is experimental, I am relying on the last 200 years of experiment, I just did not do it specially for this. And similarly for neutrinos. If you bring in some new physics, then of course you will have to include it in this table. These are all things we know simply from the fact that these are well-known fermions and bosons with known properties; it is as reliable as our understanding of basic Fermi and Bose statistics, there is no ambiguity there. Now here is something interesting which I hope will surprise you; it does surprise many of my colleagues. You talk about a hot early universe, so you could ask: what is the highest temperature that the universe could ever have had? By temperature I mean, obviously, a state of thermal equilibrium. So let us do the exercise. We always equate a scattering rate to an expansion rate. We know the expansion rate; we have already discussed that H goes as T² over the Planck scale. How does the scattering rate go? Well, at sufficiently high temperatures, when everything is relativistic, it will basically go as some coupling squared, and the only dimensional way to construct a cross section is to say it is α²/T²; that is the only dimensionful quantity available. The number density goes as T³ and the cross section as α²/T², so the rate goes as α² times T. And this should immediately set the warning bells ringing, because the Hubble parameter is going as T² while this is going only as T: clearly the scattering rate will start falling behind the Hubble rate at some point. In fact, when you do the exercise precisely, you see that there is a crossing point, and that crossing-point temperature is about 10⁻⁴ of the Planck scale, if I take some unified value for the coupling and a number of degrees of freedom of about 200 (I am sorry, I am using g* rather than g_ρ here, as this is more conventionally used). If I take this seriously, then what it means is that the universe never even got as hot as the GUT scale: it does not matter what you do, the universe was expanding so fast that, if it was in a state of thermal equilibrium, it would have cooled down; it could not have got into equilibrium at the GUT scale. This is interesting, because in the old days people used to talk about baryogenesis at the GUT scale, with heavy GUT-scale bosons being in equilibrium, then falling out of equilibrium, decaying, doing all kinds of things. That could never have happened. And the answer, when you look back at the literature, is that this estimate was made, but people did not put in all the factors properly, so they got a number like 10¹⁵ GeV rather than 10¹⁴; and at that time the GUT scale was 10¹⁵ GeV, this being the 1980s, pre-SUSY, so basically it looked like it was all okay. But actually it is not okay: there is now clear water between the maximum temperature you can heat up to and the GUT scale. In fact these gentlemen have worked out the precise value, taking into account that the coupling is also temperature dependent, and if you do that you get 3×10¹⁴ GeV as the highest temperature to which the universe could ever have been heated. So this has implications for other things too, for example for
For example, topological defects. There is the so-called Kibble mechanism for the generation of topological defects (in fact, in this very auditorium I heard Tom Kibble talk about it in the 80s), which is the idea that as the universe cools down below some scale where you have a big group breaking down to, say, a U(1), then if the relevant homotopy group (π₂ for monopoles, π₁ for strings) is non-trivial, you would generate these topological defects, roughly one per horizon, and so on. None of that is going to happen. You could still make monopoles by other means, but not by the Kibble mechanism. So this is just to illustrate that even in very well covered, well travelled territory there are sometimes surprises; you should do little calculations yourself to see if it all works.

Okay, so at this point I have set the stage for discussing big bang nucleosynthesis, which is happening over there at about a second after the big bang, made famous by Weinberg's book "The First Three Minutes"; it actually goes on for about half an hour. Big bang nucleosynthesis seeks to address the question: where do all the elements in the universe come from? This is the distribution of all the elements in the universe as a function of atomic number, and you have no doubt seen it in nuclear physics courses, where people tell you that, say, there is the iron peak, because iron is the most stable nucleus there, and helium is the most stable nucleus here; there are some elements like lithium, beryllium and boron which are very, very low in abundance, because these are not very stable nuclei. So this is like an inverted plot of the binding energy of the nuclei. Now, the understanding we have today is that all these heavy elements can be made in stars: in the last seconds of a stellar explosion, in a type II supernova, the huge flux of neutrons will undergo rapid neutron capture along the valley of stability and below it and create all these elements. And this was written down in a
famous paper by the two Burbidges, Fowler and Hoyle back in 1957. But what they realised was that they could not make helium by this process. All those stars make helium, the Sun is making helium, but helium is about 10 percent by number, 25 percent by mass, and every time you make helium you release gamma rays of a few MeV energy. So if you took the present-day universe and tried to convert 25 percent of it to helium by mass, you would generate a gamma-ray background which would be enough to wipe us all out. So you cannot make helium in stars; the only way you can do it is to push that process sufficiently far back into the past that the gamma rays are redshifted, and that is exactly what we see today as the microwave background radiation. So helium and hydrogen are the residuals of the big bang, with a little trace of lithium-7, but we don't see anything of the other heavy elements there. Now, this is often credited to George Gamow, and since many of you are graduate students I thought I would put in a little anecdote about supervisors and graduate students. Gamow did in fact write this famous paper in Nature, this is from 1948, by Dr.
George Gamow, and he goes on about how he can work out the synthesis of the elements in the first moments of the universe. He also quoted in there a value for the temperature of the relic radiation, which was actually wrong: it was six degrees Kelvin, higher than the upper bound that had by then been placed by an engineer called Ohm at the Bell Laboratories, where the radiation was later discovered; Ohm had put an upper bound of 2 ± 1 degrees. And Zel'dovich and his students saw that and decided that the universe could not have started from a hot big bang, because somebody had put an upper bound below the number estimated by Gamow. But the truth is that Gamow was a brilliant guy, he had great ideas, but he couldn't calculate too well; he left that to his graduate students. In particular this chap Alpher, he was the guy who in fact did most of the calculations, you saw him earlier, and he also worked with Herman, who must be the more distinguished-looking chap here. Gamow talked about an element called "ylem", his own made-up word for the primordial matter from which everything was created. What is interesting is that they published a paper in 1948 in which they spelled all this out, but there was a prior paper which Gamow had published just a little earlier, with the author list Alpher, Bethe and Gamow, and the funny thing is that Bethe had nothing to do with it whatsoever; he was roped in just to get a fun author list for the paper. And they left out poor Herman because he refused to change his name to Delta. So the real credit, I think, for the hot big bang and primordial nucleosynthesis should go to Ralph Alpher, who was at the time a graduate student, and this was in fact the paper that I already quoted to you earlier, from 1953, which told us how to calculate the temperature of the decoupled species in terms of the still-interacting species. This is a very good paper; some papers are worth reading
in cosmology, unfortunately not all of them. Their realisation followed a very crucial point that Hayashi, a Japanese physicist, had made: that neutrons and protons could be in chemical equilibrium in the early universe. That is what we are now going to discuss, not kinetic but chemical equilibrium, with an example. And just as an aside, Alpher was finally awarded a medal, although for doing something as impressive as that, "for the prediction that the expansion leaves behind background radiation, which he was the first to calculate, and for providing the model for the big bang theory", you would have thought he deserved, well, I'm sure this is an impressive medal, but I would have given him something bigger than that. At least he was rewarded before he died.

Now, weak interactions. This, as I said, goes back to the work of Hayashi, Alpher, Follin and Herman, and then many distinguished cosmologists, Jim Peebles at Princeton, Bob Wagoner at Stanford, William Fowler, and Fred Hoyle who was at Cambridge, put this together. This is very important: it is nuclear physics, it is very old, but the foundation of modern cosmology rests on it, and even today this is the most reliable part of cosmology that we can talk about. So the dramatis personae, as I have told you before, are the photons, the neutrinos, and electrons and positrons; at around an MeV, neutrons and protons are going to be in weak equilibrium because they can change into each other through the weak interactions. So the ratio of the number of neutrons to the number of protons is going to be given by just a statistical factor, chemical equilibrium, because they are undergoing what is effectively a chemical reaction, and of course the reverse processes are proceeding at exactly the same rate; that is the nature of detailed balance. Now, the one thing that we have inherited from the early universe at this point is a ratio of baryons to photons at some tiny number, about 10⁻¹⁰. Why that is so, we don't know; the point is that all the baryons and antibaryons have already annihilated each other and turned into radiation, but one baryon has been left over for every 10⁹ pairs. Why that is so, Wilfried Buchmüller will tell you about tomorrow; I am just going to take it as an initial condition. So what is the weak rate? The weak rate is precisely what we computed earlier for the freeze-out of neutrinos: the cross section goes as G_F²E², which is G_F²T², times T³ for the number density, so the rate goes as G_F²T⁵, while the expansion rate goes as T² over the Planck scale. If I equate those two, I get that the freeze-out temperature goes as (G_N^(1/2)/G_F²)^(1/3), just as I got for neutrinos, and that is about an MeV. That is the temperature, one MeV; the time is one second, we have calculated that already. So from a state of weak equilibrium we have got to the point where the neutron-to-proton ratio is freezing out. What Hayashi had pointed out was something very important: once you reach a state of thermal equilibrium you do not have an arrow of time, you erase the memory of the past. So you can do a meaningful calculation at one MeV in the early universe without knowing anything about what happened earlier. What happened at the Planck scale? You don't care. What happened at the GUT scale, the electroweak scale? None of that matters; you have got into equilibrium, there is no arrow of time, and therefore this is a very sound calculation precisely because of that. So although the universe is expanding and evolving, to a very good approximation at each moment we can consider it to be in a state of quasi-equilibrium, and that makes our job of calculating things very simple, because the full non-equilibrium problem would be horrendous; nobody would attempt it. So now that I have made neutrons and protons, their number ratio will be of order one.
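As a sanity check on those orders of magnitude, here is a small sketch; the g⋆ value, the prefactor in H, and the time-temperature coefficient are my assumed round numbers:

```python
# Weak freeze-out: equate Gamma ~ G_F^2 T^5 to H ~ 1.66 sqrt(g*) T^2 / M_Pl.
# Assumed inputs (mine): g* = 10.75 around an MeV (photons, e+e-, 3 nu),
# and a rough time-temperature relation t ~ 0.3 g*^(-1/2) M_Pl / T^2.

G_F = 1.166e-5   # Fermi constant, GeV^-2
M_PL = 1.22e19   # Planck mass, GeV
G_STAR = 10.75   # relativistic degrees of freedom at T ~ 1 MeV

# Gamma = H  =>  T_f^3 = 1.66 sqrt(g*) / (G_F^2 M_Pl)
T_f = (1.66 * G_STAR**0.5 / (G_F**2 * M_PL)) ** (1.0 / 3.0)
print(f"freeze-out temperature ~ {T_f * 1e3:.1f} MeV")

# corresponding time, converting GeV^-1 to seconds with hbar = 6.58e-25 GeV s
t_f = 0.3 * M_PL / (G_STAR**0.5 * T_f**2) * 6.58e-25
print(f"freeze-out time ~ {t_f:.1f} s")
```

The answer comes out at about an MeV and a fraction of a second, the "one MeV, one second" of the lecture.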
The reason is that the mass difference between neutrons and protons is also of order one MeV. This is very curious: there is no reason why the neutron-proton mass difference has to be of the same order as the freeze-out temperature, but it is, and that is what gives a reasonable neutron fraction, neither one nor zero; you can see the ratio is exponentially sensitive to it. What we will see later, in the last lecture, is that this is interesting because the freeze-out temperature is determined by the gravitational coupling versus the weak coupling, while the neutron-proton mass difference is determined by the strong and electromagnetic interactions, so all four fundamental interactions are involved in determining that number, to which the result is exponentially sensitive. So you can see that even this humble process allows us to put really tight constraints on variations of the fundamental couplings in the early universe, any change from what we measure today in the laboratory, and so on.

There is, however, a little problem. Even after neutrons and protons have gone out of equilibrium, they will try to combine to make elements, and the first element they will try to make is deuterium, which is the most weakly bound: its binding energy is 2.2 MeV. Of course, if I have a 2.2 MeV gamma around, it can break up the deuterium again and give me back neutrons and protons. Now you might think that nucleosynthesis should therefore start as soon as the temperature drops below 2.2 MeV, but in fact it doesn't; you have to wait a lot longer. Why is that? First, even when the temperature drops to 2.2 MeV, the average photon energy in a blackbody distribution is not T but about 2.7 T, so if the temperature is 2.2 MeV the average photon energy is about 6 MeV, and I have to wait for the temperature to drop further. But even when the temperature has dropped to, say, 0.1 MeV, what about the Wien tail? Even if the average photon energy is 0.1 MeV, I can find photons at 2.2 MeV in the tail; it is an exponential distribution, going as e^(-E/T). Even though the number of such photons is dropping exponentially, there are so many photons relative to the nuclei that I can afford to go deep into the Wien tail before the photons there become so scarce that I cannot find one to break up my deuterium. How far do I have to go? About 21 e-foldings into the tail, because e⁻²¹ is about 10⁻⁹, which is roughly the ratio of baryons to photons. So basically the temperature at which nucleosynthesis starts is given by 2.2 MeV divided by the logarithm of the inverse baryon-to-photon ratio; that simply reflects the fact that even when the typical photons are too cold to do the reverse process, there are photons in the tail which still can, and so the temperature has to drop a lot. It is for the same reason that when the universe recombines and becomes neutral, it does not happen at 13.6 electron volts; it happens at about a 30 times smaller temperature, because it is basically a photon gas. Well, once you start, which is at about 0.07 MeV if you put in η, you can see this is happening much later; at that time the age is three minutes, which is the title of Weinberg's book. Meanwhile, of course, the neutrons have been beta decaying; the neutron-to-proton ratio has gone down to about 1 in 7, because the neutron half-life is about ten minutes. And then basically all the neutrons get bound into helium, because helium is the most stable nucleus around, so the abundance of helium by mass is Y = 2(n/p)/(1 + n/p), because two neutrons make up one helium nucleus together with two protons. And then there is a little leftover trace of all these other things. But you don't make any heavy nuclei, because it is a very dilute system.
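Putting those pieces together numerically; the freeze-out temperature, η and the final n/p here are my assumed round values:

```python
import math

# Back-of-envelope BBN numbers from the argument above. Assumed inputs:
# Delta m = 1.293 MeV (measured), T_f ~ 0.75 MeV (rounded), B_D = 2.22 MeV,
# eta = 6e-10, and n/p ~ 1/7 after beta decay during the bottleneck.

DELTA_M = 1.293   # neutron-proton mass difference, MeV
T_FREEZE = 0.75   # assumed weak freeze-out temperature, MeV
B_D = 2.22        # deuterium binding energy, MeV
ETA = 6e-10       # baryon-to-photon ratio

# chemical equilibrium: n/p = exp(-Delta m / T) at freeze-out
np_freeze = math.exp(-DELTA_M / T_FREEZE)
print(f"n/p at freeze-out ~ {np_freeze:.2f}")

# photodissociation by the Wien tail delays nucleosynthesis until
# B_D / T ~ ln(1/eta), i.e. T_nuc ~ B_D / ln(1/eta)
T_nuc = B_D / math.log(1.0 / ETA)
print(f"T_nuc ~ {T_nuc:.2f} MeV")

# after beta decay n/p has dropped to ~1/7; all neutrons end up in He-4
np_bbn = 1.0 / 7.0
Y_p = 2 * np_bbn / (1 + np_bbn)
print(f"helium mass fraction Y_p ~ {Y_p:.2f}")
```

The three numbers that come out, a neutron fraction of order one fifth at freeze-out, nucleosynthesis starting near 0.1 MeV rather than 2.2 MeV, and a helium mass fraction of about 25 percent, are exactly the ones in the lecture.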
The density of the universe at the time of nucleosynthesis is about the density of the air in this room. You didn't know that, did you? It is an extremely dilute system, which is why we can do this simple calculation: there are no many-body effects, no fancy field-theoretical complications, it is a dilute gas, even more dilute than the Sun. So (a) it is dilute, so you don't have to worry about three-body processes, and (b) it is expanding very fast, so everything has to happen within minutes. That is why you don't make anything else; you just make helium (the hydrogen I have taken for granted). This is the reaction network, which shows you that essentially everything is happening around here; there is a little leakage through to these things, but there are no stable nuclei with A = 5 or A = 8, so it does not actually propagate further, except in exotic cosmologies. If you have an exotic cosmology, with, I don't know, isocurvature perturbations or something like that, you could make heavy elements, and then the observational absence of heavy elements allows you to put constraints on those possibilities. So this was all worked out; Bob Wagoner wrote a nice code in the late 60s, and more recently this has become a precision tool, because Duane Dicus and collaborators worked out that all this is happening in a plasma, so quantities like the electron mass need the finite-temperature, finite-density corrections to the reaction rates; Dave Seckel pointed out that the nucleon recoil corrections are quite important; and so on. The cross sections for these reactions have to be measured; they are not measured at exactly the right energies, but you can put all this together. So basically the bottom line is this itinerary of events: when the time is shorter than about 15 seconds, the temperature is high enough that you basically just have protons, neutrons, electrons and other
particles, and essentially no nuclei. Then the temperature drops enough that deuterium can survive, but helium still has not formed; then you get to three minutes, deuterium survives and makes helium, and nothing much else happens, just trace amounts of lithium. As an aside, when Weinberg wrote his book, he recounts somewhere that the three minutes is of course set by the rate of expansion, but at that time people only knew about two neutrinos, the electron and the muon neutrino, so the actual time scale for nucleosynthesis was two and a half minutes. Apparently his publisher told him it would sound much better to call the book "The First Three Minutes" rather than "The First Two and a Half Minutes". And later we did discover the tau neutrino, and it is bang on, it is three minutes; so that was very nice. By about half an hour the universe has got too dilute to do anything, and nucleosynthesis is complete. So we make predictions, and the predictions are based on following the abundances of these elements starting from some very high temperature where they are essentially negligible. You can see the log scale here: you just have neutrons and protons, and they are combining into deuterium and into helium. What is finally left over is this line of helium, this cyan line; you can see it growing from being almost negligible up to there, and that is the final value, comparable to the leftover protons. Then many orders of magnitude below that you have a little leftover trace of deuterium, the green line here, and even smaller than that a bit of lithium, which is down there somewhere (the beryllium-7 decays into lithium-7 later, so effectively they count together). So that is the first three minutes, well, half an hour. What you now have to do is put this on a precision footing if you want to use it to constrain new physics. For example,
somebody might come along and tell me: I have some model with very massive particles which couple only gravitationally to the standard model, and they decay only through some dimension-5 operator, so the lifetime is of order days or months; can you say anything about it? And I will say: sure. If the lifetime is days or months, then some fraction of them would already be decaying during the first three minutes, and they might have a measurable impact on the synthesis of the elements. So I have to measure cross sections like this; these are the important cross sections relevant for nucleosynthesis, and from them I determine the rates at which those nuclear reactions proceed. You can see the situation is not very good: most of these cross sections are not particularly well measured. Some of them are also relevant to the solar neutrino problem, so that provided the impetus for them to be better measured; but there are large uncertainties. So then you have to estimate the error bands using Monte Carlo methods. Basically what you do is run the network many, many times, get the matrix of output results, and relate them to the input parameters. This is done by constructing a so-called covariance matrix, for those of you who know about these things, which essentially encodes the change in the output quantities, the abundances, generically called y, as a function of the input parameters, which are all these reaction rates. Once I have quantified that, I can give you a nice visualisation of how the abundance of any particular element changes with a change in the cross section for the relevant reaction. So you can easily read off from this that this particular reaction is the most important for determining how the abundance of deuterium changes, while this reaction over here does not
matter at all. You see, there is a complex chain of processes, so you have to work out what is actually important to measure, what is important to pin down more tightly, in order that the predictions can be precise. You can see in general that the lithium abundance has very large error bars, it is affected by a lot of reactions, while the helium abundance here is much less affected; its sensitivity to the reaction rates is very small. So now I can plot these abundances in the traditional plot, where the helium mass fraction is called Y for some reason by astronomers and is shown on a linear scale, and the abundances of deuterium and lithium-7 are shown on log scales, so that is nearly ten orders of magnitude. Over those ten orders of magnitude I can predict what the abundances should be as a function of the baryon-to-photon ratio, which is of order 10⁻¹⁰ but whose precise value I don't know. And you can see that the helium abundance is nearly constant, because it is determined mainly by the weak interactions, the freeze-out value of neutrons to protons, but the deuterium and lithium abundances, these now being log scales, are strong functions of the baryon-to-photon ratio. And these are the corresponding uncertainties: for helium the uncertainty is very small, less than 1 percent, whereas for deuterium it is of order 10 percent, and for lithium-7 it can be 30 to 40 percent. Now, this is all numerical stuff, and there is a standard Monte Carlo code you can download and run on your laptop, but it is always nice to get some analytic feel for how things are going, and courtesy of these gentlemen, Dimopoulos and collaborators, we have an analytic description which is very instructive; see how simple it is, and it will be good training if you ever want to set up a similar approximation to complicated kinetics in a different context. The rate of change of any species X is determined by the
balance between the rate at which it is being created and the rate at which it is being destroyed; you agree with that. The rate of destruction is proportional to the abundance itself, because obviously if I want to destroy something I have to hit the bloody thing, and the number of particles is the abundance. So if I ask what is the equilibrium value, it is just J over Γ, because in equilibrium dX/dt is zero; that is trivial. The general Green-function solution is not so trivial, but I will leave you to work that out for yourself; in general you would have an integral over time of the kernel for the particular reaction. What they observed was that if this condition is respected, that is, if the logarithmic rates of change of the source and of the sink are comparable, such that their difference is much, much smaller than the overall rate Γ, then you can achieve a sort of quasi-equilibrium: even though the system is evolving, for all practical purposes it is in equilibrium, sufficiently so that one can work out the abundances without actually running any numerical code. Freeze-out then happens as usual when Γ is of order H, and the asymptotic abundance is the abundance at the freeze-out temperature, which is just the ratio of the source rate to the sink rate there. Of course, to do this you have to identify the important sources and sinks for any given element, or, in a different context, for whatever it is you are interested in. But this thing works: look at the timeline I showed you earlier, which is from the numerical calculation, and now you have these red dotted lines from the simple analytic calculation; they get deuterium and so on to within a factor of two, and helium to within a few percent, and it is a very simple analytic calculation.
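The tracking behaviour is easy to see numerically; here is a toy sketch, where the rate functions are invented purely for illustration, not real BBN rates:

```python
# Toy version of the quasi-static equilibrium argument above:
# dX/dt = J(t) - Gamma(t) * X. When J and Gamma drift slowly compared
# with the rate Gamma itself, X(t) tracks J/Gamma without any tuning.

def J(t):
    return 100.0 / (1.0 + 0.01 * t)    # slowly falling source rate

def Gamma(t):
    return 50.0 / (1.0 + 0.005 * t)    # slowly falling sink rate

X, t, dt = 0.0, 0.0, 1e-3              # start far from equilibrium
while t < 200.0:
    X += (J(t) - Gamma(t) * X) * dt    # forward-Euler step
    t += dt

X_eq = J(t) / Gamma(t)                 # instantaneous equilibrium value
print(f"X = {X:.3f}, J/Gamma = {X_eq:.3f}")
```

Even though both rates have changed by factors of a few over the run, the solution sits on top of the instantaneous equilibrium value, which is the point of the condition |d ln J/dt - d ln Γ/dt| ≪ Γ.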
So you don't need to waste computer time, unless you are obliged to by your contract to use computer time; just do it analytically. And the advantage is that I can now use what they have worked out to address things we are interested in, for example the limit on the number of neutrino species from nucleosynthesis. Of course we know that there are three types of left-handed doublet neutrinos which couple to the Z, and its width is precisely measured, but there could be other kinds of neutrinos, singlets which don't couple to the Z; there could be heavier neutrinos which are kinematically inaccessible to the Z but would still be relativistic here; there could be other relativistic particles, a very strong graviton background, and so on. So we characterise all of that in terms of an effective number of neutrino species. What the calculation you have just done tells me is that, because I know the time-temperature relationship, which is this, and I know how the rate of change of a given abundance goes in terms of the cross section, I can find a degeneracy: log η minus one half log g⋆ is a constant. That means that on the plot I showed you earlier, a shift in the baryon-to-photon ratio is equivalent to a shift in the number of relativistic degrees of freedom. So, without any Monte Carlo-ing, just by the simple chi-squared statistics all of us know how to do, I can read off the limit on the number of degrees of freedom in terms of that covariance matrix I showed you earlier. This is just a technical point; there is a code that we provided for this which can easily be used, and you can work out the limit on the number of neutrinos from the measured values of the abundances.
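To make the degeneracy concrete, here is a sketch of the standard g⋆ bookkeeping at T ~ 1 MeV and the equivalent shift in η it implies; the reference η is an assumed round value:

```python
import math

# Sketch of the degeneracy log(eta) - (1/2) log(g*) = const quoted above.
# Standard relativistic counting at T ~ 1 MeV: g* = 2 (photons)
# + (7/8) * 4 (e+ e-) + (7/8) * 2 per neutrino species.

def g_star(n_nu):
    return 2.0 + (7.0 / 8.0) * 4.0 + (7.0 / 8.0) * 2.0 * n_nu

ETA0 = 6e-10  # assumed baryon-to-photon ratio for N_nu = 3

for n_nu in (3, 4, 5):
    # the eta shift that compensates extra species at fixed abundances
    eta_equiv = ETA0 * math.sqrt(g_star(n_nu) / g_star(3))
    print(f"N_nu = {n_nu}: g* = {g_star(n_nu):.2f}, equivalent eta ~ {eta_equiv:.2e}")
```

One extra neutrino species raises g⋆ from 10.75 to 12.5, which on this counting is equivalent to shifting η by only a few percent, which is why the abundances constrain the two in combination.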
So how do you measure cosmic abundances? You want primordial abundances, and today is definitely not primordial; we are, what, 13, 14 billion years after the big bang. So let me describe the techniques, because this is something you need to know a little about: many of us are interested in limits on new physics, so we should really pay attention to the observables from which those limits are deduced, so that you have some feel at least for how reliable or unreliable they are. I mean, when you get a result from a collider, you say there are 3000 people working on it, they probably know what they are doing, let's just take it; and even then they don't always get it right. But here you will find there are maybe ten people in the entire world who measure the primordial helium abundance, which is the quantity of prime importance for the number of neutrinos. You measure it by looking at very old galaxies; they are blue because they have no ongoing star formation, star formation ceased long ago, so the colour is not red as it would be for a galaxy with active star formation. Quasars light up clouds of gas along the line of sight, the so-called Lyman-alpha forest, and using big ten-metre-class telescopes you can actually look for absorption lines due to deuterium in those clouds; that has now become possible. For lithium we can only look at very old stars in our own galaxy, the so-called Population II stars; they orbit the galaxy as part of a primordial spheroidal halo, and people believe they have seen lithium in them, which turns out to be interesting. I will dwell on that, because there is a possible anomaly with the lithium abundance: it does not quite agree with the big bang nucleosynthesis calculation, and any number of papers have been written implicating new physics, decaying particles, to explain the lithium problem. Maybe when I show you the actual data on lithium you might have a second
thought about how likely or unlikely that is. So first let me tell you about helium. Helium is pretty straightforward: helium is made in stars like our Sun, but when a star makes helium it also makes other heavy elements. So if I plot the amount of helium measured versus the amount of some other heavy element, like oxygen or nitrogen, I see that the relation is basically almost flat; there is a slight slope, this point is slightly higher than that one, but over quite a wide range of oxygen abundance the helium has hardly changed. This suggests that most of the helium we see in the universe is primordial; we just have to correct for the little extra created in stars. By the way, on this scale the Sun would be somewhere out there; these are very, very old systems with very little helium enrichment. Now, these are some measurements of the helium abundance by two or three groups, and they disagree with each other outside their quoted uncertainties. They all use different methods, because to measure an abundance you have to know the temperature and the density of the plasma emitting the radiation, whether it is in local thermal equilibrium, and so on; this is what astronomers know how to do, and they do it, but each group does it in its own way, and clearly there must be some unknown systematic, because otherwise these numbers should agree with each other within the quoted error bars, and they don't. So I have the unenviable task, on the Particle Data Group, of trying to make sense of these measurements and recommend some value, and the value we recommend is this one here, which might give you pause, because it looks very precise; but actually, you see, we have inflated the uncertainty considerably over that quoted by these people, who claim to have determined it as precisely as is possible. Anyway, don't worry about the details; the point is that there are large systematic uncertainties
so we have to do better there. The deuterium abundance is interesting; this has only become possible recently. When I look towards a quasar through this Lyman-alpha forest, I see absorption lines due to clouds along the line of sight, shortward of the quasar's own Lyman-alpha emission. This line is actually Lyman-alpha: it is at about 6000 angstroms rather than at its rest wavelength of 1216 angstroms, because this is a highly redshifted quasar, at a redshift of about four, so the wavelength is stretched by a factor (1 + z) of about five, and five times 1216 is about 6000. If I blow up one of these lines, it looks like that: this is the hydrogen line, and in its wing there is a little dip, which is due to deuterium; the isotopic shift is about 80 kilometres per second, and to see that little guy you need a huge telescope. Observe also that the hydrogen line is saturated; it is so strong that you cannot actually measure the ratio of deuterium to hydrogen there, because you cannot measure the hydrogen column, you only see the deuterium. And when you do all this with a big telescope like that, well, this is only possible if you happen to live in California and have time on the Keck telescopes, because they only give telescope time to happy Californians. So these people have managed to measure that little dip and determine the deuterium abundance, but the deuterium abundance seems to be all over the place: this is now a linear scale, but still you can see at least a factor of two to three scatter. This is the abundance in the interstellar medium, and it does appear that what we are looking at could have been depleted from some primordial value; even though you are looking at a Lyman-alpha cloud at a redshift of two or three, there are still heavy elements in those clouds, there has been some star formation, some supernovae, and deuterium could have been destroyed, it is very fragile. So we don't know what causes this dispersion.
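To see how demanding that measurement is, here is the arithmetic for where the deuterium feature sits; z = 4 is my illustrative choice of redshift:

```python
# Where the deuterium dip sits: the redshifted Lyman-alpha wavelength and
# the ~80 km/s isotope shift quoted above.

LYA_REST = 1215.67    # Lyman-alpha rest wavelength, angstroms
C_KMS = 2.998e5       # speed of light, km/s
D_SHIFT_KMS = 81.0    # H-D isotope shift, km/s

z = 4.0
lam_H = LYA_REST * (1.0 + z)          # observed H Lyman-alpha wavelength
dlam = lam_H * D_SHIFT_KMS / C_KMS    # offset of the deuterium feature
print(f"H Lyman-alpha at z = {z}: {lam_H:.0f} angstroms")
print(f"deuterium feature offset: {dlam:.1f} angstroms")
```

A couple of angstroms out of six thousand: that is why the ten-metre-class telescopes are needed.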
Nor do we know what it correlates with; this plot is an attempt to see whether it correlates with some heavy element. But very recently Max Pettini and company at the Institute of Astronomy at Cambridge have managed to find some particular systems, so-called damped Lyman-alpha systems, where the hydrogen column density can be precisely measured, it is not saturated, so they can determine D/H, and they have given us a very precise value. So if I put that together, I will be able to determine the baryon-to-photon ratio. But first let me tell you about lithium, since this has launched a very large number of phenomenology papers claiming evidence for new physics. In Population II stars, old stars in the galaxy, one observes that the lithium abundance at high stellar temperatures seems to lie on some constant plateau, and similarly if you plot it against the metallicity of the star: there is a huge scatter here, our Sun for example is here, but if I go to very metal-poor stars (this is a log scale), again I see some kind of a plateau. And the two French astronomers, Spite and Spite, claimed that this was evidence of a primordial origin: lithium is very easily destroyed by all kinds of nonsense that goes on in stars, convection, turbulence, what have you, but if you see a plateau, that is evidence of a primordial origin. So if you put all that together, you have some numbers for deuterium, for lithium, and so on; we cannot use the helium-3, because it can be both created and destroyed in stars and you don't know which way it goes. And if we take all that and put it on the plot I showed you earlier, this is what we get. That huge yellow band is our estimate of the systematic uncertainty on the measurement of the helium abundance; we think it could be anything between 23 percent and 26 and a half percent. The deuterium abundance, sorry, it is here, is much more precisely known; that
is the measurement in that limon alpha forest and the lithium abundance is somewhere around here and this we can see that this to agree okay with a barion to photon ratio which is something like what is it 2 3 4 5 6 times 10 to the minus 10 and that is the 95 percent confidence range if I do a systematic you know likelihood analysis okay lithium does not affect that too much because the uncertainties are so large that it doesn't affect the likelihood right the reason why we believe that that is the right number is because now we have an independent determination from the cosmic microwave background which is that purple bad bang on right this is extremely fortunate this is extremely fortunate because this is a measurement of the barion to photon ratio I'll explain shortly how it does that at about few hundred thousand years after the big bang that is a measurement of the barion to photon ratio at a few seconds after the big bang so there is no reason why the two have to be the same something could have happened in between right but if the two agree then barring some you know some some conspiracy it must be the same and one then supports the other so it's a cyclic argument but it kind of holds together though it is not a good idea to combine the two because we don't know that they are actually reflecting the same process now this is very interesting immediately because the value of the barion to photon ratio here is much shorter it's sort of 0.02 or something it's much short of what it takes for critical density and even short of what it takes to make up the matter content of the universe which is about in these units about six times bigger right so it is evidence for dark matter it is actually also evidence for baryonic dark matter because the actual amount of baryons we can see shining in light is about factor of three times smaller so most of the baryons in the universe are dark okay at least three quarters of them are dark they're not emitting any light but more 
interestingly, the baryons cannot make up the dark matter, because the dark matter density is about six times bigger than this baryon density. So immediately this is telling us two very interesting things, one of interest to astronomers and one of very much interest to particle physicists: if the dark matter is made of particles, they are not baryons. Moreover, the expansion rate of the universe determines the helium abundance, and this curve will become another curve parallel to it, above it, if I increase the number of neutrinos. If I add a sterile neutrino, or two new massless particles, or whatever, then this thing goes up, and then I may no longer have agreement between the helium abundance and the deuterium abundance. So I get a constraint on the expansion rate at one second; in fact, that constraint on the expansion rate at one second is more precise than the constraint we have on the expansion rate today. Interestingly, lithium is out of line with the others, and it has been said that this is possibly indicative of non-standard physics; but as I showed you, lithium is inferred very, very uncertainly from the Population II stars, so I don't think we can really make the case for a problem. I think I have just enough time to finish with a few remarks. One is how we measure the baryon-to-photon ratio from the microwave background. This is a whole lecture in itself and I cannot go into detail, but basically what we see on the last scattering surface of the cosmic microwave background is a snapshot of oscillations in the coupled baryon-photon fluid at the moment the universe turns transparent. In the next lecture you will see that this happens very quickly: the ionization fraction drops very sharply, so you get a very clean picture. If it went on for a long period the picture would be blurred out, and luckily for us it is not. That
last scattering surface has a causal horizon of the order of one degree, which corresponds to a multipole of about 200, so most of the action happens at this kind of multipole when you expand the temperature pattern in spherical harmonics. For our purposes here, forget all that; the only thing that matters is that if I add more baryons, they load those oscillations. As I increase the baryon density, the height of the first peak goes up and the height of this one here goes down, so the ratio of the two gives me a measure of the baryon density which is reasonably certain. It depends a little on the slope of the primordial power spectrum, but not very much. It then gives me a measurement of the baryon density which, subject to some small caveats, is actually more precise than the one from nucleosynthesis. Of course this is indirect: we are not directly measuring the baryon density, we are inferring it in terms of a model. So the fact that it agrees with the nucleosynthesis picture, which is based on hardcore basic physics, is very reassuring. When you do this measurement, this is the final number from WMAP; Planck gives a similar number, the difference is not very important, and this number is the one you saw quoted in the last slide as the tiny little band. And so, to end: what we have done in this lecture is reconstruct our thermal history starting from what we see around us. From the inverse of the Hubble expansion rate today, about 70 kilometers per second per megaparsec, we get an age of order 15 billion years. We have traced the expansion back by observing that the temperature is just related to the inverse of the scale factor, so a times T is roughly constant, apart from little glitches that don't show on this plot, when the electrons and positrons annihilate and so on. And
what we have seen is that if you go back in time to about a few minutes after the big bang, that is the earliest epoch we have considered: that is when the nuclei were made, and we therefore have reliable knowledge of the universe, using this adiabat that we have constructed, back to about here. Beyond this point we don't have any relics. We have no relics from the quark-hadron phase transition or the electroweak phase transition, unfortunately. There has been a lot of talk of possibly seeing quark nuggets or gravitational waves, and there is always hope we might see something, but to date we have no relics of the early universe from those phase transitions. We only have relics of the early universe in the form of the dark matter, the baryon asymmetry, and the density fluctuations that grew under gravity in the dark matter to give us large-scale structure. When these were generated we don't know; we have no idea at what time, but certainly it was before nucleosynthesis, because to do nucleosynthesis you needed the baryon asymmetry already. The expansion rate of the universe is precisely determined at nucleosynthesis and is consistent with a radiation-dominated universe; that tells you nothing directly about dark matter or about the fluctuations, but we know that in order to create fluctuations on scales outside the horizon you would have to go to some very early time when that could be done. It can only happen at the transition between radiation and matter domination, or during a de Sitter phase of expansion; these are the only two times when you can create fluctuations outside the horizon. So it happened at some early time, we don't know when, and we also don't know when the dark matter was created. But as you will see in the next lecture, it was not too early: dark matter of around the weak scale or lighter was typically created around the electroweak scale or below.
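The two rules of thumb behind that statement, freeze-out at roughly a twentieth of the particle's mass for a weakly interacting relic, and the radiation-era relation that t is about one second at T = 1 MeV, can be combined in a short back-of-the-envelope sketch. This is an order-of-magnitude illustration only, not a precision calculation:

```python
# Back-of-the-envelope decoupling estimates (order of magnitude only).
# Assumed standard rules of thumb:
#   T_freeze ~ m / 20          for a weakly interacting massive relic
#   t ~ 1 s * (1 MeV / T)^2    in the radiation-dominated era

def freeze_out_temperature_mev(mass_mev):
    """Freeze-out temperature, roughly a twentieth of the particle mass."""
    return mass_mev / 20.0

def time_after_big_bang_s(temperature_mev):
    """Radiation-era time-temperature relation, normalised to t = 1 s at 1 MeV."""
    return (1.0 / temperature_mev) ** 2

mass_mev = 1.0e6                               # a 1 TeV particle, in MeV
T_f = freeze_out_temperature_mev(mass_mev)     # 5e4 MeV, i.e. 50 GeV
print(time_after_big_bang_s(T_f))              # ~4e-10 s: around the weak epoch

# The same relation reproduces the earliest-equilibrium estimate quoted later:
print(time_after_big_bang_s(1.0e17))           # T = 1e14 GeV -> ~1e-34 s
```

Only the normalisation t(1 MeV) = 1 s goes in; everything else is the quadratic scaling of time with inverse temperature.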
That is because the freeze-out temperature of a massive particle is about a twentieth of its mass, if it is weakly interacting. So for a TeV-mass particle it happens somewhere here; basically this is the epoch where dark matter decoupling occurred. We are reasonably certain of the thermal history then, though not entirely certain: you could invent all kinds of things that happen then, and I cannot actually rule them out. If you want me to rule anything out, you have to do your stuff after one second after the big bang; from one second onward we are sensitive to anything that goes on. And in a machine like the large detectors at the LHC, you are sensitive to anything that happens within a microsecond: the detectors are only about 10 meters big, so anything living longer than a microsecond would have escaped from the detector. So between one microsecond and one second we really have no constraint at all on unstable particles. The furthest we can actually see directly is back to one second after the big bang, and as I said, all this stuff was created much earlier. I think that is my last slide, so thank you.

Question: Why do we neglect the chemical potential in the early universe?

Answer: Because it is very, very small; it is of order 10 to the minus 10.

Question: How do you know?

Answer: Because a chemical potential is by definition associated with a conserved quantum number, so it is conserved until today. What we see today is what the excess was in the early universe: the same baryons that you and I are made of. That is the 10 to the minus 10. In the early universe there were many, many more baryon-antibaryon pairs, but they contribute nothing more. Those of you doing heavy-ion physics may have seen the usual plot of the chemical potential versus the temperature. The difference between what you do in a star or an accelerator and the early universe is that you are looking at very different regimes: there the chemical potential is of order one, in the early universe it is of order 10 to the minus 10. Conversely, in the early universe we have a temperature, whereas at machines today we don't really have a temperature. It is a little misleading when we say, for publicity, that we recreate the conditions of the big bang; we don't really, because you don't create a thermal environment. If you collide gold on gold or something, then maybe you can get a thermal environment; you certainly don't get it colliding protons on protons. But it sounds good. Thank you.

Question: Two questions. First, how can you know that everything was in thermal equilibrium? You are looking at very short time scales, so one would imagine there isn't enough time to thermalize.

Answer: The calculation we do is to take the time scale which is the inverse of the scattering rate and compare it with the expansion time scale, which is the inverse of the Hubble rate. If I work out that the scattering rate is greater than the Hubble rate, that means the inverse of the scattering rate is shorter than the inverse of the Hubble rate; that is exactly what we do. So it does not matter how short the time scale is; it is a matter of the relative comparison of those two numbers. And as I showed you, you do get surprising effects, such as the scattering rate being unable to keep up with the Hubble rate if you go to sufficiently high temperatures. For example, above a temperature of 10 to the 14 GeV the Hubble rate, which goes as T squared over the Planck scale, is simply too large.

Question: How early is that in time?

Answer: We work it out. I told you it is
one second at one MeV, so 10 to the 14 GeV is 10 to the 17 times one MeV, and one over 10 to the 17 squared is 10 to the minus 34, so it is about 10 to the minus 34 seconds.

Question: The second question is very short. You mentioned that the electrons annihilate to photons very effectively, and that is not a nuclear process. So why does it have such a strong arrow of time, that you have an abundance of electrons and then you only have photons?

Answer: Electrons and positrons have electromagnetic interactions, so the cross-section for their annihilation is of order the Thomson cross-section, which is a very large cross-section. It is just the strength of the interaction that determines how strongly they annihilate. Try putting some electrons and positrons together: they will be gone before you know it. They annihilate very strongly; well, to be precise, they annihilate electromagnetically, I am using the word strong here in the natural-language sense. So they will wipe each other out and there will be nothing left. Or, to be more precise, the time scale for their annihilation is shorter than the time scale for the expansion of the universe. That is always the actual clock: everything should be stated in comparison to the expansion time scale. Thank you.

Question: Maybe another question related to this slide. You say the maximum temperature cannot exceed this 10 to the 14 GeV, but I am still a bit confused, because the nice plots on the next slide extrapolate to much higher temperatures.

Answer: They should not extrapolate higher; sorry, I did not make that plot.

Question: So basically above 10 to the 14 GeV there is no thermal equilibrium, and we should take that part of the plot with a grain of salt?

Answer: Yes, but you know, people want to show the quantum gravity epoch, et cetera. I am saying there is no
thermal equilibrium at that scale. Actually, there is no thermal equilibrium for another reason as well: you have to have enough particles within a horizon to actually have equilibrium, and you will find that as you approach the Planck scale there is less than one particle within a causal horizon. If the particle has nothing to scatter with, then what is the equilibrium? So we should think about these things. When people formulate the horizon problem, they integrate the distance light has spread from t equals zero; but t equals zero is the quantum gravity scale, and we don't even know whether there is a metric then. These are all very loose ways of formulating problems; they don't bear hard scrutiny.

Question: Just to be sure that I understood: we can't talk about temperatures above those scales?

Answer: We can't talk about temperatures above that scale, that is what I am saying. In the diagrams they keep them, yes, but as I just said, those diagrams are not quite accurate.

Question: Could I imagine some other theory, with more fundamental particles and different kinds of interactions?

Answer: No, you would have to change the coupling, and the only thing that goes in there is the strength of the coupling. You know that the coupling is asymptotically free, so the highest coupling you can contemplate at high temperatures is some number like that. I won't argue with you if you tell me it is one over 20 and not one over 24, but it is some number like that; you are not going to change it. Oh, I forgot to mention this very important thing: I can do this whole business only because the theory is non-abelian. It is only because of asymptotic freedom that I can actually talk about a weakly coupled dilute gas at high temperatures. In the old days, in the 1960s, people thought that the pion mass set the highest possible temperature, because that was the Hagedorn scale: above that you had a strongly coupled
system and you would not be able to have an ideal gas. Then asymptotic freedom was discovered, the fact that QCD actually becomes a very simple force, everything deconfined; only because of that can we do cosmology. Without it you would not be able to do cosmology. Thanks for bringing that up; I should mention this somewhere, because it is not sufficiently emphasized.

Question: Maybe it's naive, but you talk about conservation of entropy for the universe. It is not a closed system; how can I imagine that?

Answer: If it is not closed, what else is there? The universe by definition is everything there is. Even if it is infinite, it doesn't matter: I can always construct a box, and we have theorems like Birkhoff's theorem which mean I can ignore the rest of it. I just consider a box, put some photons into it, and expand it; the entropy of the photons in that box is what I am looking at. You are asking whether that box can exchange entropy with some other bath. I am saying there is no bath.

Question: My worry is that we can see only part of the universe. You can talk about that part, but not about the whole universe.

Answer: True, but in this room you and I can't talk to the rest of the universe either; we can only talk to what is bordering us. So I am saying: always focus on local physics. Local physics is all that we know about and all that matters in practice. I don't care what the universe is doing far away; I know that locally space-time is what it is, I can do measurements here, and I know things are conserved here. Take energy, for example: energy is not a well-defined concept in general relativity, and in particular it is not a well-defined concept in the Friedmann-Robertson-Walker metric, because to define energy you should asymptotically have Minkowski space-time, and the Friedmann-Robertson-Walker metric never becomes Minkowski. So energy is not a well-defined concept there.
That does not mean that energy conservation fails in practice. There is a nice paper about this by Witten, not the young Witten, his father Louis Witten. The point is that it does not matter whether energy is a well-defined concept in general relativity: for practical purposes, energy is bloody well conserved in this room, in any experiment that I do. The fact that I am unable to define it properly, because I don't have the right boundary conditions at infinity, should not prevent us from believing what we know is true. It is an interesting issue, and it bears thinking about. Thank you.

Question: About the neutrino species: I know there is this limit from nucleosynthesis, but there is also a limit from the amount of radiation in the CMB at recombination.

Answer: That's right, and that is what I was showing you. If the number of neutrinos were larger, that would delay the point at which the universe became matter dominated, and the microwave background is sensitive to the scale factor at the transition between radiation and matter. It is something called the early integrated Sachs-Wolfe effect. I have not discussed the spectrum in this lecture, but I can tell you very quickly with this plot here: the early integrated Sachs-Wolfe effect, which is not shown, would basically affect the shape of the spectrum here and over here, in the region of multipoles l from about 200 to about 400, and in principle that is sensitive to the precise point at which radiation gives way to matter domination. If I add extra neutrinos, that delays it. It is also degenerate with other quantities: I can fake that effect by changing something else. The CMB is a convolution of many, many unknowns: the spectrum of the primordial fluctuations, the baryon density, the matter density, the scale of the horizon at the point where matter domination occurs, and so on. So if I can fix those other quantities by some independent
means, then I can read off the radiation content. When they do that these days, they claim they can determine that the radiation content of the universe is more than photons: they can determine the effective number of degrees of freedom, and it is something of order 3.5 plus or minus one, so it is consistent with three neutrinos plus photons. Some people argued that there was evidence for something extra there, but I think the systematic uncertainty is far too large for that, and in any case the Planck analysis now says that everything is consistent with the standard model.

Question: And is that number a stronger bound than the one coming from nucleosynthesis?

Answer: No, it is not a stronger bound on the number of neutrinos. I would trust the nucleosynthesis bound as more secure, because it is a direct measurement of the expansion rate at the very time nucleosynthesis occurred; the CMB effect is more indirect. Correspondingly, though, the measurement of the baryon-to-photon ratio from the CMB is more accurate than the one from nucleosynthesis, as you saw. Neither pair of numbers has to agree, but as it happens they all turn out to be consistent.

Question: Excuse me, the entropy is large in the early universe, but at later times does it tend to a lower value?

Answer: No, the entropy is the same; we are considering a conserved system, so the entropy does not change. Are you referring to the second law of thermodynamics? Normally entropy is supposed to increase: your room gets more disordered, it never gets more ordered by itself. But in the real universe the entropy is so large that any increase of entropy due to structure formation and so on is completely negligible compared to the amount of entropy you already have. There are 10 to the 9 photons per baryon; that is a huge amount of entropy already. In principle, if you put all those
baryons into black holes, the entropy could be even bigger. So in some sense the entropy is very large, and in another sense it is very, very small; it depends on how you look at it.

Question: I mean that we start with a disordered system...

Answer: No, who said you started with a disordered system? We must have started with an extremely ordered system in order to get the universe that we see; otherwise it would not have expanded smoothly and uniformly, it would all have collapsed into black holes or something. That is the argument Roger Penrose makes: you have to have initial conditions with the so-called Weyl curvature exactly zero in order for that to happen, he claims. So discussions of entropy in the early universe have to be made very, very carefully, by saying with respect to what you are measuring it. From one point of view, I can say the universe is just a gas of photons, and for every billion photons I see there is one particle of matter; therefore, to very good approximation, it is just a photon gas, and the entropy of a photon gas is conserved. Those few particles of matter might form galaxies or do something or other, but they generate such a tiny amount of entropy that it makes no difference: it is one part in a billion, and I can neglect it. I can look at it like that. But I could also look at it this way: for this system to be like this, a photon gas with those particles just moving away from each other in the Hubble flow, the initial conditions have to be extremely finely tuned, to make sure that the particles didn't all conglomerate under gravity and make black holes, which have the maximal entropy in the universe. That needs very special initial conditions. We could discuss this for a long time, preferably over a beer, but it is not something we are concerned with here. We are talking just about the ideal gas, and its entropy is conserved, because there are no
sinks: as the gentleman asked earlier, there is no other source to which we are coupled. We are considering an isolated system, and therefore there is no way the entropy can change. So, to repeat one more time: we are at constant entropy, in quasi-equilibrium, and nonetheless we are able to study non-equilibrium processes like nucleosynthesis, because we use all kinds of physics tricks to treat them as quasi-equilibrium. That is because we are physicists; we always like to take shortcuts when we can get a simple answer. If you want to spend your life doing the full non-equilibrium, complicated, finite-temperature, finite-chemical-potential problem, you can do that, but you will get the same answer, so why bother? Are there more questions? Last chance. No? Then we thank the speaker again.
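The photons-in-a-box bookkeeping invoked in these last answers can be made concrete with a short sketch. It assumes the standard natural-units entropy density of a photon gas, s = (4 pi^2 / 45) T^3 (two polarization states), together with the scaling T proportional to 1/a used earlier in the lecture:

```python
import math

def photon_entropy_density(T):
    """Photon-gas entropy density in natural units:
    s = (4/3) * rho / T, with rho = (pi^2 / 15) * T^4 (two polarizations)."""
    return (4.0 * math.pi ** 2 / 45.0) * T ** 3

# Expand the comoving box: T falls as 1/a, so S = s * a^3 stays constant.
for a in (1.0, 2.0, 10.0):
    T = 1.0 / a
    print(a, photon_entropy_density(T) * a ** 3)   # same value for every a
```

Numerically this is trivial, since s scales as T cubed and T as 1/a, but it is exactly the statement used throughout: conserved comoving entropy is what ties the temperature to the inverse scale factor.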