Okay, welcome back. We can start the course on the Cosmic Microwave Background, and we have Raphael Flauger. Okay, everyone can hear me? Yeah. So, thanks so much for the invitation. I was here as a student twice, but never for the cosmology school; I was here for two of the string theory schools. I really enjoyed my time in Trieste, and I hope everyone's having fun. So, I was asked to talk about the Cosmic Microwave Background. I feel a little bit bad because, at least today, I'll only have slides; I don't have a nice blackboard talk like Enrico did. I'll eventually try to do some of the things on the blackboard, but at least today and in the beginning tomorrow there will be a lot of little images, and it'll mostly be about the history of the CMB, so it's probably better if I show them on slides rather than drawing you sketches. So, the outline for today: the beginning is very basic. I'll give a short review of general relativity, and I call it part one because it's just the homogeneous universe that I'm using. Then I'll talk in some detail about the prediction, or why there was an expectation that there should be a Cosmic Microwave Background to be discovered. Then I'll talk about the measurement and the history of the measurement, and eventually about the spectrum: the blackbody spectrum and the deviations from the blackbody spectrum that you might hope to measure at some point. Tomorrow I'll finally talk more about the fluctuations that Enrico already discussed, but today will be essentially just the homogeneous universe. I did look through the slides and realized that you've seen a lot of these things already, but I just wanted to make sure everyone's really on the same page. So I'll start really from the beginning, by showing you the original heading from Einstein's paper from 1915.
And I think everyone knows that the basic idea behind general relativity is that you should no longer, as in the Newtonian theory, think of gravity as a force acting between two bodies at a distance; you should think of gravity as arising because space-time is curved, and the curvature of space-time arises as a consequence of the matter distribution. This is what I'm showing in this little sketch. I'm also a little bit embarrassed by my drawings given the fancy animations in the previous talk on gravitational waves, so this will be a theorist's attempt at visualization. So this is the usual picture you've always seen. The first question you might ask is: how do you encode the geometry of the space? In principle there are different ways of approaching this, but everyone in cosmology, and in physics for the most part, is following what Riemann taught us in 1854 in his Habilitation, which is something like another version of a PhD thesis. This is where Riemann developed what we now call Riemannian geometry, and the idea is that the geometry of space-time is encoded in a line element, which already in his work was denoted in this way, so we're using the same notation. What the line element is supposed to encode is the distance between two nearby space-time points. That's a somewhat pedestrian definition, but it's good enough for our purposes. The idea is that if you have a point with coordinates x and y and another point with coordinates x plus dx and y plus dy, then in flat space you know from the Pythagorean theorem that the distance between them is the square root of dx squared plus dy squared, so the line element of flat space is just ds squared equals dx squared plus dy squared.
For the two-sphere, it's somewhat more interesting. Now you might be interested in the distance between a point with coordinates theta and phi, where theta is the polar angle and phi is the angle around the axis, and the point with coordinates theta plus d theta and phi plus d phi. There you know that ds squared is d theta squared plus sine squared theta d phi squared. These are the simplest spaces you can think about; you can also look at various other spaces, and I'm sure you've seen the Schwarzschild metric and so on. These are spaces with all-plus signature. In general relativity, the only difference is that we now also have a time direction, and in the Minkowski version you know that the line element is just minus dt squared plus dx squared. I won't really go into detail because I'm assuming everyone has seen this and has taken some course on special relativity; if that's not true, you can ask me or anyone else later. What's new here, and what isn't the case in Euclidean or Riemannian geometry, is that there are points that are null separated: points that you can reach from x by sending out light, which have ds squared equal to zero. For cosmology, what's nice is that we're really only interested in very simple geometries. At a given time slice, our universe schematically looks something like this: there's a bunch of galaxies, which are supposed to be typical comoving galaxies. You can measure the typical distance between these galaxies, which I'll call a, and then you can make a grid, so you could label points zero, one, two, and so on. The distance between two of these grid points is then just dx squared plus dy squared times the square of the typical distance between the comoving galaxies. And you might want to look at the same picture at some later time, because we know from Hubble that the universe is expanding.
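The line elements just described can be collected in one place (a sketch in standard notation; for Minkowski space I use the mostly-plus signature and c = 1, as is implicit above):

```latex
% Flat two-dimensional Euclidean space:
ds^2 = dx^2 + dy^2
% Unit two-sphere, polar angle \theta, azimuthal angle \phi:
ds^2 = d\theta^2 + \sin^2\!\theta \, d\phi^2
% Minkowski space:
ds^2 = -dt^2 + dx^2 + dy^2 + dz^2
```

Null-separated points are those connected by a path with ds² = 0, which is possible only because of the minus sign in front of dt².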
If you look at a later time, the only thing that has really changed is the physical distance, the typical distance between these comoving galaxies. So you know that the line element in general relativity for the flat FLRW universe is of this form, and it just describes the galaxies flying apart. This is the basics that I think everyone is familiar with. More generally, the universe doesn't necessarily have to be flat. Even if you assume that the spatial slices are isotropic, or maximally symmetric, you can allow the universe to be closed, so the slices are three-spheres; or open, so you have hyperbolic three-space; or flat, which is what we just saw. Enrico already mentioned, and you've heard it many times, that all data point us to a flat universe, so the universe is flat to a very good approximation, and I'll restrict to the flat FLRW metric. This means that the geometry of our universe is encoded in the scale factor a of t. You also saw in the metric that a rescaling of the scale factor is unphysical; it's just a change of coordinates. So physical quantities, at least in a spatially flat universe, should really be independent of the normalization of the scale factor. Typically you introduce the ratio of scale factors at different times, for example the scale factor at some time t divided by the scale factor at present. We'll see in a little while why I'm calling this the redshift, but this quantity is usually written as 1 over 1 plus z. The other interesting quantity you can look at is the fractional rate of change of the scale factor, which is the Hubble rate, or expansion rate, of our universe. These things I'm sure are familiar to everyone. The relation between the matter content and the geometry is given by the Einstein equations, or Einstein field equations, which you also saw in Enrico's talk.
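In equations, the quantities and relations just described read as follows (a sketch in standard notation; t_0 denotes the present time and dots denote time derivatives):

```latex
% Flat FLRW line element:
ds^2 = -dt^2 + a^2(t)\,\left(dx^2 + dy^2 + dz^2\right)
% Redshift factor:
\frac{a(t)}{a(t_0)} = \frac{1}{1+z}
% Hubble rate (fractional rate of change of the scale factor):
H = \frac{\dot a}{a}
% Einstein field equations with a cosmological constant:
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu} = 8\pi G \, T_{\mu\nu}
```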
And so on the left you have a geometric quantity: the Ricci tensor and the Ricci scalar, which are constructed from the Christoffel symbols. So you start with some metric, some geometry; you can compute the Christoffel symbols, compute the Ricci tensor, and the Ricci scalar, which is the trace of the Ricci tensor. This quantity is then determined by the matter distribution that you have. The cosmological constant you can either think of as a cosmological constant, like I'm doing here, or you can move it to the other side and think of it as vacuum energy; it's really up to you. The reason this combination appears is that you want the stress tensor for the matter to be covariantly conserved, and this is the combination on the geometric side that is covariantly conserved. In fact, if you look at some of the old papers, there were attempts to use not the Einstein tensor but just the Ricci tensor, and then you run into contradictions. So this is what fixes this combination. Also, as Hilbert pointed out already, the field equations can be obtained from the Einstein-Hilbert action by a variational principle. I don't quite know how to ask this, but just to get some idea: could you maybe raise your hand if you've derived the field equations from this action at some point? Okay. Just as an aside — it's not something I want to show, and it's also not essential — if you start with a theory in flat space with a massless spin-2 particle, you can convince yourself that at low energies the action describing the system will always be of this form. So this is, to some extent, the unique action for interactions between massless spin-2 particles and matter, up to higher-derivative corrections, which are negligible at long distances. They're certainly completely negligible in cosmology, because the scales that appear in the higher-dimensional operators are some microscopic scales.
They might be the string scale or the Planck scale, some scale that suppresses them, while the typical curvature in cosmology is of order the Hubble scale, so these corrections are suppressed by Hubble over the string scale, or Hubble over the Planck scale, to some power. This is true at late times. In what Enrico is talking about it's not entirely true, so there you might potentially be interested in some of these corrections. But for the talk today I'll be interested in times from nucleosynthesis to the present, roughly, and there you really don't have to worry about these kinds of corrections; this is really the unique action. Then you can just evaluate it for the metric that we have and work out the equations of motion. The 0,0 component gives you what we call the Friedmann equation: H squared equals 8 pi G over 3 times rho. The i,j, or spatial, part of the Einstein tensor, or of the field equations, gives you an equation that looks like this: 3 H squared plus 2 H dot equals minus 8 pi G times the pressure. Typically, instead of using this pair, we convert these two equations into the Friedmann equation and energy conservation. This is easy to see: you take another time derivative of the Friedmann equation to get 2 H H dot equals 8 pi G over 3 times rho dot, and then you can use that to get rid of the H dot in the second equation, and you get a continuity equation. If you assume an equation of state of the form p equals w rho — by no means the most generic equation of state, but something that naturally arises if you have pressureless dust, meaning non-relativistic particles with w equals 0, or if you have radiation, in which case w equals 1/3 — then this conservation law tells you that the energy density redshifts like 1 over a to the 3 times 1 plus w.
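As a quick sanity check of this dilution law, one can integrate the continuity equation rho-dot = −3H(1+w)rho numerically in the variable ln a and compare against the analytic result rho ∝ a^(−3(1+w)). This is only an illustrative sketch; the function names and step count are my own choices, not from the lecture:

```python
import math

def rho_numeric(w, a_final, steps=10_000):
    """Integrate d(rho)/d(ln a) = -3 (1 + w) rho from a = 1 (where rho = 1)
    to a = a_final with fixed-step RK4, mimicking a generic numerical solve."""
    h = math.log(a_final) / steps          # step in x = ln a
    f = lambda r: -3.0 * (1.0 + w) * r     # right-hand side of the ODE
    rho = 1.0
    for _ in range(steps):
        k1 = f(rho)
        k2 = f(rho + 0.5 * h * k1)
        k3 = f(rho + 0.5 * h * k2)
        k4 = f(rho + h * k3)
        rho += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return rho

def rho_analytic(w, a):
    """The scaling derived above: rho ~ a^(-3 (1 + w))."""
    return a ** (-3.0 * (1.0 + w))

# Matter (w = 0), radiation (w = 1/3) and a cosmological constant (w = -1)
# reproduce the 1/a^3, 1/a^4 and constant behaviour respectively.
for w in (0.0, 1.0 / 3.0, -1.0):
    assert abs(rho_numeric(w, 0.5) - rho_analytic(w, 0.5)) < 1e-8
```

Halving the scale factor multiplies the matter density by 8 and the radiation density by 16, while the cosmological constant stays put.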
If you have a more interesting situation, with not just one component but, say, some matter with w equals 0, some radiation with w equals 1/3, and a cosmological constant with w equals minus 1, then the Friedmann equation looks like this: the energy density redshifts like 1 over a cubed for the matter; like 1 over a to the 4 for the radiation, because the wavelength of the photons is also redshifting; and it's constant for the cosmological constant. Now, one thing we'll still need — I said I would show you why we call it the redshift — is how particles move in this background. So you just have a probe particle now, not taking the back-reaction into account; you're really just interested in some electrons or protons, some gas of photons. The action for these point particles is the world-line action, which just measures the invariant length of the curve, and if you vary it, you find the geodesic equation: these particles move along geodesics. In the FLRW metric, it's easy to work out what that looks like. I could ask again whether everyone's derived it, but maybe I'll save it for now; if you haven't derived it, you should do it as an exercise, and if you have, that's good. You find that dx i by d tau redshifts like 1 over a squared, and so if you look at the momentum of a particle, which for a massive particle is defined in this way, then you get a factor of a squared from the spatial part of the metric and 1 over a to the 4 from the dx i by d tau factors, so you get 1 over a squared under the square root, and you find that the momenta of particles redshift like 1 over a.
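The multi-component Friedmann equation and the momentum redshift just derived can be sketched in a few lines of code. The density parameters and Hubble constant below are illustrative round numbers I've chosen, not measured values:

```python
import math

H0 = 70.0                        # Hubble constant in km/s/Mpc (illustrative value)
OMEGA_M, OMEGA_R = 0.3, 1e-4     # matter and radiation density parameters (toy values)
OMEGA_L = 1.0 - OMEGA_M - OMEGA_R  # cosmological constant, enforcing spatial flatness

def hubble(a):
    """H(a) from the Friedmann equation with matter (rho ~ a^-3),
    radiation (rho ~ a^-4) and a cosmological constant (rho = const):
    H(a)^2 = H0^2 (Om a^-3 + Or a^-4 + OL)."""
    return H0 * math.sqrt(OMEGA_M * a**-3 + OMEGA_R * a**-4 + OMEGA_L)

def momentum_today(p_emitted, z):
    """Momenta of freely propagating quanta redshift like 1/a,
    so p_today = p_emitted / (1 + z)."""
    return p_emitted / (1.0 + z)
```

With these toy parameters, hubble(1.0) returns H0 by construction, and the expansion rate grows toward earlier times (smaller a), as the lecture describes.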
Here I'm specifically looking at massive particles, because the world-line action for the massless ones is a little bit more complicated, but it's easy to generalize, and the result remains true for massless particles: the momenta decrease as the universe expands, specifically like 1 over a. So if you look at the momentum of a particle today, you find that it's the momentum the particle had at the time it was produced times this redshift factor. If it was produced at some early time, we observe it with a momentum that's smaller by a factor 1 over 1 plus z than the momentum with which it was produced. This is why I call this the redshift factor: the momenta of these quanta just redshift, and this is also what explains the 1 over a to the 4 for radiation — you have an extra 1 over a, from the redshift, in the relation between the energy density and the scale factor. So that was the lightning review of general relativity; hopefully you've seen all of this before, and if not you can ask around. Now what I want to do is talk in some detail about why there was an expectation that we should see a cosmic microwave background, some bath of radiation. You should, to some extent, forget what people told you in the earlier lectures, imagine that you don't know anything about cosmology, and try to go through it in the same way that people at the time discovered that there should be this bath of hot radiation. What people were studying at the time was the question: what is the origin of the chemical elements? At the time there was an idea that there should be some equilibrium state during which the heavy elements are produced, but in 1946 Gamow pointed out that if you extrapolate the expansion rate of the universe backwards to the energy density that you need to produce these heavy elements, the universe is actually expanding very rapidly. So this is just taking the equations we had on the previous slide and extrapolating
them backwards to the energy density you need for these nuclear reactions. He writes that the conditions necessary for rapid nuclear reactions existed only for a very short time, "so that it may be quite dangerous to speak about an equilibrium state which must have been established during this period." So he's saying it's really unclear that there was an equilibrium state, and maybe you should think of it as a non-equilibrium process. This is what Alpher, Bethe, and Gamow studied in their famous 1948 paper. Here the idea was that all the heavy elements were formed by neutron capture: you start with a universe filled entirely with neutrons. I'm not entirely sure why this was the assumption people made, but somehow the intuition was that the early universe is somewhat like a neutron star, where the pressures are so large that the electrons get pushed into the nuclei — this is at least what they write in the paper. We now know there are weak interactions, and eventually they also figured that out, but they assumed the universe started entirely filled with neutrons. Some of them decay, so you get protons; some of the neutrons get captured on the protons, so you form deuterium; you capture an additional neutron, and so on. The idea was that you really form all the elements in this way, by capturing one neutron at a time. So you have the rate of change of the abundance of the i-th element, where i runs over the mass number: the rate of change is, up to a proportionality factor with scale factors and so on, given by the cross-section for neutron capture times the number density of the nucleus with mass number i minus 1, minus the rate at which the i-th nucleus decays. This is the system of equations they studied in their paper. Around that time there were measurements of neutron capture cross-sections for MeV neutrons, and they used these measurements of the cross-sections, and with the
cross-sections you can solve this system of equations and compute this quantity: something like the neutron number density integrated over the time the process took place. This is really the only free parameter in the system, with initial conditions where you start with all neutrons and all the other nuclei at zero. Then the only thing you really need to know is essentially how many of these neutrons you had; equivalently, instead of the number density, you can write it in terms of the energy density, by relating the energy density to the number density through multiplication by the mass. So what you can do is compute this quantity and adjust it to fit the abundances of the heavy elements. This is what they were doing in the paper: these were the measurements that came out around that time, and then there was a fit from the equations I showed you. Obviously, we can now do this in five minutes, or maybe less, in Mathematica; at the time it was substantially more difficult. It turned out — we'll talk about this more — that there were a number of issues in doing this. One was that there was a numerical mistake in the paper, so the number they actually quote is off by some ten orders of magnitude. The second, which we'll discuss in more detail, is that at the time they were fitting to the abundances of heavy elements, which we now know didn't really form in this way, and we'll explain why. Alpher eventually corrected the numerical mistake, so I won't quote the wrong numbers, but once you do this exercise correctly and self-consistently, you find a number that is something like 10 to the 18 seconds per cubic centimeter. What you can do with it is this: you can compute the number of neutrons. On the one hand, you know that the universe is currently filled only with matter, so you have a matter-dominated universe,
which means you have an energy density that goes like 1 over 6 pi G t squared, and the energy density here is the energy density in, let's say, protons and neutrons. For the number density of neutrons you have an additional factor because they decay, and so you can integrate this equation and, from the value you need, estimate the time at which nucleosynthesis should start for this process, assuming the duration of the process is of order the neutron lifetime. If you do this estimate, you find that the start time for nucleosynthesis is about 10 to the 4 seconds. And this is problematic — maybe I could ask why it's problematic, but it's kind of on the slide, which is not too well organized. The problem is that the neutron lifetime is around 880 seconds, so in this type of cosmology all the neutrons would decay. In other words, the universe would be filled entirely with hydrogen, which is a universe we just don't live in. So Alpher at this point points out that a universe in which you have hot radiation around, not just hot neutrons, actually provides a way out: the universe is expanding more rapidly, nucleosynthesis starts earlier, and it starts at a time when you still have neutrons around and can generate the heavy elements. So this was one of the reasons people thought there should be this radiation around. One of the problems, as I said, with these elements is that there is a gap at mass numbers 5 and 8: there are no stable nuclei with mass number 5 or 8, so you cannot really generate all the heavy elements in this way; you cannot cross these barriers. Alpher somehow never gave up on this idea and always thought it would somehow work; there were lots of papers written about trying to increase the densities so that you get three-body interactions, which we know happen in stars, but in the early universe this just doesn't happen. So the first paper that in some sense
was on the right track was a paper in 1948 by Gamow, which points out that well before the heavy elements form, you certainly have to form deuterium along the way: you first want to capture a neutron on a proton to form deuterium, and before this happens nucleosynthesis certainly cannot proceed. So the question is: when do you start forming deuterium? For the estimate in this case, you estimate the rate of neutron capture on protons and require this rate to be comparable to the Hubble rate — if you capture too few of them, you will not form deuterium. You can do this estimate: Hubble goes like 1 over t, so you can rewrite this equation and estimate this number again; this is the analog of the integral we had before. What was known at the time were the nuclear cross-sections for neutron capture on hydrogen, and also the typical velocities, and you get an estimate of this order of magnitude. This is again in a universe still filled only with matter, but it's at least an estimate that conceptually makes sense: it's no longer based on fitting to the heavy elements, which were not produced in this way, but just on when deuterium actually has a chance to form. And again, in a matter-dominated universe, deuterium would form at times later than 10 to the 4 seconds — a time much longer than the neutron lifetime — and so in this scenario too you would end up in a universe filled only with hydrogen. Based on this, you might think: okay, so maybe there was this blackbody radiation around at early times. From the current point of view it's maybe difficult to understand why there wouldn't have been hot radiation around at the time, because there are plenty of interactions that actually produce photons and so on, but at the time it took some iterations to come to that conclusion. Based on these ideas they concluded that there had to be this blackbody radiation, and they also estimated the
temperature, and concluded that today there should be blackbody radiation around with a temperature of about 5 Kelvin. This is very close — we now know from various CMB experiments that there's blackbody radiation at 2.7255 Kelvin. But some of the estimates were still not entirely self-consistent. Once you add this hot radiation, it doesn't so much matter when you first form deuterium; you want to understand when you first have an appreciable number density of deuterium. At early times, if you have a lot of radiation around, the radiation will disintegrate the deuterium as soon as it forms, and you're mostly still in a system of protons and neutrons. So the beginning of nucleosynthesis is really when photodissociation becomes too inefficient to destroy the deuterium, and you then eventually have time to capture additional neutrons on the deuterons. Here I'm showing you what one might call the first careful, or more modern, study of the formation of the light elements in the hot big bang, by Fermi and Turkevich. This was never published, apparently because it used cross-sections that at the time were classified — they came from the Manhattan Project. They were eventually declassified, but Fermi and Turkevich never wrote the work up; they gave it to Alpher, so you can find it in a review on nucleosynthesis by Alpher. You see that a large number of reactions were included. One thing still done at the time was to assume the universe started in a state of all neutrons, and it was pointed out in '49 by Gamow and Hayashi that there are really weak interactions: collisions of neutrons with neutrinos convert them into protons and electrons and so on, so there's really a thermal equilibrium between the neutrons and protons. It's from then on that we
have the right initial conditions for the modern nucleosynthesis calculations. What you can write is that the number density of neutrons relative to protons is exponentially suppressed — this is just the mass difference between neutrons and protons — and, from the neutron decay, you can compute this ratio to be 0.16 times the decay factor here. This is what you have until these interactions become efficient enough, which happens at temperatures of around 10 to the 9 Kelvin. So you can just plug that in and get a helium mass fraction — the 4 is just because you have 4 nucleons in helium — so you compute the mass fraction of helium relative to the total mass, you can easily convert this into this formula, and then you plug in the time at which nucleosynthesis happens, when these processes take over. At that point you have a fairly modern computation of nucleosynthesis, but these predictions were largely forgotten, because it became clear that the heavy elements couldn't have formed in this way, because of the gaps at mass numbers 5 and 8, and it also became clear that nucleosynthesis in stars could explain the abundances of the heavy elements. This was mostly due to the work of Hoyle and collaborators, and the idea then was: if you can form the heavy elements in stars, maybe you can actually form all the elements in stars. So some of these things were just forgotten. What makes it somewhat ironic is that at the time there actually was evidence for radiation at a few Kelvin, from measurements by Andrew McKellar around 1941, and he did give talks about it; it's at least documented that he gave a colloquium that Gamow attended, and that Gamow apparently requested to talk to him after the colloquium. So it's not clear exactly why this was missed — in other words, it's not clear what they talked about, presumably not actually the blackbody radiation that both groups
somehow were talking about. As I said, there was then a lot of progress by Hoyle and collaborators, which also started some of the modern stellar nucleosynthesis calculations. He pointed out that nucleosynthesis in stars can explain the abundances of the heavy elements — and we know that this is how the heavy elements formed — but it actually cannot explain the abundance of helium: there's too much helium, and we couldn't form all of it in stars. This is something Hoyle pointed out in his paper, and from then on the idea was taken more seriously, but maybe still not seriously enough; people didn't put all the pieces of evidence together. Then, in 1964, Dicke at Princeton asked the question: if you have a bounce, could you set up a hot universe that expanded and still has enough hot radiation around for us to detect today? This was rather unrelated to all the earlier calculations — the group at Princeton was really unaware of the nucleosynthesis calculations — and Jim Peebles independently worked out the nucleosynthesis calculations again. So Peebles worked on the theory, and Roll and Wilkinson worked on the microwave radiometer; here I'm showing you the picture from the top of the building with the radiometer. At the same time — and I think this is relatively well known to everyone — there were Penzias and Wilson, who had their antenna and couldn't explain the excess antenna temperature. As everyone knows, there were pigeons they were trying to get rid of, thinking the pigeons might be the problem. It eventually became clear once they communicated with the Princeton group, though this didn't happen in a very direct way, even though it's very close by — just 30 miles away, you can drive from Princeton to the antenna in very little time. It happened in a very indirect way: Jim Peebles gave a talk about what they were working on at Princeton, and then Kent Turner, who attended the
talk, talked to Bernie Burke, who then talked to Penzias and Wilson. So it was quite indirect, but eventually it was obviously clear to the Princeton group that what Penzias and Wilson had seen was the radiation they had been looking for. So there were these papers back to back: the measurement of the excess antenna temperature by Penzias and Wilson, famous for its modest title, and then the interpretation of this radiation in terms of cosmic blackbody radiation by Dicke, Peebles, Roll, and Wilkinson. Obviously, as soon as you claim you've detected cosmic blackbody radiation, you have to actually show that it is a blackbody, and here there was really only a measurement at a single frequency, so you want measurements at many more frequencies. What was good, to some extent, is that the Princeton measurement was at a different frequency. It's not too conclusive, because you see it's in the Rayleigh-Jeans part of the spectrum, but here are the two data points one had at the time: the Penzias and Wilson measurement of radiation at around three Kelvin, and the Princeton measurement shortly after, also at three Kelvin, which confirmed it but obviously still fell short of showing that this was a blackbody. You really want to get to higher frequencies to convince yourself that what you're seeing is a blackbody spectrum. The satellite that everyone knows measured this blackbody spectrum of the cosmic microwave background was proposed in 1974, a long time ago, and it took a long time to build, to fly, and to analyze — this is the proposal for COBE by these people. Everyone, I'm sure, has seen the beautiful measurement of the blackbody spectrum by the COBE FIRAS instrument, the best blackbody spectrum we've ever measured; this is their measurement. One thing that's less well known is that around the same time there were other people actually trying to measure this. In particular, there were
attempts by Herb Gush and his collaborators. Here's a paper from 1973 where their measurement didn't work: they were building detectors and putting them on sounding rockets. They had these flights; here they failed to measure anything because of contamination by radiation from the Earth, and there were also flights where you were still seeing exhaust from the rocket, and so on. So it's not as clean a measurement as the satellite measurement, but people were trying for quite a while, since the early 70s, and they also succeeded eventually — just a few months after COBE measured the blackbody spectrum. This is the measurement from the group around Herb Gush: Herb Gush, Mark Halpern, and Ed Wishnow, and here you see one of the images. So this is another measurement that's often forgotten, but I think it deserves more credit than it got — it was a much smaller group of people, and they were working on it for, well, 16 or 17 years. So now let's say a few more words about the spectrum. What are the processes that ensure that the cosmic microwave background is actually a blackbody? The obvious processes are those where the electrons scatter photons, that is, Compton scattering, where you can exchange energy. At higher temperatures, double Compton scattering is also efficient, so you produce additional photons, and you can also produce additional photons through bremsstrahlung. So you have processes that exchange energy between the different components and processes that allow you to change the number density of photons, so you know that you can bring them into thermal equilibrium, and you expect a spectrum, at least in the very early universe when all these processes are active, that is the blackbody spectrum: just 8 pi nu squared over e to the h nu over k T minus 1. And the question then is how does
it remain a blackbody all the way to the present? This is something I'll try to discuss in the next few slides. It's not completely obvious that if you have a blackbody at early times it remains a blackbody at late times, because the radiation eventually is no longer in thermal equilibrium with the matter: the rate of the interactions goes down, and eventually the spectrum might be distorted. So you should ask at what level we should expect to see departures from a blackbody. Of course we know that we do expect a blackbody, because we've already measured it, but at what level should there be departures?

For now I'll live in an idealized universe where I assume that all the photons last scatter at the same time. We'll see that this doesn't really matter, but for simplicity let's assume all photons last scatter at the same time; let's also assume that we have a blackbody spectrum, or something close to it, until last scattering; and we'll ignore processes that inject photons. These are the assumptions we'll check, getting rid of one at a time, and we'll see what the expectations are for departures from the blackbody spectrum. They're very small, but they may be detectable sometime in the future.

So the first question: imagine we have these hot photons, they all last scatter at the same time, and after that they're just free streaming. How does the expansion affect the spectrum? Here maybe I'll briefly write some things on the board, but it's very simple, so you can probably also do it in your head. The number density of photons with frequency between ν and ν + dν at time t, n(ν, t) dν, has to be related to the number density at the time of last scattering, t_L. A photon that we observe with frequency ν today is redshifted, so it last scattered with a higher frequency, ν a(t)/a(t_L), and the same factor applies to the frequency interval, dν a(t)/a(t_L). In addition, the number density itself is diluted by the expansion of the universe, so there's a factor (a(t_L)/a(t))³. Putting this together,

n(ν, t) dν = (a(t_L)/a(t))³ n(ν a(t)/a(t_L), t_L) d(ν a(t)/a(t_L)),

and the spectrum at last scattering we know by assumption: it's the blackbody spectrum. So

n(ν, t) dν = (a(t_L)/a(t))³ × 8π (ν a(t)/a(t_L))² / (e^{hν a(t)/(a(t_L) k T_L)} − 1) × (a(t)/a(t_L)) dν,

and you see that the powers of the scale factor all cancel, so this just becomes

n(ν, t) dν = 8πν² dν / (e^{hν/kT(t)} − 1),

where the temperature T(t) = T_L a(t_L)/a(t) is simply redshifted relative to the temperature at last scattering by one power of the scale factor. So the expansion of the universe preserves the blackbody spectrum. This wouldn't be true for massive particles, but it's true for any massless quanta: you just redshift the temperature. You can also easily convince yourself that this argument only depends on the photon energies, so the conclusion remains true even if there are processes that modify the momenta, as long as they no longer change the energies appreciably. You're still allowed to scatter; if you only redistribute the directions but no longer change the energies appreciably, the conclusion stands. So what we'll have to check eventually is that around the time last scattering occurs we don't have processes that change the energies dramatically, only ones that redistribute the momenta, and we'll see that this is actually true. To do this, we'll have to understand
when last scattering actually occurs, and what processes are active at that time. Photons scatter efficiently as long as the rate at which a photon scatters is large compared to the expansion rate of the universe; this is the same estimate we did before, and you've probably seen it many times in this school: you estimate the scattering rate and ask whether it's larger or smaller than the Hubble rate. If there were no recombination, so if we just had a plasma at all times, you would estimate this rate by plugging in the number density of electrons, which by charge neutrality has to be of order the number density of baryons, which you can write as the energy density in baryons divided by the mass of the proton, redshifted. Using what we know about the expansion history of the universe, you would find that scattering becomes inefficient at around 100 Kelvin. So the question is whether this happens first, or whether the universe recombines first. We know that it recombines first, but let's see how you estimate this; I saw that you already had this in other lectures.

In thermal equilibrium you can compute the ratio of the number density of hydrogen in the 1s state to the product of the number densities of electrons and protons, and from your statistical physics class you know that it takes the form

n_H / (n_e n_p) = (2πħ² / (m_e k T))^{3/2} e^{B/kT},

where B is the binding energy of the 1s state. The universe has to be neutral, so we can set the number density of electrons equal to the number density of protons, at least after helium recombination, and we can rewrite this equation in terms of the free electron fraction X_e = n_e/n_b:

X_e² / (1 − X_e) = (1/n_b) (m_e k T / (2πħ²))^{3/2} e^{−B/kT}.

This is the Saha equation; I'm writing it in this form, though sometimes it's written slightly differently, which we can discuss. It tells you, in thermal equilibrium, the free electron fraction as a function of temperature, and you can just plot it. Everything in it is known: you know that n_b redshifts like 1/a³, you know it's the energy density in baryons divided by the mass of the proton, and the energy density in baryons you know from the measurements we have, so you can just plug in Ω_b h² and so on, plug in the helium mass fraction, and plot this as a function of temperature. You see that if the universe were to recombine in thermal equilibrium, you would expect hydrogen to recombine, or to form for the first time, at a temperature between 3,000 and 4,000 Kelvin.

Now, this isn't quite the right way of going about it, because recombination doesn't really happen in thermal equilibrium, for a number of reasons. First, you emit photons when electrons and protons combine, and the photons emitted in the process readily ionize other hydrogen atoms again. Similarly, photons emitted in transitions from highly excited states to low-lying states excite other hydrogen atoms, so they don't really escape from the medium, and there's no net recombination. The Lyman-alpha photons from the 2p to 1s transition also don't escape; they re-excite other hydrogen atoms. All this delays recombination, and eventually the two-photon transition from 2s to 1s becomes the relevant channel. This simple three-level atom that I'm describing was studied independently by Peebles and by Zel'dovich, Kurt, and Sunyaev, and the equation you get if you take these things into account is an equation for the free electron fraction that looks like this. Again, you now know everything in principle, and you can plot it, and you see that indeed, if you include all the processes (well, "all the processes" is too simplistic), so this is what's
usually called Peebles recombination. This is no longer really precise enough for the measurements that are done now, so nowadays more levels are included in the computations, but this is the basic picture: recombination is delayed because it doesn't occur in thermal equilibrium, but it still happens at around 3,000 Kelvin, which is much higher than the 100 Kelvin we estimated. So certainly the universe becomes neutral first; the reason photons no longer scatter effectively off matter is not the expansion but the fact that the universe becomes neutral. What you can do is convert this recombination history into a plot of the probability for a photon to last scatter at a given time. There's some probability distribution for a photon to last scatter, and if you do the calculation, you see that it peaks at a temperature of around 3,000 Kelvin. So a typical photon that we see today last scattered at a temperature of around 3,000 Kelvin.

Okay, so at 3,000 Kelvin, the question now is whether there are still processes that change the energies, whether the scattering is elastic: at this time, do you exchange energy, or do you just redistribute the momenta? As we said, the rate at which a photon scatters off the electrons is n_e times the cross section times the speed of light, which I kept in the slides. But that is the rate at which photons scatter; the typical energy you exchange in one of these scatterings is of order (kT)²/(m c²), so it's down by one factor of kT/(m c²) relative to kT. To make an appreciable difference you would eventually like to exchange energies of order kT, so the relevant rate for energy exchange carries this suppression, and it actually drops below the Hubble rate for temperatures below about 10^5 Kelvin. That makes sense: in other words, Thomson scattering only modifies the spectrum at temperatures above 10^5 Kelvin. At temperatures below 10^5 Kelvin, everything we said before applies, because you're really only changing the directions of the photons, not their energies appreciably. So even though the photons clearly don't all last scatter at the same time (there's clearly a distribution, with some probability to last scatter at 3,500 Kelvin and some probability to last scatter at 2,000 Kelvin), it doesn't matter, because the scattering that's still going on at that time doesn't change the energies of the photons; it's elastic Thomson scattering.

As I also said, we were ignoring processes that inject photons into the plasma, and this is not entirely true, obviously. For example, the recombination of helium, which happens before the recombination of hydrogen, produces some line emission, and these photons are injected at temperatures low enough that the energies are no longer redistributed, as we just discussed. So there is some modification to the blackbody spectrum that you expect from recombination lines, which is what I'm showing here; this is from a paper by Jens Chluba and Yacine Ali-Haïmoud from last year. This is something I'll say a few more words about in a second, but it's an area that's been very active, also because there are hopes that you might see this either with a future satellite or from the ground; there are currently some groups trying to measure at some frequency and detect the spectral distortions from helium recombination. Now you might ask why you should care. The reason is that if you can measure it, you have a completely independent measurement of the helium abundance in the universe; it's not something that you extract from
stars; it's a very clean measurement. So this is one of the things you could do with it. It's mostly something that confirms our picture, but it really is an independent measurement, for example, of the helium fraction.

Now, above 10^5 Kelvin we're in the opposite regime: you do exchange energy appreciably. And if you go to even higher temperatures, at some point the double Compton scattering process that I was sketching earlier becomes efficient too; it becomes inefficient at temperatures below around 6 × 10^6 Kelvin. These were the processes where you emit an additional photon: you have your electron and you emit an additional photon, so this is when you start to also be able to change the number of photons. At temperatures below about 10^6 Kelvin these photon-number-changing processes freeze out, so you no longer necessarily have a blackbody spectrum; a blackbody spectrum is only guaranteed at temperatures above about 10^6 Kelvin. Between about 10^5 and 10^6 Kelvin there is a period where you can no longer change the number of photons, but you do still change the energies, so what you can generate is not the blackbody spectrum but a spectrum with something like a chemical potential, e^{hν/kT + μ}, where depending on the conventions it's plus or minus μ, and often the kT is factored into it. So you get a spectrum with a chemical potential, and this is why this intermediate period is called the mu era. After this time, below 10^5 Kelvin, you are also no longer capable of redistributing the energies: if you inject something into the plasma, you directly see the distortions, and this is typically called the y era, because this is when you generate some Compton y parameter.

Then I'm showing you what these various spectral distortions look like. There are a number of processes that can go on. Silk damping is maybe the most traditional one; this is something I'll talk about in a bit more detail at some later point. You have a plasma of baryons and photons that are tightly coupled, but as you go to smaller and smaller scales there is eventually diffusion: photons diffuse over some length scale, and this erases power from the smallest scales and injects some of the energy density from the small scales into the large scales, which shows up as a spectral distortion of the CMB. That is what the blue line here is showing. Then there are some other things: here are the recombination lines from helium that I was already showing, and in principle you can also try to probe departures from our standard Big Bang cosmology. You might look for decaying particles that decay at temperatures below, let's say, 10^5 Kelvin, or you have additional contributions, which in this case you might want to think of as a foreground because we can't really model it too well, from reionization and structure formation. The good thing is that there's information in the spectrum, so you can to some extent disentangle the various contributions. There are also some experimental sensitivities shown here, which are somewhat optimistic in this case; the experiment shown isn't really funded. Let's remember the magnitude: the spectral distortions are of order 10^-25 or so in these units, let's say, just to give you a ballpark, and if you don't remember the normalization of the blackbody spectrum, I'm showing it here again, so you see that these distortions are at a very low level. We have fairly good bounds; what's plotted here is not the measurement but the theory, but it looks essentially the same as the measurement would. So we have a very good measurement from FIRAS, and there are some constraints at the 10^-3 to 10^-4 level on departures, some
spectral distortions, and if we want to see something new, we have to go even lower. So we don't expect the CMB to be a perfect blackbody, but to very good precision it should be one, and then there are in principle departures from it that one can look for, and hopefully we'll be able to look for them sometime in the near future.

Even though these departures are so small, in principle the hope is that we can actually measure them. There's a proposal called PIXIE; this is a proposal in the US. It will be proposed again: the call should be in September, a call for a MIDEX mission, a class comparable to what WMAP was; the proposal should go in by December, and then hopefully sometime next year we would know whether something like PIXIE would fly. PIXIE (I'll talk more about other experiments later) is effectively a Fourier transform spectrometer; it's essentially the same instrument as FIRAS, built also by the same people, but you can make it much more sensitive than FIRAS just by using today's technology; there are better blackbodies than there were at the time. There was also a proposal that wasn't selected, much more ambitious than PIXIE: PRISM. I don't know what the current status of spectral distortions in Europe is; Enrico is shaking his head, so this certainly wasn't selected. There are proposals for additional future satellite missions in Europe, but typically the timescale for them is relatively late, something like 2034 or 2035. For PIXIE, what's nice is that, to be completely fair, it was proposed before and wasn't selected, but it wasn't turned down because the technology was felt not to be ready; it just lost to other fields. So the hope is that, re-proposing it now, it actually has good chances, and if it were selected next year, it should be able to fly by around 2023. It wouldn't get down to everything, certainly not to the recombination lines and so on, but it would give you three or four orders of magnitude better constraints than the bounds from FIRAS.

So far I've only talked about the monopole, and there are interesting things to do with the monopole; tomorrow we'll finally look at the perturbations, which is where there's been a lot more activity. As you can tell, this would be the first experiment to look for these signatures since FIRAS, so it's been a long time, 25 years since they were last measured, and it would be a good time, I think, to look at this again. For the perturbations, we'll see, there are many more experiments and much more activity, but hopefully this field will eventually see some activity again too. So I'm finishing a little bit early, but if there are questions about any of the things I said (I was probably a bit too fast), just ask me. And then, yeah: do we have constraints on particles that annihilate or decay during recombination?
Yes, definitely. This is something I was trying to show you can look for; for example, one of the lines here was decaying particles around that time, so this definitely leads to distortions in the spectrum of the cosmic microwave background. In principle there are also constraints from the annihilation of dark matter particles, or some species of particles; typically the constraints on those are stronger from the angular power spectra than from the spectral distortions, but there are definitely constraints on particles that decay around that time and also on particles that annihilate around that time, both from the spectrum and from the perturbations.

Yeah, so the idea is that you have some functional dependence; sorry, I'll repeat the question. The question was: given that there are a number of processes that could in principle introduce spectral distortions, how would you tell which one you saw? One of the things I briefly mentioned: the mu distortions certainly have a characteristic shape; they correspond to a chemical potential, so that is something you can identify in a relatively clean way. Then there is a set of distortions that people call y distortions, roughly the shape shown here, but depending on exactly how the process happens, the functional dependence of the distortion on frequency varies; it does know about the underlying process. If it's a pure y distortion, then you're just dominated by foregrounds, by effects from reionization and structure formation, but if it's not a pure y distortion, if it's something that people call an intermediate distortion, then in principle you can distinguish between the different processes. They have characteristic shapes that depend on the process, so in principle there is some hope of disentangling them just from the frequency dependence.

Yeah, so, particles in the early universe: certainly you have a lot of other particles around. I was talking about the period roughly below 10^9 Kelvin, so I was always assuming that I'm at temperatures low enough; 10^10 Kelvin is something like an MeV, so everything I'm talking about here is after nucleosynthesis, after electron-positron annihilation. At these low temperatures you don't really have the muons or the pions around anymore, but certainly if you go to earlier times, there should have been a quark-gluon plasma, so all these states should have been around, and in principle, if there are particles beyond the Standard Model and you go to even higher temperatures, they should have been around too. Here I've for the most part been talking about low temperatures, below 10^9 Kelvin, where electrons and positrons have already annihilated and you really only have the electrons, protons, and helium nuclei in the plasma, but at earlier times you certainly do have additional particles in the plasma.

You said, and the people who gave the talks before you also said, that from our observations we see that the universe is almost flat. My question is: given that our observations are bounded by the horizon, would it not be proper to say that it's flat inside the horizon that we see, and that beyond the horizon there's a possibility it may not be flat? So I think whenever we quote these things, we're only talking about our observable universe. Certainly anything that's further out, that we have no access to, we haven't seen; we haven't seen anything beyond the last scattering surface, so we don't know. All the statements we're making are about the universe we observe. Then you might ask whether, in some bigger theory, you can predict whether patches should be open or closed, if there's some landscape of these things; it's something one can think about, and it depends on the model. Certainly in inflation, the idea is that inflation is really a dynamical mechanism that drives you toward a spatially flat universe, and you do have small fluctuations, but at a level that you can typically compute to be, let's say, 10^-4 or something like this. So there are departures from flatness just from the fluctuations; depending on the model you can compute them, and they could be large or small, but the measurements certainly only refer to the part of the universe that we've actually seen. Thank you.

In this figure, most of the sources have these two regions; why is that? So, this is maybe a little bit misleading. This is a log plot, as you can see from the units; it doesn't say log here, but it's really a log plot. If you don't put the distortion on a log plot, if you plot ΔI as a function of frequency (that's probably too small, right? Is it okay?), it will typically look something like this: there's a positive part and a negative part, and you're plotting the absolute value here, so that's why it looks the way it does. If not, we can thank Raphael.
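As an aside to the lecture, the equilibrium Saha estimate discussed above can be checked with a short numerical sketch. This is a minimal illustration, not the lecturer's code: it ignores helium, and the inputs (a present-day baryon density n_b0 ≈ 0.25 per cubic meter, corresponding to Ω_b h² ≈ 0.022, and T0 = 2.725 K) are assumed values, not numbers taken from the slides.

```python
import math

# Physical constants (SI units)
k_B  = 1.380649e-23           # Boltzmann constant [J/K]
hbar = 1.054571817e-34        # reduced Planck constant [J s]
m_e  = 9.1093837e-31          # electron mass [kg]
B    = 13.6 * 1.602176634e-19 # hydrogen 1s binding energy [J]

# Assumed cosmological inputs (not from the lecture's slides):
n_b0 = 0.25    # baryon number density today [m^-3], from Omega_b h^2 ~ 0.022
T0   = 2.725   # CMB temperature today [K]

def saha_xe(T):
    """Free electron fraction X_e from the Saha equation
    X_e^2 / (1 - X_e) = S(T), ignoring helium."""
    n_b = n_b0 * (T / T0) ** 3   # baryons redshift like 1/a^3, and T ~ 1/a
    S = (m_e * k_B * T / (2 * math.pi * hbar ** 2)) ** 1.5 \
        * math.exp(-B / (k_B * T)) / n_b
    # Positive root of the quadratic X_e^2 + S X_e - S = 0
    return (-S + math.sqrt(S * S + 4 * S)) / 2

def recombination_temperature(target=0.5):
    """Bisect for the temperature where X_e drops to `target`
    (X_e is essentially 0 at 2000 K and essentially 1 at 6000 K)."""
    lo, hi = 2000.0, 6000.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if saha_xe(mid) < target:
            lo = mid   # crossing lies at higher temperature
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Should land between 3,000 and 4,000 K, as quoted in the lecture
print(f"X_e = 1/2 at T ~ {recombination_temperature():.0f} K")
```

The sharp Boltzmann factor e^{−B/kT} is why the answer sits far below B/k ≈ 1.6 × 10^5 K: the enormous photon-to-baryon ratio means even the exponential tail of the Planck distribution can keep hydrogen ionized until the temperature has fallen to a few thousand Kelvin.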