So good morning. First of all, just an announcement: my lecture slides from yesterday are now all online on the course website. And today's and tomorrow's lectures will be posted as soon as I get them into PDF format. So if there's any slide that you want to refer back to, you can just download it online and not have to scroll through the entire YouTube video of my talks. As a reminder of where I left off yesterday, we were talking about using supernovae as standard candles to attempt to put constraints on omega matter and omega lambda in an assumed lambda plus matter cosmology. As you see, the error ellipse that you derive from the fluxes and redshifts of the Type Ia supernovae puts you squarely into the accelerating region of the omega matter, omega lambda plane. But as you see, the ellipse straddles the kappa equals 0 line, the one that goes diagonally from corner to corner. So the supernova data in themselves don't tell you whether the universe is positively curved or negatively curved. We're going to need another source of information that contains information about omega matter and omega lambda in some other combination. And that brings us not to standard candles, objects of known luminosity, but to standard yardsticks, sometimes called standard rulers, sometimes called standard meter sticks. These are simply objects whose physical size you know. That is, you know the proper distance from one edge to the other of one of these objects. So here's the observer looking at, in this case, a standard ruler. It's at a co-moving coordinate distance r, and it's aligned so that the angular distance between its ends, as seen by the observer, has some value delta theta. I'm calling it delta theta because it's probably going to be a small angle, because we're going to be looking at things that are far away. So this standard yardstick emits light. Otherwise, we wouldn't see it.
And we're assuming that you can measure its redshift. Maybe it has absorption or emission lines. So the redshift z is measurable. And you're assuming that it's also resolved in angle so that you can measure this angle delta theta from one edge of the standard yardstick to the other. Well, if you know the physical size of the object from one edge to the other, you can compute a function that cosmologists call the angular diameter distance, because you derive it from the angular diameter. It's just defined as the physical size l of the object in megaparsecs or meters, whatever your favorite units are, divided by the angular diameter of the object, delta theta, measured in radians. This function is defined in this way because, of course, in the small angle limit, the angular diameter distance d sub a is equal to the proper distance d sub p in the limit that space is Euclidean and static. So just as the function that we call the luminosity distance is equal to the proper distance in the limit that space is static and Euclidean, this function will tell you the proper distance if space is Euclidean and static. If space is not Euclidean and static, well, if it's a space that's described by the Robertson-Walker metric, you can use the Robertson-Walker metric to find the distance ds between the ends of the yardstick at some fixed time t. And let's call it t sub e, the time that the light we observe from that glowing standard yardstick was emitted. Well, from the Robertson-Walker metric, it's the scale factor at the time the light was emitted times this function s sub kappa, which is equal to the co-moving coordinate distance r if space is Euclidean, smaller than r if space is positively curved, greater than r if space is negatively curved. And you multiply that by delta theta, and that tells you the distance between the ends of the yardstick at the time the light was emitted. But of course, by hypothesis, this is a standard yardstick and we know that distance l.
So we can now compute what the angular diameter distance is for an object in a space described by the Robertson-Walker metric. It's just the scale factor at the time the light was emitted times the function s sub kappa. Since the scale factor at the time of emission is 1 over 1 plus z, the observed redshift of the object, this gives you an interesting relation between the angular diameter distance that you compute for a standard yardstick and the redshift z that you measure for your standard yardstick. And of course, if space is nearly flat, and we have observational evidence that we do live in a nearly Euclidean space, then the angular diameter distance is just equal to, well, in this case, the function s sub kappa is just r, the co-moving coordinate distance, which we've normalized so that it's just the proper distance today, what we call the real distance, the distance you would measure if you could stop the expansion of space and stretch a perfectly taut measuring tape between you and the distant object. Now, 1 plus z is greater than 1 in an expanding universe. And so the angular diameter distance that you compute for a standard yardstick in a flat space is always going to be an underestimate of the proper distance to that object. As you'll recall, the luminosity distance that you get from a standard candle is always going to be an overestimate of the proper distance. However, it's an underestimate or overestimate that you know, because you know the redshift z of this object. And, well, the proper distance is something that you can compute in a space with known omega matter and omega lambda. And here the plot is the computed angular diameter distance in three Euclidean universes. The bottom one, that's where all the energy density is in matter. Omega matter equals 1. The top one, all the energy density is in a cosmological constant. Omega lambda equals 1. And the intermediate one, that's our benchmark model with omega matter of about 0.31 and the rest in lambda.
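As a numerical aside, the flat-space relation here, d_A = d_p(t_0)/(1 + z), can be sketched in a few lines. This is only an illustrative sketch: it assumes the benchmark omegas (0.31 matter, 0.69 lambda), measures distances in units of the Hubble distance c/H0, and uses a plain trapezoid rule for the integral of dz'/E(z').

```python
# Hedged sketch: angular diameter distance vs. redshift in a flat
# matter + lambda universe, d_A = d_p(t0) / (1 + z). The omegas are the
# benchmark values from the lecture; distances are in units of the
# Hubble distance c/H0, and the integral is a plain trapezoid rule.

def E(z, Om=0.31, OL=0.69):
    """Dimensionless Hubble parameter H(z)/H0 for flat matter + lambda."""
    return (Om * (1 + z) ** 3 + OL) ** 0.5

def proper_distance(z, n=10000):
    """d_p(t0) in units of c/H0: the integral of dz'/E(z') from 0 to z."""
    dz = z / n
    zs = [i * dz for i in range(n + 1)]
    total = sum(1.0 / E(zz) for zz in zs) - 0.5 * (1.0 / E(0.0) + 1.0 / E(z))
    return total * dz

def angular_diameter_distance(z):
    """d_A for flat space: proper distance today divided by (1 + z)."""
    return proper_distance(z) / (1.0 + z)

# d_A grows linearly at small z (Hubble's law), then turns over:
for z in (0.1, 1.0, 5.0):
    print(z, round(angular_diameter_distance(z), 4))
```

For a matter-containing universe, the printed values show the turnover at intermediate redshift that the lecture's plot displays.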
So you see the angular diameter distance starts out being linear in z, Hubble's law. But it bends over. Notice that in the case of a lambda-only universe, a de Sitter universe, the angular diameter distance continues to grow until it asymptotes at 1. Here we're measuring distances in units of the Hubble distance. But for a universe containing matter, all matter or partially matter, the angular diameter distance that you measure has a maximum at some intermediate redshift and then decreases again as you go to higher redshift. There's a question, asking about clumpy matter. If you have clumpy matter, you have gravitational lensing effects, which are interesting, but not part of this talk. If you are in a universe where everything becomes homogeneous and isotropic when you smooth on large enough scales, and you look at objects at proper distances larger than the homogeneity scale, then it averages to this. You need a standard yardstick, an object whose physical size you know. A good standard yardstick is hard to find. People have tried using galaxies or clusters of galaxies as standard yardsticks. But there you have evolution of galaxies, evolution of clusters with cosmic time. It becomes really annoying. However, the best standard yardsticks that cosmologists have found are the hot and cold spots on the cosmic microwave background. Remember, if you look out in space in microwave frequencies, you look through the transparent universe, and you're basically looking at a last scattering surface when you look at the cosmic microwave background. Astronomers call it an inside-out photosphere. When you look at the sun, you are looking at a relatively thin layer where the photons undergo their last scattering from free electrons. And similarly, when you look at the last scattering surface in microwaves, you're looking at a relatively thin layer where the photons of the cosmic microwave background underwent their last scattering from a free electron.
Now, I'm going to make an assertion that you will have to accept for about 20 or 30 minutes. I'm going to assert that the typical CMB photon last scattered from a free electron when the temperature of the universe was a little under 3,000 Kelvin, around 2,970, and I will, in fact, justify that temperature later on. That's the temperature at which the number density of free electrons dropped sufficiently low that the probability of a photon scattering from them became less than 1. So this is the result of, as we'll see a little later on, very well understood physics. It's an equilibrium situation. All the calculations are simple. So this is a temperature that we know quite well. We also know very well the temperature of the CMB today, 2.7255 Kelvin, a number that I am probably muttering in my sleep by this point. And so we can now compute the redshift of the last scattering surface. 1 plus z sub ls, the redshift of the last scattering surface, is just the inverse ratio of the scale factors, which is just the ratio of the temperatures, the higher temperature back then divided by the lower temperature right now. And it turns out to be about 1,090. So although the spectrum of the cosmic microwave background doesn't have those convenient absorption and emission lines from which we measure the redshift of galaxies, because it involves very well understood physics of recombination, we can compute the last scattering redshift very well. And in fact, the Planck 2015 results tell us that the redshift is 1,089.90 plus or minus 0.23. So we know the redshift part of this equation very well. Now we just need to know the angular diameter distance. It would be convenient if the anisotropy map of the cosmic microwave background were imprinted with nice circular, well-defined polka dots, all of easily measured angular scale. But as you see here, it's actually a random Gaussian field. That's OK, because we can break down the temperature fluctuations delta T in spherical harmonics.
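That inverse ratio of scale factors is a one-line computation, using the two temperatures quoted above:

```python
# Sketch: the last-scattering redshift from the inverse ratio of scale
# factors, i.e. the ratio of temperatures, with the lecture's values.
T_ls = 2970.0    # K, temperature at last scattering (asserted above)
T_now = 2.7255   # K, CMB temperature today
z_ls = T_ls / T_now - 1.0
print(round(z_ls))   # comes out near the quoted 1,090
```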
And for each spherical harmonic moment l, we can find the correlation function of the temperature fluctuations. And you saw this, or a version of this, yesterday a couple of times in the dark matter lectures. Remember, low angular moment l means large angles. And as you go left to right, you're going to smaller and smaller angles. So the correlation function does have a very well-defined peak. So there is a preferred angular size in this problem, l of about 220. This highest peak, the first peak in the correlation function, has delta theta of about 0.8 degrees, or expressed in radians, 0.014. So it is, in fact, a small angle. And probably, especially from the back of the room, you might not be able to persuade yourself that there's a preferred scale such that there are, let's see, about 400 dots from side to side. It's there in the correlation function. So the temperature fluctuations, those hot and cold spots on the cosmic microwave background, provide a very useful statistical yardstick. You have to do the statistics, compute the correlation function, but there's that very clear peak at an angular size of 0.8 degrees. So we can measure delta theta from the cosmic microwave background. To compute the angular diameter distance, we need to know what physical scale that peak in the temperature correlation function corresponds to. Well, fortunately, as I said before, the physics of last scattering and the production of the cosmic microwave background is pretty well understood. And that first peak in the correlation function results from the presence of standing acoustic waves in the photon-baryon fluid that filled the universe before recombination. And I've put up this cartoon version of recombination to remind you that, on the left, before electrons and protons combined to form neutral hydrogen atoms, the photons and the free electrons frequently interacted, colliding through the Thomson cross-section of the free electron. So the photons and electrons were coupled together.
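The connection between the multipole l of about 220 and the angle of about 0.8 degrees is the usual rough small-angle rule, delta theta of roughly 180 degrees over l. A quick sketch of that conversion:

```python
# Sketch: converting the first peak's multipole, l of about 220, to an
# angular scale with the rough rule delta_theta ~ 180 degrees / l.
import math

l_peak = 220
dtheta_deg = 180.0 / l_peak           # degrees
dtheta_rad = math.radians(dtheta_deg)
print(round(dtheta_deg, 2), round(dtheta_rad, 4))   # ~0.8 deg, ~0.014 rad
```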
And the electrons were coupled to the free protons by the electrostatic attraction between them. And so photons, electrons, baryons, in this case the protons, are all coupled together into a single fluid, the baryon-photon fluid, it's called. And this fluid, like any sort of gas or liquid, supports sound waves. So the baryon-photon fluid is affected by gravity. It falls towards the minima in the gravitational potential provided by the dark matter. And as the photon-baryon gas falls inward, it becomes compressed to higher pressures. It becomes over-compressed and bounces back. And then it starts to oscillate in and out. And of course, I have to move my arms back and forth while discussing baryon acoustic oscillations, because it's just something that you do. So you have these standing acoustic waves. And well, what is this going to mean for the cosmic microwave background? When the universe becomes transparent, those regions where the compression was at a maximum are going to have a higher energy density, and thus a higher temperature, as seen in the cosmic microwave background today. And those regions where the photon-baryon fluid was at maximum rarefaction, those are at low energy density, and will be seen as the cool spots in the cosmic microwave background today. So the first peak in the temperature correlation function represents those regions where the photon-baryon fluid had just enough time to reach maximum compression by the time of last scattering and not bounce back at all. So that's where you have maximum compression and the highest temperature fluctuations. And well, without going into all the details.
But if there's going to be a physical size l associated with these standing acoustic waves, well, the ones that have just had time to reach maximum compression will have been compressed over a distance comparable to the sound horizon distance at the moment of last scattering. Just as the particle horizon distance is the maximum distance that a relativistic particle, like a photon, can have traveled by a time t, the sound horizon distance, by analogy, obviously, is the maximum distance that a sound wave can have traveled. So it's the same formula that you use to find the particle horizon distance, only you're substituting in the sound speed, c sub s, s for sound, instead of the speed of light. And since the sound speed can be a function of time, you put it inside the integral. Now, in theory, the sound speed is a function of time. In practice, by the time of last scattering, the energy density of photons is still higher than the energy density of the baryons. It's less than the energy density of the dark matter, but baryons, of course, provide only about a fifth or a sixth of the total rest energy of the matter. And so at the time of last scattering, photons are still dominant over the baryons. And we can approximate the sound speed of the photon-baryon gas as being the sound speed in a pure photon gas, c over the square root of 3. The sound speed slows down a little bit as you get closer to last scattering because of the contamination of those massive baryons. But it's a reasonable approximation. And when you do this calculation of the sound horizon distance at the time of last scattering, it comes to about 0.145 megaparsecs. So this is a result that has some dependence on omega matter, because that enters into the scale factor that you're integrating over. But if you do the more accurate calculation, you get a quite accurate physical distance l for the physical size of that peak in the correlation function for the cosmic microwave background.
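A crude version of that calculation can be sketched numerically. Everything here is an assumption for illustration: the pure-photon-gas sound speed c over root 3 (ignoring the baryon loading just mentioned), benchmark omegas, an H0 of about 67.9 km/s/Mpc (a 14.4 gigayear Hubble time), and last scattering at 1 plus z of 1090. Lambda is negligible this early, so the comoving integral has a closed form.

```python
# Hedged estimate of the sound horizon at last scattering, with assumed
# parameter values (benchmark omegas, H0 ~ 67.9, c_s = c/sqrt(3)).
import math

c = 299792.458            # speed of light, km/s
H0 = 67.9                 # km/s/Mpc (assumed)
Om = 0.31                 # matter density parameter
Orad = 9.0e-5             # radiation: photons + relativistic neutrinos (assumed)
a_ls = 1.0 / 1090.0       # scale factor at last scattering

cs = c / math.sqrt(3.0)   # sound speed in a pure photon gas

# comoving sound horizon: (cs/H0) * integral_0^a_ls da / sqrt(Orad + Om*a),
# where the integral has the closed form below
integral = (2.0 / Om) * (math.sqrt(Orad + Om * a_ls) - math.sqrt(Orad))
r_comoving = (cs / H0) * integral    # Mpc
r_physical = a_ls * r_comoving       # Mpc, at the time of last scattering
print(round(r_physical, 3))          # lands near the quoted 0.145 Mpc
```

It comes out a few percent high because the baryon loading that slows the sound speed is left out, which is exactly the correction the more accurate calculation includes.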
So: an easily measurable delta theta, 0.8 degrees, and a length for your standard yardstick that you can calculate from relatively simple physics. It's all about electrons scattering from photons, nice everyday simple physics by comparison to some things we'll be talking about over the next couple of weeks. So, standard yardstick: you can compute what the physical size must be associated with the first peak of the CMB correlation function. You can measure the angular size, 0.014 radians, and you do the division. And it conveniently comes out to about 10 megaparsecs, which of course is a shockingly small distance for something at such a huge redshift. But remember, this isn't the distance today. It's the proper distance divided by 1 plus z, assuming space is nearly flat. And so it's actually the proper distance at the time of last scattering, the time that those photons underwent their last scattering. In fact, they were 10 megaparsecs away from us, proper distance. And then they just traveled along lines of constant phi and theta to reach us today. We have an angular diameter distance. We know the redshift very well. And now the problem is just to find those values of omega matter and omega lambda today that yield an angular diameter distance of 10 megaparsecs. So the very useful thing about using the hot spots of the CMB as standard yardsticks is that it turns out the angular diameter distance is quite sensitive to the curvature of the universe. So using the CMB for a standard yardstick gives you a very sensitive probe of the sum of omega lambda and omega matter today. If their sum is greater than 1, that means we're in a positively curved universe. And positively curved space acts as a magnifying lens. It makes the hot spots on the CMB look larger in angle and thus gives you a smaller angular diameter distance. So if you make the curvature too big, or rather the radius of curvature too small, you are unable to get an angular diameter distance of 10 megaparsecs.
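The division, and the corresponding proper distance today in flat space, using the lecture's numbers:

```python
# Sketch: angular diameter distance to the last-scattering surface from
# the lecture's numbers, and the flat-space proper distance today.
L = 0.145          # Mpc, sound horizon at last scattering
dtheta = 0.014     # radians, angular size of the first peak
z_ls = 1090.0      # redshift of last scattering

d_A = L / dtheta                  # Mpc; about 10, as quoted
d_p_today = (1.0 + z_ls) * d_A    # Mpc; the proper distance today
print(round(d_A, 1), round(d_p_today))
```

The roughly 11,000 megaparsecs today, versus 10 megaparsecs then, is just the factor 1 plus z by which space has stretched since last scattering.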
You end up getting smaller angular diameter distances. Conversely, if omega lambda plus omega matter today is less than 1, that means we live in a negatively curved universe, and negatively curved space acts as a de-magnifying lens. So you get smaller angular sizes for those hot and cold spots on the CMB. This means that you calculate an angular diameter distance that's going to be too big. So the angular diameter distance versus redshift test is quite sensitive to the sum of omega matter plus omega lambda. There are other physical considerations that enter into the calculations. But the net result is this. This very elongated, very skinny ellipse is the 95% confidence interval for the CMB results, using that first peak in the cosmic microwave background's temperature correlation function as a standard yardstick. You'll notice it's not lying exactly along the kappa equals 0 line. There are, as I mentioned, other considerations that enter into the calculation other than purely the curvature of space. However, you'll notice the red ellipse, using the first peak as a standard yardstick, is nearly orthogonal to the blue ellipse, using Type Ia supernovae as standard candles. They're measuring different combinations of omega matter and omega lambda. This is good. And it's the combination of the supernova standard candles and the CMB temperature peak standard yardstick that leads to a spatially flat, or nearly flat, benchmark model with omega matter of about 0.3 and omega lambda of about 0.7. So the CMB alone tells you, yeah, it's kind of flat. The supernovae alone say, yeah, it's accelerating. It's the combination of the two that gives us a model for the universe that is both flat and accelerating. From one point of view, yay, life is good. Everything fits together neatly in what I call a benchmark model. Sometimes people refer to a concordance model or a consensus model. But I like to emphasize that it's a benchmark. It's a standard for measuring things in the universe, like distances and distance measures of different kinds.
It's consistent with the various observations we have of the supernovae and the cosmic microwave background. And so what's in it? My first caution to you: don't memorize all this. The exact numbers are, I'm certain, going to change with the next Planck data release. But let's look at it as a benchmark. The photons of the cosmic microwave background, we've already seen, provide just a small fraction of the density today, 53.5 parts per million. The neutrinos, well, important footnote: if neutrinos were massless, then they would provide 36.5 parts per million of the energy density today. Neutrinos do have mass. The neutrino oscillations tell us that the mass states are different from each other, and therefore they can't all be zero. And so today, neutrinos have defected to the matter army from the radiation army. But they did so only recently. They count as hot dark matter. And so back when the universe was radiation dominated, the neutrinos were, in fact, relativistic and were radiation back then. The baryonic matter density: 4.8% of the total energy density today. Again, later in the lecture, I'll show very briefly why we expect this number to be 4.8%. The total matter density, the combination of CMB plus supernovae says 0.31, is in agreement with observations. So 0.31 minus 0.048 means that about 26% of the energy density today is provided by dark matter, the bulk of which has to be cold rather than hot. The benchmark model, I assert, is my model, so I can make it perfectly flat. So I assert that my benchmark model is perfectly flat. Everything sums to 1 here. And so omega lambda is 1 minus the omega of everything else. Since radiation is such a small contributor, we can say that the cosmological constant has omega of 0.69 today. So again, don't memorize this. If you want to refer to it later, my notes are online. But this is the combination of omegas that I'm going to be using for this talk and the next one. So the cosmological principle says there are no special locations.
The universe is homogeneous once you average on large scales. And the length scale I use is 100 megaparsecs, a suspiciously round number. I'm just saying with that statement that once you smooth on scales of 100 megaparsecs, the remaining density fluctuations are very small in amplitude, much less than 1. There are, however, special times. Here we have the energy density of the different components as a function of scale factor. Early in the universe, when the scale factor was small, the radiation, whose density goes as scale factor to the minus 4, was dominant. Later, the matter density, which falls off as a to the minus 3, was dominant. And now, where the scale factor is 1, you'll note we've only just entered the region where the cosmological constant lambda is dominant. So the radiation falls as a to the minus 4. Matter falls as a to the minus 3. So you can compute the scale factor of radiation-matter equality. That's just the ratio of the radiation omega to the matter omega today. And for my benchmark model, it's around 3 times 10 to the minus 4. Well, with matter falling as a to the minus 3 and the cosmological constant, by definition, being constant, you can compute the scale factor of matter-lambda equality. It goes as the ratio of their omegas to the one-third power, because they differ by a factor of a to the 3 in their dependence. And once you take that cube root, you find that this is a scale factor not all that much smaller than 1, about 0.766 for the benchmark model. Here's the energy density of the different components as a function of scale factor, with the special times of radiation-matter and matter-lambda equality marked out. Once you have values for the radiation density, the matter density, and the lambda density today, you can plug them into the Friedmann equation, which tells you how the scale factor varies with time. So now you integrate, and you find what the scale factor is as a function of time.
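The two equality scale factors are simple ratios of the benchmark densities; a sketch, with the omega values assumed from the benchmark model above:

```python
# Sketch: the two "special time" scale factors from the benchmark densities.
Orad = 9.0e-5   # radiation (photons + neutrinos while relativistic)
Om = 0.31       # matter
OL = 0.69       # cosmological constant

a_rm = Orad / Om              # radiation-matter equality: densities differ by 1/a
a_mL = (Om / OL) ** (1.0/3)   # matter-lambda equality: densities differ by 1/a^3
print(a_rm, round(a_mL, 3))   # around 3e-4, and about 0.766
```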
Notice, before radiation-matter equality, you had a radiation dominated universe. Only the radiation term was significant on the right hand side of the Friedmann equation. That gives you a power law dependence of scale factor on time, and it's t to the one-half power for a radiation dominated universe. In the middle of the matter dominated era, again, you have only one term significant on the right hand side of the Friedmann equation, and you get the power law t to the two-thirds for the growth of the scale factor in the matter dominated epoch. And finally, today, we're entering the lambda dominated phase where, eventually, the scale factor will grow exponentially. Notice on this plot, there's a bit of a coincidence problem, isn't there? Now, t sub 0 is really close, at least on this logarithmic plot, to the time of matter-lambda equality. And there's been some discussion of, well, is this just a coincidence, or is there some deep physical meaning? I don't know. Now, when the universe was dominated by radiation and by matter, the expansion was decelerating. Once lambda became significant, the expansion started speeding up. And amusingly, the early period of deceleration almost exactly cancels out the later period of acceleration. And it turns out that the age of the universe t sub 0 is really quite close to the Hubble time, about 95.5% of the Hubble time for the benchmark model. So in fact, in the very first lecture, when I said the age of the universe must be twiddle the Hubble time, that was a pretty good twiddle. So I promised that today I would be talking about special times in the history of the universe. We've thrown away the perfect cosmological principle. We allow certain times to be special. And by looking at our three-component benchmark model, radiation, matter, and lambda, we've identified a couple of special times, the times of radiation-matter equality and of matter-lambda equality. And from the Friedmann equation, we can compute the dependence of scale factor on time.
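That near-cancellation of deceleration and acceleration can be checked by integrating the Friedmann equation for the age in units of the Hubble time. A hedged sketch, assuming the benchmark omegas and a simple midpoint rule (the integrand goes to zero as a goes to 0, so the lower limit is harmless):

```python
# Hedged sketch: t0 * H0 = integral_0^1 da / (a * E(a)), with
# E(a) = sqrt(Orad/a^4 + Om/a^3 + OL). Benchmark omegas assumed.
def E(a, Orad=9.0e-5, Om=0.31, OL=0.69):
    return (Orad / a**4 + Om / a**3 + OL) ** 0.5

def age_in_hubble_times(n=200000):
    da = 1.0 / n
    total = 0.0
    for i in range(n):
        a = (i + 0.5) * da      # midpoint of each subinterval
        total += da / (a * E(a))
    return total

t0_over_tH = age_in_hubble_times()
print(round(t0_over_tH, 3))   # close to the quoted 95.5% of the Hubble time
```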
And so for a given value of the Hubble time, like 14.4 gigayears, we can compute, for the time of radiation-matter equality, both the redshift, 1 over the scale factor minus 1, and the corresponding cosmic time. So radiation-matter equality took place at a redshift of about 3440. That corresponds to a time of 50,000 years, a lot smaller than 14.4 gigayears. But of course, the radiation-dominated epoch of the universe, those first 50,000 years, is when the energy per particle was highest. There was a lot of interesting physics going on in the early universe in the first 50,000 years or so. Matter-lambda equality: you compute the redshift from the scale factor, and it's only a redshift of 0.3 or so. Not very large. And most modern redshift surveys of galaxies can go out to z of 0.3 or so, no problem. And it corresponds to a cosmic time of 10.4 gigayears. So if you compare that to now, where the age of the universe is 13.7 gigayears, we find that matter-lambda equality was about 3.3 gigayears in the past. So you can judge for yourself how much of a coincidence that is. Now, I said the times of radiation-matter equality and matter-lambda equality are special. They're not that special. Now, so now is not particularly special. The development of intelligent life on one particular planet doesn't have any cosmological consequences. We can produce dramatic changes in the climate here on Earth, and it looks like we're proceeding to do just that. However, we still haven't screwed up Mars, for instance, much less anything 100 megaparsecs away from us. As for radiation-matter equality and matter-lambda equality, they're not all that special either. You look at the curve of scale factor versus time, and there isn't an abrupt inflection in the scale factor curve at those times, just a gradual change. If you look at galaxies at z greater than 0.31 and galaxies at z less than 0.31, there's no dramatic change in their properties at that particular epoch.
They don't say, oh my god, lambda's now dominant, I have to do something different. It's a smooth, gradual change. And so today, when I'm talking about special times in the universe, I'm going to talk about extremely special epochs where you have a dramatic change in properties. The time of last scattering, when you went from having free electrons and free protons to having neutral atoms. The time of Big Bang nucleosynthesis, when you went from having free protons and free neutrons to having atomic nuclei with multiple baryons in them. And I will very briefly mention the epoch of inflation early in the history of the universe. But I'll just give a quick motivation, and you'll have to wait for the later lecture series on inflation to learn about the details. OK, last scattering. I stood here and said in my most authoritative voice, last scattering took place at a temperature of 2,970 Kelvin. And so you look only moderately convinced. But we can go through some relatively simple calculations to compute the time of last scattering. And in order to do my calculations, I'm going to make one simplifying assumption that is blatantly not true. I'm going to assume that the baryonic component of the universe is all hydrogen. Last scattering occurs well after Big Bang nucleosynthesis, so there would have been, and there was, helium back then. However, I'm just going to concentrate on hydrogen. The math becomes a lot simpler, and it captures all the essential physics. So this is the only assumption that I have to apologize for. I'm just doing it to keep things simple. In studying how the universe became transparent, it's useful to discriminate between recombination and last scattering. Sometimes people use the terms in a very casual, interchangeable manner, but their underlying physical meaning is different. The epoch of recombination is that time in the universe when the fractional ionization of hydrogen fell to exactly one half.
The fractional ionization started out as 1: all the hydrogen was completely ionized. It fell to 0: all the hydrogen was in the form of neutral atoms. So at some point in between, there was one instant when the fractional ionization x was equal to one half. That fractional ionization is the number density of free protons, n sub p, divided by the number densities of free protons and neutral hydrogen atoms, n sub p plus n sub H, added together. I'm assuming nothing but hydrogen, so the denominator there is the number density of baryons. And for charge neutrality, the number of free electrons per unit volume has to be the number of free protons per unit volume, again assuming nothing but hydrogen. The epoch of last scattering, I've already defined it. It's when the typical CMB photon last scattered from a free electron. So this obviously has to happen at some point when the fractional ionization is greater than 0. There have to be free electrons for the photons to scatter from. But there's no particular reason why the last scattering has to take place at x of exactly a half. So keep this distinction in mind. And I will try to speak carefully so as not to confuse the instant of recombination with the time of last scattering. They are closely related, but they're not identical. So when did recombination occur? When did last scattering occur? They're not exactly the same question. But in order to answer the question of when these two epochs occurred, we need to find what the fractional ionization is as a function of time, or equivalently as a function of scale factor. And we have to find, in addition, at what value of x last scattering occurs. Does it occur at x of a half, or at x of 99%, or x of 1%? We don't know, but hey, we can calculate. So, assumption: everything is hydrogen. I'm also going to assume, with more justification this time, that the fractional ionization x is determined purely by a balance between photoionization and radiative recombination. So here's the relevant equation.
Hydrogen atom H plus photon of sufficient energy goes to free proton and free electron. That's photoionization. In reverse, free proton plus free electron combine to form a neutral hydrogen atom, and the excess energy is taken away by a photon. That's radiative recombination. Assuming that the universe, at least its baryonic component, is pure hydrogen makes things simple, because now there's only one energy scale that's relevant to the problem: the ionization energy of hydrogen from its ground state, Q equals 13.6 electron volts. Or expressed in Kelvin, that's Q divided by the Boltzmann constant, about 160,000 Kelvin. I feel a little bit weak, a little bit low in energy, talking about 13.6 electron volts after that cosmic ray lecture that talked about 10 to the 21 electron volts. But one of the great things about physics is that there's interesting physics going on at a very wide range of energies. So I'm assuming it's all about photoionization and radiative recombination, and there's no collisional ionization to worry about, because there are a whole bunch of photons for every baryon in the universe. There are about 1.6 billion photons for every baryon in the universe. That's true today, even after stars have been cranking out photons. And it would have been true at the time of recombination and of last scattering as well. So collisional ionization, by bumping into another hydrogen atom? You can forget about it. It's just wildly implausible. You're far more likely to be ionized by colliding with a photon than with another hydrogen atom. So the temperature scale of the problem is 158,000 Kelvin. But I just asserted in my most authoritative voice that the temperature at the time of last scattering is below 3,000 Kelvin. That's less than 2% of the temperature scale Q over k that's built into the problem. So why was the temperature of last scattering so very, very low? Well, this is where we do our calculations. I'm going to make more assumptions, very strongly justified this time.
Before the time of last scattering, the hydrogen atoms, the free electrons, the protons, and the photons were all in a state of kinetic equilibrium, sometimes known as thermal equilibrium, because before last scattering all these different particles were frequently interacting with each other. They came to the same temperature T, which is very useful: you can use the same temperature for all these different particles. More specifically, kinetic equilibrium says that the distribution of energy and momentum for each particle type is given by either a Fermi-Dirac distribution or a Bose-Einstein distribution, depending on the particle's spin. So kinetic equilibrium can be assumed: the photons had a Planck distribution, and the hydrogen atoms, electrons, and protons, non-relativistic during this epoch, therefore had a Maxwell-Boltzmann distribution of particle momenta. In addition to kinetic equilibrium, we can safely assume that the reaction for photoionization and radiative recombination was in chemical equilibrium. That is, in any given volume of space, there were on average as many photoionizations as radiative recombinations, so the arrows were going equally well in both directions in this reaction equation. Now, obviously this isn't absolutely perfectly true. The whole point behind there being an epoch of recombination is that there's a gradual drift from right to left: you start out with free protons and free electrons, and you end up some time later with all neutral hydrogen atoms. But this transition was sufficiently gradual that at any given instant you can assume the equation is in chemical equilibrium. With these assumptions, the number densities of electrons, protons, and hydrogen atoms are given by the Saha equation. I was thinking of just slapping down the Saha equation and going from there, but it's important to keep in mind the assumptions that go into it. Saha derived this equation back in 1920.
And he was looking at the ionization states of stellar atmospheres; so astronomers once again provide us with interesting physics. Notice: the number density of hydrogen atoms divided by the product of the proton and electron number densities depends on the temperature T and on the ionization energy, again the only energy built into this problem. As T goes to 0, as the energy drops, the right-hand side becomes increasingly large, and neutral hydrogen atoms dominate over free protons and free electrons. So that's the Saha equation, and these equilibrium conditions justify using it. We can rewrite it in terms of that which we really want to know, the fractional ionization x. The number density of neutral hydrogen goes as 1 minus x; the number density of free electrons goes as x; and I've taken the number density of protons off to the right-hand side. So on the left-hand side is that which we want to know, the fractional ionization, and on the right-hand side are the temperature, the ionization energy, which we know, and the number density of protons. We need to get rid of that factor n sub p. We can do that by noting that the baryon-to-photon ratio, the number eta, is constant. With about 1.6 billion photons per baryon, eta, the inverse of that number, is 6 times 10 to the minus 10. Really small number; keep that in mind. So the number density of protons is the fractional ionization times the total number density of baryons, since we're assuming everything's hydrogen, and that's eta times the number density of photons. And the number density of photons? It's a Planck distribution, so you integrate it up, and it goes as T cubed. Plug that back in for the number density of protons, and you have something that looks good, because you have that which we're searching for, the fractional ionization, on the left-hand side, and on the right, something that is a function of things that we know: the temperature and eta. Now, I've asserted that eta is 6 times 10 to the minus 10.
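The slide itself isn't reproduced in the transcript; in standard textbook notation, the two forms of the Saha equation being described here are (with the photon number density n_gamma = 0.2436 (kT / hbar c)^3 and n_p = x eta n_gamma):

```latex
\frac{n_H}{n_p n_e} = \left(\frac{m_e k T}{2\pi\hbar^2}\right)^{-3/2} \exp\!\left(\frac{Q}{kT}\right)
\quad\Longrightarrow\quad
\frac{1-x}{x^2} = 3.84\,\eta \left(\frac{kT}{m_e c^2}\right)^{3/2} \exp\!\left(\frac{Q}{kT}\right)
```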
But you'll notice there's this exponential temperature dependence and only a linear dependence on eta. So if you solve this quadratic equation for the fractional ionization x, it's going to be only weakly dependent on the baryon-to-photon ratio eta. And OK, you can solve it. It's a quadratic equation; you take, obviously, the positive root, since the fractional ionization is a positive number. And you get this very nice curve, starting out at a fractional ionization of about 1 at a redshift of 1,600. Time goes from left to right here, toward lower redshift. And by the time you get to a redshift of around 1,100, the fractional ionization is very small indeed. So when is x equal to one half? Well, it's at a redshift of about 1,380, or a temperature of 3,760 Kelvin. And with our benchmark model converting this redshift to time, it's when the universe was a quarter of a million years old, 250,000 years. And as I mentioned, this result is weakly dependent on eta. Here I'm assuming the standard value eta of 6.1 times 10 to the minus 10, but changing eta only wiggles the curve back and forth a little bit. So we can now pinpoint the time of recombination. You'll notice it's a gradual transition, but going from, let's say, x of 0.9 to x of 0.1 takes a sufficiently short time that people talk about "the time" of recombination. It's actually a process, but on a cosmic time scale it doesn't take very long at all. And at any time during this transition you can assume equilibrium; the Saha equation holds true at any given instant. It starts to break down as you get to low redshifts, simply because the photons start to decouple from the baryonic component and you can no longer assume equilibrium. But at x of one half? Sure, make those equilibrium assumptions. So if we define recombination as the moment when x equals one half, you can compute when recombination occurred.
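To make this concrete, here is a short numerical sketch of the calculation just described (my own sketch, not the actual slide code): the pure-hydrogen Saha equation in the form (1 - x)/x^2 = 3.84 eta (kT/m_e c^2)^(3/2) exp(Q/kT), solved as a quadratic with the positive root. The constants and eta = 6.1e-10 are the standard values quoted in the lecture.

```python
import math

K_B   = 8.617333e-5   # Boltzmann constant [eV/K]
Q_ION = 13.6          # hydrogen ionization energy [eV]
ME_C2 = 510998.95     # electron rest energy [eV]
ETA   = 6.1e-10       # baryon-to-photon ratio (value quoted in the lecture)

def saha_x(T):
    """Equilibrium fractional ionization x for pure hydrogen.

    Solves S(T) x^2 + x - 1 = 0 for the positive root, where
    S(T) = 3.84 * eta * (kT / m_e c^2)^(3/2) * exp(Q / kT).
    """
    kT = K_B * T
    S = 3.84 * ETA * (kT / ME_C2) ** 1.5 * math.exp(Q_ION / kT)
    return (-1.0 + math.sqrt(1.0 + 4.0 * S)) / (2.0 * S)

print(saha_x(3760))   # ~0.5: the recombination temperature quoted above
print(saha_x(5000))   # essentially fully ionized
print(saha_x(3000))   # mostly neutral
```

Note how steep the transition is: a temperature change of less than a factor of two carries x from nearly 1 to well below 1 percent, which is why "the time of recombination" is a sensible phrase.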
But we also want to know when the last scattering of a typical CMB photon occurred. Well, you can do this by computing the rate of photon scattering: the number density of free electrons, times the relevant cross section, the Thomson cross section for the electron, times the speed of the photons relative to the electrons, which is, of course, c. The number density of electrons decreases with time, in part because of the dilution effect, going as 1 over the cube of the scale factor, and in part because the fractional ionization drops as a function of time. The other rate that we have to compare this to is the rate of expansion, given by the Hubble parameter. In general you have to take into account the radiation contribution, the matter contribution, and the lambda contribution. But a time of a quarter of a million years is well into the time of matter domination. Remember, matter took over from radiation when the universe was about 50,000 years old; this is 250,000 years. So it's a good approximation that the Hubble parameter only cares about the matter term, and it goes as 1 plus z to the 3 halves power. So during the matter-dominated epoch, the Hubble parameter drops fairly gradually, as scale factor to the minus 3 halves power. But the rate of photon scattering plummets: it goes as a to the minus 3 times the fractional ionization, which is, as you've seen, dropping quite rapidly in this epoch. And so we can pin down when last scattering occurs. In fact, it doesn't occur until a redshift of 1,090. This is when things are starting to fall out of equilibrium, so to do it right, as the Planck 2015 result does, you have to do the out-of-equilibrium calculation. But notice this is a redshift smaller than the redshift of recombination, and it's when the fractional ionization of hydrogen has dropped to 0.007, less than 1 percent. So why does last scattering occur at such a low temperature?
Number one, because of all of those photons per baryon, recombination is delayed: the photons out in the high-energy tail of the blackbody spectrum keep doing the ionizing. And even then, last scattering, because of the different parameters involved, the size of the Thomson cross section and so forth, is delayed until the fractional ionization is much smaller than a half, until it's 0.007. So last scattering hangs on until a redshift of 1,090 or so. I asserted that eta is 6.1 times 10 to the minus 10; so now, a little diversion back to the temperature correlation function of the cosmic microwave background. Earlier I mentioned that the first peak, the angular size associated with it, is sensitive to curvature. And so it is. Here, going from that greenish color to that pinkish color, you're going toward smaller and smaller omega, more and more negative curvature. And indeed, negative curvature acts as a de-magnifying lens: it moves the peak to smaller angles, and hence larger multipole moments l. However, the height of the first peak is quite sensitive to the baryon-to-photon ratio. The more baryons you put in, going from the pinkish color to the greenish color here, the more, well, the more squishy the photon-baryon fluid is. And so with more baryons, the photon-baryon fluid is compressed to higher densities and thus to higher temperature fluctuations. So the height of that first peak is really quite a sensitive test of what the baryon-to-photon ratio is. And here's the Planck number: 6.10 plus or minus 0.06, times 10 to the minus 10. You can convert that to a number density of baryons, about one baryon for every four cubic meters in the universe at present. And you can convert it into an omega for baryons; this is where the number 0.048 for omega baryon comes from, 0.048 plus or minus 0.003. Most of that uncertainty comes from the fact that we don't know the critical density all that well.
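Both headline numbers in this stretch, the last-scattering redshift and omega baryon of about 0.048, can be checked with a short sketch. Everything here is a rough benchmark-model assumption on my part (H0 of about 68 km/s/Mpc, Omega_m of 0.31, pure hydrogen, equilibrium Saha ionization, matter-dominated expansion); the equilibrium treatment lands near z of 1120 rather than the Planck out-of-equilibrium value of 1090, which is exactly the "falling out of equilibrium" caveat mentioned above.

```python
import math

# Assumed benchmark-model constants (not taken from the slides)
K_B_EV  = 8.617333e-5   # Boltzmann constant [eV/K]
Q_ION   = 13.6          # hydrogen ionization energy [eV]
ME_C2   = 510998.95     # electron rest energy [eV]
ETA     = 6.1e-10       # baryon-to-photon ratio
T0      = 2.725         # CMB temperature today [K]
SIGMA_T = 6.6524e-29    # Thomson cross section [m^2]
C       = 2.998e8       # speed of light [m/s]
H0      = 2.20e-18      # Hubble constant, ~68 km/s/Mpc, in [1/s]
OMEGA_M = 0.31          # matter density parameter
K_B_SI  = 1.380649e-23  # Boltzmann constant [J/K]
HBAR    = 1.054572e-34  # reduced Planck constant [J s]
M_P     = 1.6726e-27    # proton mass [kg]
G       = 6.674e-11     # Newton's constant [m^3 kg^-1 s^-2]

# Photon number density today: n_gamma = 0.2436 (kT / hbar c)^3
N_GAM0 = 0.2436 * (K_B_SI * T0 / (HBAR * C)) ** 3

def saha_x(T):
    """Equilibrium fractional ionization, pure hydrogen (Saha equation)."""
    kT = K_B_EV * T
    S = 3.84 * ETA * (kT / ME_C2) ** 1.5 * math.exp(Q_ION / kT)
    return (-1.0 + math.sqrt(1.0 + 4.0 * S)) / (2.0 * S)

def scatter_rate(z):
    """Photon scattering rate Gamma = n_e * sigma_T * c at redshift z."""
    n_e = saha_x(T0 * (1 + z)) * ETA * N_GAM0 * (1 + z) ** 3
    return n_e * SIGMA_T * C

def hubble(z):
    """Matter-dominated Hubble parameter, H ~ (1+z)^(3/2)."""
    return H0 * math.sqrt(OMEGA_M) * (1 + z) ** 1.5

# Last scattering: bisect for Gamma(z) = H(z).
lo, hi = 800.0, 1500.0   # Gamma < H at lo, Gamma > H at hi
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if scatter_rate(mid) > hubble(mid):
        hi = mid
    else:
        lo = mid
z_ls = 0.5 * (lo + hi)

# Baryon density from eta, and omega baryon from the critical density
n_b = ETA * N_GAM0
omega_b = n_b * M_P / (3 * H0 ** 2 / (8 * math.pi * G))

print(z_ls)                       # ~1120 in this equilibrium sketch
print(saha_x(T0 * (1 + z_ls)))    # fractional ionization well below 1 percent
print(1 / n_b)                    # about 4 cubic meters per baryon
print(omega_b)                    # about 0.048
```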
Cosmic microwave background, got to love it. It's full of information about the universe at the time of last scattering, and I haven't even mentioned the polarization of the CMB yet; you'll have to get that in more specialized CMB lectures. So, backward in time: let's go back to the time of Big Bang nucleosynthesis. Remember that the early universe was radiation dominated. This is useful, because as we've seen, in a radiation-dominated universe the scale factor goes as t to the one-half power, and so the temperature drops as t to the minus one-half power. And there's a useful normalization: in round numbers, the temperature is 10 to the 10 Kelvin at a cosmic time of 1 second. In energy units, that corresponds to kT of about 1 MeV at t of 1 second. Well, when you look at the curve of binding energy for different elements, you find that the binding energy per nucleon, on the vertical axis here, is measured in units of MeV. So when you go to times much earlier than 1 second, you're going to particle energies much greater than 1 MeV, and in that early epoch, photons were energetic enough to photodissociate atomic nuclei. So at times much before 1 second, you don't expect atomic nuclei to exist, just as at times earlier than a quarter of a million years, you don't expect neutral hydrogen atoms to exist. So sometime between t of 1 second and t of now, the first atomic nuclei formed. Now, today, when you're studying nuclear fusion, you don't have to worry about fusion with free neutrons, because, well, there are not a lot of free neutrons around today. Free neutrons are unstable: they spontaneously decay by the emission of an electron, to preserve charge neutrality, and an electron antineutrino, to preserve electron lepton number. The lifetime of a free neutron is about 880 seconds, about a quarter of an hour. This decay is possible because the mass of the neutron is greater than that of the proton by about 1.29 MeV, or one part in 730 of the neutron mass.
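The temperature-time relation quoted a moment ago (10 to the 10 Kelvin at 1 second, falling as t to the minus one half) is worth having as a one-liner; in round numbers it reproduces both the 1 MeV scale at one second and, as we'll see shortly, the nucleosynthesis temperature at about three minutes. This is just the lecture's round-number normalization, not a precise calculation:

```python
K_B_EV = 8.617333e-5   # Boltzmann constant [eV/K]

def temp_rad_era(t_sec):
    """Radiation-era temperature [K], using the round-number
    normalization T ~ 1e10 K at t = 1 s, falling as t^(-1/2)."""
    return 1.0e10 * t_sec ** -0.5

kT_MeV_1s = K_B_EV * temp_rad_era(1.0) / 1.0e6
print(kT_MeV_1s)            # ~0.86 MeV: "about 1 MeV" in round numbers
print(temp_rad_era(200.0))  # ~7e8 K at about three minutes, the BBN epoch
```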
At t less than a second, obviously, the neutrons hadn't had time to decay yet. And also at t less than 1 second, remember, kT was greater than an MeV; you had photon energies high enough to create electrons and positrons by pair production. So you've got protons, neutrons, positrons, electrons, photons, neutrinos, and antineutrinos. This means you had some interesting reactions going on. Neutrons and protons were being converted into each other, because at t less than a second, the cross section of the neutrinos for interactions was sufficiently high, and the number density of particles was sufficiently high; that is, the neutrinos were still coupled. So neutron plus electron neutrino goes to proton plus electron, and the reverse; and neutron plus positron goes to proton plus electron antineutrino, and also the reverse. And, well, guess what? These equations were in chemical equilibrium. As you can probably guess, I'm trying to build up the parallels between the epoch of recombination and the epoch of nucleosynthesis. At kT greater than 150 MeV or so, neutrons and protons break up into quark soup, so you wouldn't have had neutrons and protons in the very early universe. And once you get down to a temperature of about 0.8 MeV, the neutrinos decouple. But in this intervening temperature range, the neutrons and protons would have been in kinetic equilibrium. It's cool enough that they are non-relativistic, so they both have a Maxwell-Boltzmann distribution of particle momenta. And if they had exactly the same mass, they'd have exactly the same distribution. But since their masses are different, you take the ratio, and you find that, oh, OK, there's a factor out in front, the ratio of the masses to the 3 halves power, and there's this exponential Boltzmann term that goes as the difference in their masses.
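For reference (again, the slide isn't reproduced in the transcript), the ratio being described is, in standard notation:

```latex
\frac{n_n}{n_p} = \left(\frac{m_n}{m_p}\right)^{3/2} \exp\!\left(-\frac{(m_n - m_p)c^2}{kT}\right)
\approx \exp\!\left(-\frac{Q_n}{kT}\right),
\qquad Q_n \equiv (m_n - m_p)c^2 = 1.29\ \mathrm{MeV}
```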
So the difference in their masses, Q sub n, is 1.29 MeV. And the difference in their masses is small enough relative to the masses themselves that the factor out in front can be taken as equal to 1. So basically, you expect an exponential fall-off in the neutron-to-proton ratio. No surprise there: since neutrons are more massive than protons, once the temperature falls well below the mass difference, you're going to strongly favor the lower-mass particle. As I mentioned, the equilibrium state only remains as long as neutrinos are coupled; they're the ones that convert neutrons and protons into each other. And with a detailed calculation of the neutrino cross section for interaction with baryons, at a temperature of about 0.8 MeV, the neutron-to-proton ratio freezes out at a constant value. So the neutron number doesn't keep falling to zero; instead, it freezes out. Well, the energy difference between neutron and proton is 1.29 MeV, the freeze-out temperature is 0.8 MeV, and so, thanks to that exponential, the neutron-to-proton ratio after freeze-out is about 0.2. So between the time of neutron-to-proton freeze-out, when the universe is about a second old, and the time of neutron decay, when the universe is about a quarter of an hour old, you can take the neutron-to-proton ratio as being roughly constant: one neutron for every five protons. And that has interesting consequences for Big Bang nucleosynthesis. If you are going to build up atomic nuclei out of protons and neutrons, your essential first step is fusing together a neutron and a proton to form a deuteron, the hydrogen-2 or deuterium nucleus. So proton plus neutron goes to deuteron, with the excess energy taken away by a gamma-ray photon; and the reaction can also go in reverse. The binding energy of the deuteron, the mass of the proton plus the mass of the neutron minus the mass of the resulting deuteron, is 2.22 MeV. A little footnote for those who love astronomy.
The sun doesn't have any free neutrons, obviously, so it has to make deuterium the hard way. Proton plus proton goes to a diproton, helium-2. This tends to fall apart again almost immediately, with a decay time of about 10 to the minus 23 seconds, the light-crossing time for the diproton. And you only get to deuterium if the diproton decays to a deuteron with the emission of a positron, taking away the electric charge, and an electron neutrino. The decay time for this, well, it's a beta decay, so its decay time has to be quite long, more than a hundredth of a second. So diprotons are forming in the sun's core all the time, but mostly they just fall apart again into a pair of protons; the alternative decay to a deuteron is extremely rare. And thus the sun has been around for 4.57 giga-years and has only managed to fuse about half of the hydrogen in its core into heavier elements. It's a wickedly inefficient fusion reactor. In the early universe, though, things went like a house on fire. Proton plus neutron goes to deuteron plus gamma; you'll notice no neutrinos are involved. This doesn't involve the weak nuclear force at all, so it goes like a house on fire. Now, nucleosynthesis: proton plus neutron forms a composite object with the emission of a photon. Recombination: proton plus electron forms a composite object with the emission of a photon. Obvious parallels there, so we can make a rough estimate. The main difference is the difference in the energy scales, 2.22 MeV versus 13.6 eV. And if recombination takes place at a temperature that we know, around 3,760 Kelvin, then deuterium synthesis has to take place at a temperature that is higher by the ratio of 2.22 MeV to 13.6 eV, or about 6 times 10 to the 8 Kelvin. Now, this corresponds to a time of about 4 and a half minutes. But as Steven Weinberg reminds us, it's actually the first three minutes; this rough calculation slightly overestimates the time of nucleosynthesis. In Italian, that's "I primi tre minuti," the first three minutes.
So that is the only Italian that I'm going to attempt the entire time I'm in Italy; I should stick to my native tongue. So if you do the real calculation, using the nucleosynthetic equivalent of the Saha equation, you find something a little more accurate. Again, you can find the deuterium-to-neutron ratio, and you can define the time of deuterium synthesis as when this ratio is equal to 1. So, more accurately, it does come out to a slightly higher temperature, 7.6 times 10 to the 8 Kelvin. If you want to convert that to a redshift, you're free to; it's about 300 million. And where the slide says recombination, that should be nucleosynthesis, not recombination; you can see where I did a cut-and-paste inattentively. Nucleosynthesis occurs at about 200 seconds, or about three minutes. And just as the time of recombination is weakly dependent on the baryon-to-photon ratio eta, so too the time of deuterium synthesis is weakly dependent on eta. But deuterium is not the end of the line for Big Bang nucleosynthesis. You do make a little bit of helium-3 and tritium; example reactions are there on the slide. And once you have tritium and helium-3... now, tritium is unstable, but its decay time is about 18 years, so during the time of nucleosynthesis you can treat it as effectively stable. The next steps are more fusions: you go on to helium-4. And helium-4 is almost, but not quite, the end of the line for Big Bang nucleosynthesis. There are no stable nuclei with mass number 5, so you can't simply fuse a proton or a neutron onto helium-4 and expect it to stick. If you try to fuse two helium nuclei together to get beryllium-8, again, it doesn't work; beryllium-8 is extremely unstable. You do make very small amounts of lithium-6, by fusing helium-4 with a deuteron. You make small amounts of lithium-7, by fusing helium-4 with tritium. And you make small amounts of beryllium-7, by fusing helium-4 with helium-3.
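As a quick cross-check, the round numbers from the last few passages (freeze-out at kT of about 0.8 MeV, neutron lifetime 880 seconds, deuteron binding energy 2.22 MeV, recombination at 3,760 Kelvin) reproduce the headline figures in a few lines. This is my back-of-the-envelope sketch under those stated assumptions, not the real nucleosynthesis network:

```python
import math

# Round numbers quoted in the lecture (assumptions for this sketch)
Q_N       = 1.29     # neutron-proton mass difference [MeV]
KT_FREEZE = 0.8      # freeze-out (neutrino decoupling) temperature [MeV]
TAU_N     = 880.0    # free-neutron mean lifetime [s]
B_D       = 2.22e6   # deuteron binding energy [eV]
Q_H       = 13.6     # hydrogen ionization energy [eV]
T_REC     = 3760.0   # recombination temperature [K]

# Neutron-to-proton ratio at freeze-out: the Boltzmann exponential
ratio_freeze = math.exp(-Q_N / KT_FREEZE)   # ~0.2: one neutron per five protons

# Crude deuterium-synthesis temperature: scale T_rec by the binding-energy ratio
T_deut = T_REC * B_D / Q_H                  # ~6e8 K

# Radiation-era clock, T ~ 1e10 K * t^(-1/2), inverted to get the time
t_deut = (1.0e10 / T_deut) ** 2             # ~270 s, about 4.5 minutes

def np_ratio(t_sec):
    """n/p ratio at time t, letting frozen-out neutrons decay into protons."""
    decayed = ratio_freeze * (1.0 - math.exp(-t_sec / TAU_N))
    return (ratio_freeze - decayed) / (1.0 + decayed)

print(ratio_freeze)           # ~0.2
print(T_deut, t_deut / 60)    # ~6e8 K, ~4.5 minutes
print(np_ratio(t_deut))       # n/p has only dropped to ~0.14 by then
```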
So here's the network of reactions that can go on. The beryllium-7, again, like tritium, is unstable; it later decays to lithium-7 by electron capture. But during the time of nucleosynthesis, you can treat it as effectively stable. You can go on and get traces of boron and carbon, but these are extremely, extremely tiny traces. Big Bang nucleosynthesis is a race against time: the temperature is steadily dropping, and by the time the universe is about 15 minutes old, that's about it. Big Bang nucleosynthesis is very efficient up to helium-4, and after that you get basically just little traces of lithium left over. Now, knowing the cross sections for all the relevant interactions in Big Bang nucleosynthesis, you can run your Big Bang nucleosynthesis code, and you find that the yield of different elements depends on the baryon-to-photon ratio. So this gives you another way of estimating eta, the baryon-to-photon ratio, in the top panel. As you get fewer and fewer photons per baryon, or more and more baryons per photon, Big Bang nucleosynthesis starts earlier and is more efficient, and it runs to helium more and more effectively. So a higher baryon-to-photon ratio means more helium, and the fact that fusion is more effective means less deuterium and less helium-3 left over. Lithium-7 has this interesting dip, because in this baryon-to-photon ratio range, direct production of lithium-7 decreases with eta, while the indirect creation through beryllium-7, which later decays, is an increasing function of eta. So looking at the yield of Big Bang nucleosynthesis as a probe of eta usually focuses on deuterium. As you see, it has a relatively strong dependence on eta compared to the yield of helium, and it doesn't have a local minimum the way the yield of lithium-7 does. So you want to find the primordial deuterium-to-hydrogen ratio, because deuterium is very easily destroyed in stars.
Once a star forms from the interstellar medium, its first fusion reaction destroys that deuterium and builds up heavier elements. So we want primordial gas, gas that hasn't been cycled through stars. And so astronomers look for deuterium abundances in, jargon alert, metal-poor damped Lyman alpha systems. These are metal-poor: astronomers use "metal" for everything other than hydrogen and helium. The chemists laugh at us, but that's what we do. Metal-poor means, well, metals, heavy elements, are made in stars, so this is gas that has not been cycled through stars. It's a damped Lyman alpha system because it's a cloud of primordial stuff, hydrogen and helium with a little lithium, between us and a distant background source, a quasar. And so when you look at the Lyman series, you see absorption lines going from Lyman alpha to Lyman beta, gamma, delta, and so forth. Lyman alpha is, in this range of frequency, completely black; that's the "damped" Lyman alpha part. But when you go to the higher-order Lyman lines, you find that there's the main line, but off to the left here, slightly blueshifted, there's this little supplementary line. That's because there's a small isotopic shift in Lyman alpha for deuterium relative to that of ordinary hydrogen. Lyman alpha is usually quoted as 121.567 nanometers; Lyman alpha for deuterium is blueshifted by one part in 3,700. That's equivalent to the shift you would get from a Doppler shift of minus 80 kilometers per second, which is why the horizontal axis is labeled in units of kilometers per second. You can model these lines using different deuterium-to-hydrogen ratios, and Pettini and his collaborators found that the best fit is a deuterium-to-hydrogen ratio of about 2.5 times 10 to the minus 5; that's the number density of deuterium relative to that of hydrogen. And I have a question here: I've forgotten the redshift. Nope, it's not written here. It's a relatively high-redshift Lyman alpha cloud with a still higher-redshift quasar behind it.
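The isotopic shift just quoted is the reduced-mass correction to the hydrogen energy levels; here is a quick check with standard particle masses (my illustration, not the Pettini et al. analysis):

```python
M_E   = 9.10938e-31   # electron mass [kg]
M_P   = 1.67262e-27   # proton mass [kg]
M_D   = 3.34359e-27   # deuteron mass [kg]
C_KMS = 2.998e5       # speed of light [km/s]

def reduced(m_nucleus):
    """Reduced mass of an electron orbiting a nucleus of mass m_nucleus."""
    return M_E * m_nucleus / (M_E + m_nucleus)

# Line energies scale with the reduced mass, so wavelengths scale inversely:
# lambda_D / lambda_H = mu_H / mu_D, a blueshift since mu_D > mu_H.
frac_shift = reduced(M_P) / reduced(M_D) - 1.0

print(1.0 / frac_shift)     # ~ -3700: blueward by one part in 3,700
print(C_KMS * frac_shift)   # ~ -82 km/s equivalent Doppler shift
```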
You can look up the paper, Pettini et al.; unfortunately, I don't remember the redshift. But they did get this quite accurate measure of the deuterium-to-hydrogen ratio for this system, and there are other similar damped Lyman alpha systems on the lines of sight to other quasars. And, going back to the previous slide, for a given deuterium-to-hydrogen ratio, as long as you're certain it's primordial, you can read off the relevant value of eta. In this case, it's 6.0 plus or minus 0.1, times 10 to the minus 10, again in good agreement with the value of eta found from the cosmic microwave background. Now, I'm under the impression that I'm supposed to stop talking now. And since it is time for lunch and my voice is giving out, I did have a few comments on inflation, but I'll save them for the beginning of the next lecture tomorrow. So thank you.