Welcome back. We have the fourth lecture by Barbara Ryden about Introduction to Cosmology. Thank you. Well, since I've reached the fourth and last of my lectures, it's time to confess that the universe is not really homogeneous and isotropic. If you look at the results of redshift surveys of galaxies, like the 2dF Galaxy Redshift Survey seen here, you realize that once you have about 100,000 galaxies, rather than the trifling 41 galaxies that Arthur Eddington could list, then you have a nice picture of what's commonly called the cosmic web. Galaxies are not distributed uniformly through space. Instead, they tend to lie, as you see here, along long filamentary superclusters. And there are also more nearly spherical cosmic voids that are almost devoid of galaxies. Now, in defense of me and of Robertson and of Walker and of Friedmann, once you smooth this over large enough scales, scales of 100 megaparsecs or more, then things do smooth out to be almost homogeneous. However, as you go to larger and larger scales, you still do have structures, only at lower and lower amplitude. So, where does this structure come from, what's its origin, and how does it evolve with time? That's what I'm going to be discussing today. So, the first three lectures were sort of a triumphal progress of cosmology. Our universe can be well described by a hot big bang model, hooray. Its expansion as a function of time can be described in terms of matter and a cosmological constant, and radiation during its early eras, hooray. And by making observations of standard candles and standard yardsticks, we can get reasonably good measurements for omega matter and omega lambda, hooray. Well, the story goes that during every Roman triumph, the triumphant general, riding in his chariot, was followed by a slave who whispered in his ear, remember, you are a mortal man. So, we all have flaws, we all have unsatisfactory aspects, as I put it euphemistically here. 
And by the year 1980, it was realized that there are some really puzzling aspects of the standard hot big bang model. And this was what prompted Alan Guth to put forward the idea of cosmic inflation. I just want to highlight two of the problems that the standard hot big bang model faces and discuss how inflation can fix those problems. So, the two problems: the flatness problem, which is simply the statement that the universe today is pretty close to Euclidean, or flat. Moreover, if you extrapolate back into the past, it was even flatter. That is, the total omega density parameter was very, very close to one early in the history of the universe. There's also the horizon problem, the fact that widely separated points in space, in the standard hot big bang model, should be outside each other's particle horizon, and thus are not in causal contact with each other. Nevertheless, on those large scales, the universe is nearly homogeneous, despite the fact that these two locations shouldn't have had a chance to talk to each other and come into equilibrium. So, you might ask, why are these a problem? We like it that the universe is nearly flat. Euclidean geometry is easier than non-Euclidean geometry. We like it that the universe is homogeneous and isotropic on large scales. That means that we can use the Friedmann-Robertson-Walker metric and the Friedmann equations instead of having to use the full Einstein field equations. So, the flatness of the universe, the homogeneity of the universe, these make the math simpler, but the universe is under no obligation to make things simple for us. There must be a reason why the universe is so very nearly flat and so very nearly homogeneous on large scales. So, first let's look at the flatness problem. We saw, from combining the results from the supernovae with the angular size scale of the cosmic microwave background temperature fluctuations, that the universe is close to flat. 
The gray ellipse over here, on this relatively small portion of the omega-matter, omega-lambda plane, is the 95% confidence level when you combine the supernova results with the CMB results. There are also other sources of information, like baryon acoustic oscillations, which I won't have time to discuss, but with these additional sources of information, you can narrow down the 95% confidence interval to that little teeny-tiny black ellipse. So, you are here, metaphorically speaking. And that ellipse is very close to that dashed line, which represents kappa equals zero, or omega total equals one. So, today, combining all the information, we know that the deviation of omega from one is less than 0.005. Pretty close to flat. However, the Friedmann equation, got to love it, tells us that the deviation of omega from one varies with time. In fact, the magnitude of one minus omega is the square of the ratio of two length scales: the Hubble distance, c over H, divided by the radius of curvature of the universe. During the radiation and matter dominated eras, the Hubble distance on the top of that ratio grows linearly with time. However, the scale factor down below goes as t to the one-half power during the radiation dominated era and t to the two-thirds power during the matter dominated era. So, during the radiation and matter dominated eras, this term on the right hand side of the Friedmann equation grows with time, and any small deviation of omega from one will increase with time. So, if we have values for omega matter and omega lambda and omega radiation, we can extrapolate backward in time. If we go back to the time of deuterium synthesis, when the universe was three minutes old, the deviation of omega from one back then was one part in one quadrillion, 10 to the minus 15, a very, very close approximation to one indeed. 
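To make that back-of-the-envelope extrapolation concrete, here is a short Python sketch. It assumes only the simple scalings just described, the deviation of omega from one growing proportionally to t in the radiation era and to t to the two-thirds in the matter era, with round-number epoch times, so it is good to an order of magnitude, not a precision calculation:

```python
# Toy extrapolation of |1 - Omega| back in time, using the lecture's scalings:
# |1 - Omega| ~ t^(2/3) in the matter era, ~ t in the radiation era.
# Epoch times are round numbers, so expect only order-of-magnitude accuracy.

T0 = 4.4e17     # age of the universe today, seconds (~13.8 Gyr)
T_EQ = 1.6e12   # radiation-matter equality, seconds (~50,000 yr)
T_NS = 180.0    # epoch of deuterium synthesis, seconds (~3 minutes)

def omega_deviation(t, dev_today=0.005):
    """Extrapolate today's bound |1 - Omega_0| <= 0.005 back to time t."""
    if t >= T_EQ:                         # matter-dominated era
        return dev_today * (t / T0) ** (2.0 / 3.0)
    # radiation-dominated era: scale back to equality first, then linearly in t
    dev_eq = dev_today * (T_EQ / T0) ** (2.0 / 3.0)
    return dev_eq * (t / T_EQ)

print(omega_deviation(T_NS))
```

This prints a number of order 10 to the minus 16, within an order of magnitude of the lecture's "one part in a quadrillion," which is all a round-number estimate like this can promise.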
If we go back in time as far as we dare, back to the Planck time, we find that omega was ridiculously close to one if we just do the extrapolation. It differed from one by two parts in 10 to the 62. If you wanted to change the sun's mass by two parts in 10 to the 62, you would have to add or subtract a twentieth of an electron's mass. So, that's really insanely close to being perfectly flat, and why? You could throw up your hands and say, initial conditions, the universe just started out that way, but that's a little unsatisfactory. It's more satisfying to have some sort of physical mechanism for flattening out the universe early in its history. So, what about the horizon problem? Well, imagine you are there: you look outward in space using your microwave detectors, and you're looking at a last scattering surface surrounding you. Now, in the standard hot big bang model, without any inflation, you can compute the particle horizon distance at the time of last scattering. It's just given by the standard formula for the particle horizon distance. The universe started out radiation dominated for its first 50,000 years, then switched over to being matter dominated. If you plug in the proper values for omega matter and omega radiation, you find that the horizon distance at the time of last scattering was proportional to c times the time back then, 370,000 years; the proportionality constant out in front works out to be 2.24, taking into account the switch from radiation domination to matter domination. So, work it all out in astronomers' favorite unit, the megaparsec: the particle horizon distance at the time of last scattering was a quarter of a megaparsec. So, points more than a quarter of a megaparsec apart at the time of last scattering should not have been in causal contact. There wasn't time to send a message from one point to another if they were more than a quarter of a megaparsec apart. 
What implications does this have when we look at the last scattering surface? Well, remember, I'm recycling from a previous lecture here, remember the angular diameter distance to the last scattering surface? We worked it out to be, in round numbers, about 10 megaparsecs. So, using the definition of angular diameter distance, we can ask: if you have a patch of space a quarter of a megaparsec across, and it's at an angular diameter distance of 10 megaparsecs, what angle will it subtend as seen by you, the observer? It's just a quarter of a megaparsec divided by 10. I'm using these nice round numbers because it makes the math easier to check. So, 0.025 radians, or, converted to degrees, a little under one and a half degrees. By this analysis, in the standard hot big bang model, that is, without inflation, if you look at the cosmic microwave background, points more than about 1.4 degrees apart from each other were outside each other's particle horizon at the time of the last scattering of the photons. They should have absolutely no knowledge of each other's existence. They can't have sent information to each other. And in particular, they can't have come to thermal equilibrium with each other. There's no reason for them to have the same temperature. But we have eyes, and, more to the point, we have microwave antennas and microwave receivers. We can see that points on the last scattering surface more than 1.4 degrees apart do have the same temperature to within one part in 100,000. How do they know what temperature they're going to have? I mean, how can they have communicated with each other, come to thermal equilibrium, evened out any temperature fluctuations they may have had, unless they were in causal contact with each other? So, again, I suppose you could invoke initial conditions: the universe started out almost homogeneous, with small fluctuations of amplitude 10 to the minus 5. 
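The round-number arithmetic here is easy to check directly; a one-line sanity check in Python, using exactly the lecture's round values:

```python
import math

# Round-number estimate from the lecture: particle horizon at last scattering
# (no inflation) divided by the angular diameter distance to that surface.
d_hor = 0.25   # Mpc, horizon size at last scattering
d_A = 10.0     # Mpc, angular diameter distance to the last scattering surface

theta = d_hor / d_A            # small-angle formula, in radians
print(math.degrees(theta))     # a little under one and a half degrees
```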
But, again, was there possibly, sometime early in the history of the universe, some sort of mechanism that made the universe more nearly Euclidean, flattened it out, drove omega towards 1? And it would be nice if you could kill two birds with one stone, metaphorically, and also have that same mechanism smooth out the universe, make it more homogeneous. So, the concept of inflation. During the very early universe, there was a temporary era when the expansion was speeding up. We know that the expansion is speeding up today, but the idea of inflation is that back in the early universe, there was another temporary, short-lived era when you also had accelerated expansion. So, to see how inflation solves the flatness and horizon problems, I'm going to use a very simple toy model. I'm just going to posit a period of exponential expansion. So, we're going to say there was a temporary cosmological constant, lambda sub i, let's call it. Exponential expansion began at some time t sub i, early in the history of the universe. You might ask how early. Well, many implementations of inflation say that it began at or around the GUT time, when the mean particle energy of the universe dropped below the GUT, or grand unified theory, energy scale. This occurred when the universe was about 10 to the minus 36 seconds old. So, this is a very popular choice for when inflation began, but it's not the only choice. The Hubble constant describing the exponential expansion, again, that's the square root of the cosmological constant divided by 3, would have been about equal to 1 over the time at which inflation began. And remember, inflation says this exponential expansion started; it also says it stopped at some time. Let's call it t sub f, f for final. When the exponential expansion stops, the energy density associated with the cosmological constant, lambda sub i, is, through the process of reheating, transferred to highly relativistic particles. 
For instance, perhaps it was converted to photons, and these high energy photons would have produced all other types of relativistic particles by pair production. Now, I emphasize that this is in fact a toy model. Don't take it seriously. It's highly physics free. There will be a series of inflation lectures next week that will discuss some of the possible physical mechanisms for inflation that have been proposed. At least, I'm assuming that's what the inflationary lectures will cover. If they don't, then you should demand your money back, because it's a very interesting problem. But one too complicated for me to cover in just one lecture. So, okay, if you have, by some mechanism or other, this period when the universe expands exponentially, then how does that solve the flatness problem? Well, we go back to the Friedmann equation. We always end up going back to the Friedmann equation. Remember, it says that the deviation of omega from one goes as the square of the ratio of the Hubble distance to the radius of curvature of the universe. During exponential inflation, the Hubble distance is constant. It's just the speed of light divided by the really, honest-to-God constant Hubble constant during that era. Meanwhile, the scale factor during exponential inflation grows exponentially. So, the deviations of omega from one during exponential inflation decrease exponentially. So, with a relatively small number of e-foldings of inflation, you can flatten out the universe just like the proverbial pancake. So, how many e-foldings of inflation do you need at a minimum? Well, let's suppose, for the sake of argument, that right before inflation started, the universe was pretty strongly curved, that the deviation of omega from one was comparable to one. It might have been omega very close to zero; that gives you a low density, negatively curved universe. It might have been omega somewhere around two; that gives you a high density, positively curved universe. 
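The statement "the Hubble distance is constant while the scale factor grows exponentially" implies that the deviation of omega from one shrinks by e to the minus 2N after N e-foldings. A tiny Python sketch of that suppression factor:

```python
import math

def flatness_suppression(n_efolds):
    """Factor by which |1 - Omega| shrinks after n_efolds of exponential
    inflation: (c/H) is constant while the scale factor grows as e^N,
    and |1 - Omega| goes as the inverse square of the scale factor,
    so the suppression is e^(-2N)."""
    return math.exp(-2.0 * n_efolds)

for n in (10, 30, 60):
    print(n, flatness_suppression(n))
```

For 60 e-foldings the suppression is e to the minus 120, around 10 to the minus 52, which is how inflation can start with a strongly curved universe and end with an absurdly flat one.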
But if one minus omega is about equal to one in magnitude, in that case, if inflation started at around the GUT time, 10 to the minus 36 seconds, you can compute that to produce today's level of flatness, the deviation of omega from one being 0.005 or less, you need at a minimum 60 e-foldings of inflation. So, during this very brief epoch, from around 10 to the minus 36 seconds to 10 to the minus 35 seconds, you had growth by a factor of e to the 60, around 10 to the 26. So, very, very rapid exponential growth can solve your flatness problem. Exponential inflation can also solve the horizon problem, because during exponential inflation the particle horizon size grows exponentially. Let's think about why this should be true. Before inflation started, the universe was radiation dominated, full of highly energetic, highly relativistic particles. And in a radiation dominated universe, as long as the curvature isn't too humongous, the particle horizon distance is approximately what you would get in a perfectly flat radiation dominated universe, or about two times c times the age of the universe. If you have significant curvature, it might deviate from this exact value, but it's always going to be proportional to c times t sub i, the time at the beginning of inflation. So, if inflation began around 10 to the minus 36 seconds, this distance is 6 times 10 to the minus 28 meters. A very small distance indeed, but remember, if you have, right before inflation started, two particles or two points in space that are 6 times 10 to the minus 28 meters apart from each other, they have had time to send information to each other. They have had time, for instance, to come into thermal equilibrium with each other. So, these two points, 6 times 10 to the minus 28 meters apart, are in causal contact. And during the subsequent exponential expansion, they still remember, they still have the information that they've swapped with each other. 
So, these two points that were in causal contact before inflation are still in causal contact during and after inflation. So, as these two points are exponentially carried away from each other, the particle horizon exponentially expands as well. And for the minimum number of e-foldings of inflation needed to flatten the universe, n equals 60, this means that points that were 6 times 10 to the minus 28 meters apart from each other before inflation are, for n equals 60, 7 centimeters apart from each other afterward, which is pretty impressive. You start out with points that are separated by this infinitesimal, submicroscopic distance, and they end up being separated by a distance comparable to the size of this plush toy, a few inches apart. Now, on cosmic scales, something the size of a tiny plush toy doesn't sound very impressive, but remember, when inflation ended, it was still early in the history of the universe, roughly t equals 10 to the minus 34 seconds. And so, the horizon size, the distance between those two points that had time to talk to each other, to send information, before inflation began, undergoes still greater expansion, by the ratio of the scale factor at the time of last scattering to the scale factor at the time inflation ended, another large factor of growth in distance. And if you work it out for the parameters of the benchmark model, with inflation at 10 to the minus 36 seconds of the sort I've been assuming, that little plush-toy-sized patch of space expands to about a megaparsec across, again, if you have 60 e-foldings of inflation. Remember, without inflation, I computed the horizon size at the time of last scattering as a quarter of a megaparsec. 
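The chain of stretch factors above can be followed numerically. This is a sketch with the lecture's round numbers (inflation from 10 to the minus 36 to 10 to the minus 34 seconds, 60 e-foldings, equality at about 50,000 years, last scattering at 370,000 years) and the simple scalings a goes as t to the one-half and t to the two-thirds, so the result is only good to tens of percent:

```python
import math

C = 3.0e8       # speed of light, m/s
MPC = 3.086e22  # meters per megaparsec
T_I = 1e-36     # start of inflation, s (GUT-time assumption)
T_F = 1e-34     # end of inflation, s
T_EQ = 1.6e12   # radiation-matter equality, s (~50,000 yr)
T_LS = 1.17e13  # last scattering, s (~370,000 yr)
N = 60          # e-foldings of inflation

d_pre = 2.0 * C * T_I            # particle horizon just before inflation (~6e-28 m)
d_end = d_pre * math.exp(N)      # stretched by e^60 during inflation
print(d_end)                     # a few centimeters: the "plush toy" scale

# Subsequent growth: a ~ t^(1/2) in the radiation era, t^(2/3) in the matter era.
growth = math.sqrt(T_EQ / T_F) * (T_LS / T_EQ) ** (2.0 / 3.0)
print(d_end * growth / MPC)      # roughly a megaparsec at last scattering
```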
So, by introducing 60 e-foldings of inflation at the GUT time, I've expanded the horizon at the time of last scattering by a factor of 4, which means I've expanded the region in causal contact from about 1.4 degrees across to about 5.6 degrees across. Which means that, in order to have even antipodal points on the last scattering surface in causal contact, I'm going to have to have a little more than 60 e-foldings of inflation. However, thanks to the power of exponential growth, you put in a few more e-foldings, three or four, and that's enough to make the horizon size at last scattering large enough that every point on the last scattering surface, even the ones 180 degrees apart, will have been in causal contact prior to inflation beginning, and thus had time before inflation began to come into temperature equilibrium. So, inflation solves the flatness problem and solves the horizon problem, which is one reason why inflation is really, really popular. All of these puzzling questions about why the universe is so flat, why the universe is so homogeneous, can be answered by a minimum of 63 or 64 e-foldings of inflation. I should note that that's just the minimum; the observations we have today are consistent with the universe being even flatter than that, with the difference of omega from 1 today being much, much smaller than 0.005. So, on the one hand, by ensuring that widely separated points were in causal contact with each other, by increasing the particle horizon size at the time of last scattering, inflation prevents high amplitude temperature fluctuations. You don't have the temperature on one side differing from the temperature on the other side by orders of magnitude, because they had time to come to equilibrium before inflation started. So, this is a good thing: inflation prevents high amplitude temperature fluctuations. But wait, there's more. 
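The "three or four more e-foldings" figure follows from the round numbers above: each extra e-folding multiplies the causally connected patch, and hence the angle it subtends, by another factor of e, and you need to go from about 5.6 degrees up to 180 degrees. A quick check:

```python
import math

# With N = 60 e-foldings, the causally connected patch subtends ~5.6 degrees
# on the last scattering surface; antipodal points are 180 degrees apart.
# Each extra e-folding multiplies the subtended angle by e.
extra = math.log(180.0 / 5.6)
print(extra)   # between three and four extra e-foldings
```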
Inflation, by inflating quantum perturbations up to macroscopic sizes, also causes the low amplitude temperature fluctuations that we see today, those temperature fluctuations of order 10 to the minus 5. This sort of sounds too good to be true. It's like inflation is being peddled to you by a rather unethical salesman. You want delta t over t to be less than one? Sure, I can do that. You want delta t over t to be exactly 10 to the minus 5? Sure, I can do that. Well, in fact, because inflation is so good at inflating, because it's so good at taking small scale things to large scales, it's going to enhance, or amplify in length scale, quantum perturbations which ordinarily would be too tiny to be of interest as far as structure formation is concerned. Brief sketch: if you have a scalar field, called generically the inflaton field, that drives inflation, this field will have quantum perturbations, like all fields. The quantum perturbations in the inflaton field will result in different regions of space having slightly different numbers of e-foldings of inflation. This means that different regions of space, with different values of the inflaton field, will end inflation and enter the reheating phase at slightly different times. One patch will end inflation early; it will reheat to become radiation dominated and thus start cooling, while another patch is still in its inflationary phase and isn't cooling down yet. So quantum fluctuations will result in slightly different lengths of time for inflation in different regions, and this will result in slightly different temperatures, and thus slightly different energy densities, in different locations once the whole inflation and reheating episode is over. So again, that was the very brief, hand-waving sketch of how quantum fluctuations can give rise to density fluctuations in the universe. 
So, the energy density fluctuations present after reheating are shared among all the highly energetic, highly relativistic particles that were there back then. The dark matter particles, if they're WIMPs, weakly interacting massive particles, will decouple from the other particles in the universe at about the same time that neutrinos, also weakly interacting particles, decouple, when the universe is about one second old. When the WIMPs decouple, they carry the density fluctuations that were stamped upon the universe at the end of inflation, and since the density fluctuations will initially be very low in amplitude, it's useful to expand the mass density rho of the dark matter in terms of rho bar. That's the spatially averaged density at time t, averaged over a large volume, sufficiently large that you can approximate the universe as homogeneous on those scales. And then, at any given point, the density won't usually be exactly equal to rho bar; it'll be the mean density times one plus delta, the dimensionless density fluctuation. And saying that the amplitude of density fluctuations is initially low is just saying that delta has an amplitude very much smaller than one. There are certain predictions you can make about the properties of those density fluctuations delta if they arose during an inflationary phase. First of all, you expect that the density field delta will be a Gaussian random field. Now, if you haven't played around a lot with Gaussian random fields, you might expect, oh, that means that the probability distribution of delta, if you just pick random positions in space at a given time, will have a Gaussian distribution with some standard deviation sigma. Notice we've defined delta so that its mean is zero. And this is true: if you do pick random points in a Gaussian random field, you will find that the values of delta that you pick out will have a Gaussian distribution. 
However, that's not the only property that Gaussian random fields have. To find out the more interesting properties of Gaussian random fields, I'm going to have to go into Fourier space. I know you've been dying to do Fourier transforms, haven't you? It's all part of being a physicist. Okay, let us take a big volume of the universe, so it's our co-moving volume again. It's expanding along with the homogeneous and isotropic scale factor, a of t. It's a box big enough that it contains all the wavelengths of interest for our Fourier decomposition, and so we just find what the Fourier components are. It's the usual Fourier transform, so instead of delta as a function of co-moving distance r, now we have delta as a function of the co-moving wave number k. Remember, r is a co-moving distance; k, the co-moving wave number, is just 2 pi over a co-moving wavelength. Since the Fourier components are all complex numbers, you can write them down as an amplitude times e to the i times a phase, phi sub k, in general different for each wave number k. And so here's the additional bit of information: if delta is a Gaussian random field, then part of the definition of a Gaussian random field is that its phases, phi sub k for the different wave numbers, are uncorrelated with each other. In a Gaussian random field, if you know the phase of one particular Fourier component, you know nothing at all about the phases of the other Fourier components. This means that, in a statistical sense, the phases phi sub k don't contain any interesting information. You can do different realizations of the same power spectrum, p of k, and yes, they'll look different, but they'll have the same statistical properties, the same correlation function, and the same global properties of large scale structure. 
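The recipe just described, independent random phases attached to amplitudes drawn from a power spectrum, is exactly how one builds a realization of a Gaussian random field numerically. Here is a minimal one-dimensional toy sketch (numpy assumed; real cosmological realizations are three-dimensional and normalize the power spectrum physically, which this does not attempt):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_random_field(n=256, index=1.0):
    """1-D toy Gaussian random field with power-law spectrum P(k) ~ k^index.
    Each Fourier mode gets a randomly drawn amplitude and an independent,
    uniformly distributed random phase -- the defining property above."""
    k = np.fft.rfftfreq(n)
    k[0] = 1.0                                # placeholder; the k=0 mode is zeroed below
    amp = np.sqrt(k ** index) * rng.rayleigh(size=k.size)
    phase = rng.uniform(0, 2 * np.pi, size=k.size)
    modes = amp * np.exp(1j * phase)
    modes[0] = 0.0                            # delta is defined to have zero mean
    return np.fft.irfft(modes, n)

delta = gaussian_random_field()
print(delta.mean())   # ~0: a zero-mean density fluctuation field
```

Rerunning with a different seed gives a different-looking realization with the same statistical properties, which is the point made above about the phases carrying no interesting information.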
So we expect the density field delta that comes out of inflation to be a Gaussian random field, and therefore everything we need to know about it, statistically speaking, is given by its power spectrum, p of k. And here I'm implicitly assuming that the density field is isotropic, so this average here is an average over all orientations of wave vectors of a given length. So, if we want to know how those initially tiny perturbations to the dark matter density grow with time, we're going to want to know what the power spectrum is for the density fluctuations. Well, there's another prediction that comes out of inflationary theory: the perturbations that arise from exponential inflation should have a power spectrum that's a power law. The power on a scale 2 pi over k should go as k to some power n. If you have many e-foldings of inflation, then you have a wide range of length scales where you don't expect any distinctive feature in the power spectrum, so a power law seems natural to arise out of exponential inflation. And in particular, the prediction from inflationary theory is generally that the index n should be close to 1. If n equals 1 exactly, this is what's called a Harrison-Zeldovich spectrum, because Harrison and Zeldovich, around the year 1970, a decade before the inflationary idea was put forward by Alan Guth, proposed that n equals 1 was the most natural power spectrum to have. So, okay, let's go with it. Suppose that the power spectrum goes more or less linearly with the wave number k. What does this imply for, oh, I don't know, observable consequences? You can tell I'm an astronomer, not just because of this shirt, but because I'm obsessed with observing things. Well, suppose you take a co-moving volume of the universe such that the expected mass within that volume is equal to m. If you have density fluctuations, the amount of mass inside that volume won't be exactly m. 
It'll have fluctuations such that the RMS density fluctuation, delta m over m, goes as a power of m. It can be shown that it goes as m to the minus (3 plus n) over 6. So, notice that n equals 0, that's what you get with a Poisson distribution: for n equals 0, you get delta m over m going as 1 over the square root of m, and that makes sense. And for n equals 1, you have the density fluctuations falling off as you go to bigger and bigger co-moving volumes, larger and larger average masses; it falls off as m to the minus two-thirds. So, you have larger density fluctuations on smaller mass scales, and it's a fairly steep dependence, m to the minus two-thirds. Also, the potential fluctuations on small scales, small enough that we can use Newtonian theory and talk about potentials, those go as the fluctuations in mass divided by the radius, and the radius goes as m to the one-third power. And so, potential fluctuations go as m to the (1 minus n) over 6. So, this is the reason why Harrison and Zeldovich recognized n equals 1 as special: it's the power law index for which you don't have potential fluctuations that diverge on large scales or small scales. You just have potential fluctuations that are constant with scale. So, this is why the Harrison-Zeldovich spectrum is also known as a scale invariant spectrum. You have equal potential fluctuations on all scales. Potential fluctuations that are equal on all scales: where have we seen that before? In fact, we've seen that in the correlation function for the temperature fluctuations of the cosmic microwave background. When you go to small multipole moment l, corresponding to large physical scales, the temperature fluctuation spectrum sort of levels out, doesn't it? There are error bars due to cosmic variance, but it's consistent with temperature fluctuations that arise from potential fluctuations in the dark matter that are invariant with scale on these quite large scales. 
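The two exponents quoted above are simple functions of the spectral index n, and the special cases are easy to verify. A small sketch, just encoding the scaling relations from the lecture:

```python
def mass_fluctuation_exponent(n):
    """delta M / M ~ M^(-(3+n)/6)."""
    return -(3.0 + n) / 6.0

def potential_fluctuation_exponent(n):
    """delta Phi ~ (delta M) / R, with R ~ M^(1/3), giving M^((1-n)/6)."""
    return (1.0 - n) / 6.0

print(mass_fluctuation_exponent(0))       # -1/2: the Poisson case, 1/sqrt(M)
print(mass_fluctuation_exponent(1))       # -2/3: Harrison-Zeldovich mass fluctuations
print(potential_fluctuation_exponent(1))  # 0: scale-invariant potential fluctuations
```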
And in fact, if you do the detailed analysis of the temperature fluctuations, the best value from Planck for n on large scales is in fact close to 1, but measurably different from 1. It's about 0.97 plus or minus 0.01. So, score one for inflation: the power spectrum that you see on large scales is consistent with the predictions of inflation. However, the initial spectrum, p going as k to the 0.97, is modified between when the dark matter particles decouple and the time of radiation-matter equality. In particular, on small physical scales, you have a suppression of power during the era of radiation domination. Let me explain why. The physical size of a perturbation, you know, its proper wavelength, let's say, since we're using Fourier transforms: its wavelength is 2 pi over k, but that's the co-moving wavelength. To convert to physical units, you have to multiply that by the scale factor. Okay. When the physical size of the perturbation is bigger than the Hubble distance, c over the Hubble parameter, which is going to be about 2 times ct in a radiation dominated era, the amplitude grows with time. Now, when I first heard this sentence stated to me, when I was young and naive, my first question was, well, why? Why does it matter whether the perturbation is bigger or smaller than the Hubble distance? Well, in fact, if you look at one Fourier component, a simple sine wave, Newtonian physics doesn't worry about how fast gravity propagates. It just makes the underlying assumption that it propagates instantaneously. However, we know gravitational influences travel at the speed of light. And so, if you have this sine wave perturbation that was stamped upon the universe after inflation, then if you look at a wave crest, it's going to be out of contact, gravitationally speaking, with a trough if the distance from crest to trough is large compared to the Hubble distance. 
So, if this is a very long wavelength Fourier component, such that the wavelength is very large compared to the Hubble distance, then a patch of the universe close to one of the crests hasn't received any gravitational information from the trough yet, and it's going to behave like a small patch of an omega greater than one universe, a homogeneous omega greater than one universe. Similarly, the trough hasn't received gravitational information from the crest, and so it's going to behave like a small patch of a homogeneous omega less than one universe. And as we know, deviations of omega from one tend to increase with time. So, if you have these super-Hubble perturbations, with a wavelength greater than the Hubble distance, the crests will just increase in density and the troughs will decrease in density. They behave as if they are, separately, a homogeneous omega greater than one universe and a homogeneous omega less than one universe. Now, the physical wavelength grows as the scale factor, and during the radiation dominated era that follows inflation, the scale factor grows as t to the one-half power. Again, in a radiation dominated era, the Hubble distance grows linearly with time. So, the Hubble distance increases more rapidly than the physical wavelength. If you start with a wavelength that's bigger than the Hubble distance, eventually the physical size lambda is going to be overtaken by the Hubble distance. This is sometimes referred to as coming inside the horizon, but I prefer to think of it as the Hubble distance going outside of lambda. And so, once you are inside the Hubble distance, once lambda is less than c over H, the amplitude of the perturbation freezes out. We'll see a little bit later on why mass density perturbations don't grow with time when radiation dominates. 
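The horizon-crossing moment can be solved for directly from the two growth laws just stated: the physical wavelength stretches as t to the one-half while the Hubble distance, about 2ct in the radiation era, grows as t. Setting the two equal gives the crossing time. A small sketch under those assumptions:

```python
C = 3.0e8  # speed of light, m/s

def horizon_crossing_time(lam0, t0):
    """Time at which a perturbation of physical wavelength lam0 (measured at
    time t0, radiation era) is overtaken by the Hubble distance ~ 2ct.
    The wavelength stretches as a ~ t^(1/2): lam(t) = lam0 * sqrt(t/t0).
    Setting lam(t) = 2ct gives t = lam0^2 / (4 c^2 t0)."""
    return lam0 ** 2 / (4.0 * C ** 2 * t0)

# A wavelength 10x the Hubble distance at t0 = 1 s crosses at t = 100 s:
print(horizon_crossing_time(10 * (2 * C * 1.0), 1.0))
```

So a mode twice as long in wavelength stays outside the Hubble distance four times longer, which is why, as described below, shorter wavelength perturbations freeze out earlier and suffer more suppression.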
Okay, so the power spectrum is altered during the radiation-dominated era, because once the wavelength lambda comes inside the Hubble distance, or once the Hubble distance goes outside lambda, the perturbations freeze in amplitude and you get no more growth. So, here on the left is the power spectrum; the units are arbitrary in this case. The power law that you get coming out of inflation, k to the 0.97, appears as a straight line on this log-log plot. The dashed line, labeled HDM, is what you get if the dark matter is hot, if it doesn't become non-relativistic until relatively late in the history of the universe. In that case, you note that, wow, all the short wavelength, large wave number power is just wiped out by the free streaming of the hot dark matter particles when they were still relativistic. So, hot dark matter has this severe suppression of short wavelength power. The cold dark matter spectrum, the solid line there, has a suppression of power on small scales, but not as severe as in the hot dark matter scenario. And as you go to smaller and smaller length scales, larger and larger co-moving wave number, you get more severe suppression, since these short wavelength perturbations come inside the Hubble distance earlier and thus freeze in amplitude earlier on. As you go to longer and longer wavelengths, smaller and smaller wave numbers k, you come inside the horizon later and later and thus get smaller and smaller degrees of suppression. So, notice that the cold dark matter power spectrum has a preferred scale. The maximum occurs at the wave number corresponding to the Hubble distance at the time of radiation-matter equality, but it's not strongly preferred. It's a fairly gradual turnover, so you have a lot of power on a wide range of scales if you have a cold dark matter power spectrum.
To see what this means physically, it might be more useful to look at delta M over M, the fractional mass fluctuations, as a function of mass scale. So, a reminder: a big galaxy like our own has a mass of 10 to the 12 solar masses, a big cluster of galaxies like Coma about 10 to the 15, and a big fat supercluster, the biggest bound structures, maybe 10 to the 17 solar masses. That gives you a sense of scale. So, the initial spectrum, remember, is a power law, going as about M to the minus two-thirds, so steeply rising towards smaller scales. If you have hot dark matter, in this case I've adjusted the mass of the hot dark matter particles so that I've wiped out all fluctuations smaller than superclusters. So, hot dark matter leads to what astronomers call a top-down theory of structure formation; that is, the first things that collapse are things the size of superclusters, which then fragment to form smaller objects, clusters and galaxies. Cold dark matter, you'll notice, has delta M over M that increases towards smaller scales. And so, in a cold dark matter universe, you have things collapsing gravitationally first on the smallest scales. So, you get galaxies first, which then assemble into clusters and then assemble into superclusters. This is called the bottom-up theory of structure formation. So, at the time of radiation-matter equality, you have this particular cold dark matter power spectrum. The density perturbations are stamped upon the dark matter and, well, what happens next? Well, during the matter-dominated era, the amplitude of those fluctuations delta in the distribution of cold dark matter grows with time by gravitational instability. The idea is usually summed up as the rich get richer, the poor get poorer. The dense regions become denser, the underdense regions become more underdense. I'm going to avert my eyes from fluctuations in the baryon density, because that gets more complicated.
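As a quick check of that quoted slope, here is the standard bookkeeping, written as a sketch rather than taken from the lecture: for a power spectrum P(k) proportional to k to the n, the fractional mass fluctuation scales as delta M over M proportional to M to the minus (n + 3)/6, and M scales as k to the minus 3, so the Planck slope n of about 0.97 does indeed give roughly M to the minus two-thirds.

```python
# For P(k) ∝ k^n, the r.m.s. fluctuation per log interval is δ ∝ k^((n+3)/2),
# and mass scales as M ∝ k^-3, so δM/M ∝ M^(-(n+3)/6).
n = 0.97
exponent = -(n + 3.0) / 6.0
print(round(exponent, 3))  # -0.662, close to the quoted -2/3
```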
And if it's complicated, I don't want to deal with it; baryons can interact with photons, and that makes their evolution more complicated. So, I don't like complications. I'm going to deal with the simplest case of gravitational instability I can think of: very low amplitude density fluctuations, magnitude of delta much smaller than one, in completely pressureless cold dark matter. So, the Jeans length is zero, there's no pressure support on small scales, and there's no interaction with photons at all, except gravitationally. And skipping over a lot of math, if you apply linear perturbation theory to the acceleration equation, saying, okay, maybe the density isn't quite homogeneous, you get a differential equation that tells you how the over-density or under-density delta evolves with time. And this equation is generic. It applies even when the universe is not matter dominated. You just have to remember to insert the correct value of the Hubble parameter for the universe you're actually looking at. Okay, sanity check. Let's assume that the universe is static. The Hubble parameter is zero, nobody's going anywhere. And the mean density, rho bar, is constant with time. You get: the second derivative of delta with respect to time is a positive constant times delta. And that, of course, tells you exponential growth. Well, there's also exponential decay. The values of A sub one and A sub two are set by initial conditions. But after a few dynamical times, where the dynamical time goes as one over the square root of G times the mean density, all you have left is the exponentially growing mode. So this is a familiar result: in a static universe, once you have small density fluctuations, they grow exponentially, with the time scale proportional to one over the square root of G times rho. But we don't live in a static universe; we live in an expanding universe.
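That sanity check is easy to run numerically. This is a sketch of my own, in units where 4 pi G rho bar equals 1, so time is measured in dynamical times and the equation reduces to delta'' = delta: starting from a fifty-fifty mix of growing and decaying modes, after ten dynamical times only the exponentially growing mode is left.

```python
import math

# Static universe (H = 0): d²δ/dt² = 4πG ρ̄ δ.  Normalize 4πG ρ̄ = 1,
# so time is in dynamical times and the equation is simply δ'' = δ.
def integrate(delta0, ddelta0, t_end, dt=1e-3):
    d, v = delta0, ddelta0
    for _ in range(int(round(t_end / dt))):
        v += d * dt   # semi-implicit Euler update for δ'' = δ
        d += v * dt
    return d

# Initial conditions δ = 1, δ' = 0 are an equal mix of growing and
# decaying modes; the exact solution is cosh(t).
d = integrate(1.0, 0.0, 10.0)
# After ~10 dynamical times only the growing mode survives: δ ≈ e^t / 2.
print(d / (math.exp(10.0) / 2.0))  # close to 1
```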
So how is the expansion of the universe, how is the introduction of this Hubble parameter term, going to affect the evolution of density fluctuations? Well, let's look at the equation again. There's the term involving the Hubble parameter, H of t, the Hubble expansion term, let's call it. Notice that if you have an expanding universe, this is going to drive down the density of matter by dilution. So the Hubble expansion term is going to decrease density. And the time scale is just the Hubble time, which goes as one over the square root of the entire energy density of the universe: matter, radiation, cosmological constant, whatever you've got in your universe. The equation has, on the right-hand side, a term involving self-gravity. If the region is over-dense, its self-gravity is going to make it denser still: the rich get richer. And so self-gravity will increase the density of an over-dense region. Here the relevant time scale is the dynamical time, which goes as one over the square root of G times the matter density, just the matter density. So if you are in an epoch where matter is not dominant, in the radiation-dominated era, for instance, the mass density provides only a very tiny fraction of the total energy density. And therefore the dynamical time, which goes as one over the square root of the matter density, will be much longer than the Hubble time, which goes as one over the square root of the density of everything. So if matter is not the dominant component of the universe, the self-gravity term is going to be tiny compared to the Hubble expansion term. And any perturbations you have grow only extremely slowly with time. If you put in the Hubble parameter for a radiation-dominated universe, for instance, you find out that perturbations in the dark matter density grow only logarithmically with time, so really excruciatingly slow growth.
So this is why density perturbations inside the Hubble distance, when the universe is radiation dominated, can be treated as frozen out. If matter is dominant, well, in that case, the energy density of the universe is provided almost entirely by the mass density. The dynamical time and the Hubble time will be of comparable size, and so expansion drives down the density, self-gravity drives up the density, they're battling with each other, they're of comparable strength, and who's going to win? Well, in fact, self-gravity wins out, but you don't get exponential growth in this case, you get power-law growth. So you do get growth in density perturbations, but, one, if the perturbations are smaller than the Hubble distance, they grow only during the matter-dominated phase of the universe. And two, even if the universe is matter dominated, you don't get the exponential growth that you get in a static universe, you get only power-law growth. So to see how this works, let's go to a Euclidean, totally matter-dominated universe, so that omega in matter is equal to one, and for a flat matter-dominated universe, the Hubble parameter is two-thirds times one over the age of the universe. So you can plug all of this information back into our master equation for the growth of density perturbations smaller than the Hubble distance, and that looks like an integrable equation, doesn't it? In fact, you know, it kind of smells like a power-law solution, and in fact, yes, you can verify by substitution that there are two solutions: a growing mode going as t to the two-thirds power, and a decaying mode that goes as one over t. And again, the constants C1 and C2 depend upon your initial conditions, but eventually the growing mode, the mode going as t to the two-thirds, will dominate.
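The verification by substitution takes only a couple of lines. This sketch plugs a trial solution delta = t to the n into the perturbation equation for a flat, matter-dominated universe, delta'' + 2H delta' = 4 pi G rho bar delta, using H = 2/(3t) and 4 pi G rho bar = 2/(3t squared), and checks that the residual vanishes for n = 2/3 and n = -1.

```python
def residual(n, t=2.0):
    """Plug δ = t**n into δ'' + (4/(3t)) δ' - (2/(3t²)) δ, the growth
    equation for a flat matter-dominated universe, where H = 2/(3t)
    and 4πG ρ̄ = 2/(3t²)."""
    d   = t ** n                     # δ
    dp  = n * t ** (n - 1)           # δ'
    dpp = n * (n - 1) * t ** (n - 2) # δ''
    return dpp + (4.0 / (3.0 * t)) * dp - (2.0 / (3.0 * t * t)) * d

print(abs(residual(2.0 / 3.0)))  # ~0: growing mode, δ ∝ t^(2/3)
print(abs(residual(-1.0)))       # ~0: decaying mode, δ ∝ 1/t
```

Trying any other exponent, say residual(0.5), gives a clearly nonzero value, which is the "verify by substitution" step in miniature.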
So once the growing mode dominates, the over-density delta grows as t to the two-thirds, but that's just the time dependence of the scale factor in a matter-dominated universe, and the scale factor goes as one over one plus z. This has interesting implications for the universe we live in, because we're in a universe where the matter-dominated era was of limited duration. It lasted only from the time of radiation-matter equality, at a redshift of about 3400, to the time of matter-lambda equality, at a very small redshift of about 0.3. So density perturbations smaller than the Hubble distance can grow only during the matter-dominated phase. The growth in delta from the end of the radiation-dominated era to the beginning of the lambda-dominated era is just the ratio of one plus z at the beginning and end of the era: radiation-matter equality, again at a redshift of around 3440, and matter-lambda equality very recently, at a redshift of 0.31. So this is the maximum possible growth. You make that division, and it's around 2600. So gravitational instability makes perturbations grow in amplitude, but there's a limit. Since the matter-dominated era was of limited duration, you can only get growth in amplitude by a factor of around 2600. So if you pick out a region of space that at the time of radiation-matter equality had delta, the dimensionless density perturbation, smaller in amplitude than 1 over 2600, or about 4 times 10 to the minus 4, it's never going to grow to delta of 1. I was asked earlier about the limits of linear perturbation theory, and taking it to delta of 1 is pushing it beyond the limits. Once you go to delta approaching 1, you really want to do some numerical simulations. I'm just using delta approximately 1 to get us some rough ideas of what's going on.
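The arithmetic in that division is simple enough to write down, using the redshifts quoted above:

```python
# Linear growth during the matter-dominated era: δ ∝ a ∝ 1/(1+z),
# so the total growth is the ratio of (1+z) at the era's two endpoints.
z_rm = 3440   # radiation-matter equality
z_mL = 0.31   # matter-lambda equality
growth = (1 + z_rm) / (1 + z_mL)
seed_amplitude = 1.0 / growth  # smallest seed that can reach δ ≈ 1 today
print(round(growth))           # 2627, i.e. "around 2600"
print(f"{seed_amplitude:.1e}") # 3.8e-04, "about 4 times 10 to the minus 4"
```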
Once delta approaches 1, the perturbation, which initially was expanding just a little bit more slowly than the universe, stops expanding at delta of 1-ish, and then re-collapses and virializes, making a nice virialized dark halo in virial equilibrium. So, you'll notice I write delta twiddle 1. In fact, as you approach delta of 1, linear perturbation theory breaks down. However, it's a useful rule of thumb: once you get to delta of 1-ish, you stop expanding along with the Hubble flow, and then re-collapse to form a bound structure. So here we are, entering the lambda-dominated phase, and so the growth of density perturbations in the matter is shutting down. The biggest objects we see today are the superclusters in the process of collapse, like the Hydra-Centaurus supercluster over here, towards which we are being gravitationally accelerated. And in the whole lambda-CDM framework, these are the biggest structures that will ever be able to gravitationally collapse. Okay, a few words about baryons. On very small scales, the universe can be extremely inhomogeneous. For instance, consider a sphere of diameter 3 meters centered upon your belly button, your navel right there. It contains you and some air and some furniture and the people next to you. In round numbers, it has an over-density of about 10 to the 28: 28 orders of magnitude denser than the universe as a whole. Now, upper right, consider a sphere whose diameter is 3 astronomical units centered on you. That's a sphere large enough to contain the Sun, a nice massive star, and so the over-density of this sphere is about 10 to the 22. Really very over-dense. If you take a sphere whose diameter is 3 megaparsecs centered on your belly button, it contains both our galaxy and our neighboring galaxy, Andromeda, here known by its catalog number, M31. And in this big sphere, you're still over-dense by a factor of about 10 relative to the universe as a whole.
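Those over-densities are easy to reproduce at the order-of-magnitude level. In this sketch the masses are my own rough guesses rather than numbers from the lecture: roughly 300 kg for the person plus air, furniture and neighbors in the 3 meter sphere, and 2 times 10 to the 30 kg for the Sun.

```python
import math

# Mean matter density today: Ω_m ≈ 0.31 of the critical density.
rho_crit = 8.6e-27           # kg/m³, for H0 ≈ 68 km/s/Mpc
rho_mean = 0.31 * rho_crit

def overdensity(mass_kg, radius_m):
    """Density of a uniform sphere relative to the cosmic mean."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return (mass_kg / volume) / rho_mean

AU = 1.496e11  # meters

# Sphere of diameter 3 m: you, air, furniture, neighbors (~300 kg, a guess).
delta_navel = overdensity(300.0, 1.5)
# Sphere of diameter 3 AU containing the Sun (2e30 kg).
delta_sun = overdensity(2e30, 1.5 * AU)
print(f"{delta_navel:.0e}")  # of order 10^28
print(f"{delta_sun:.0e}")    # of order 10^22
```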
It's only when you get to scales of hundreds of megaparsecs that you average out to about the mean matter density of the universe. The secret to reaching these very high densities, 10 to the 22, 10 to the 28, is that baryonic matter, the stuff of which you are made and the Sun is made, can reach high densities because it radiates away excess thermal energy, so that structures are no longer pressure supported and can collapse down to very high densities. So baryonic matter can reach extraordinarily high densities. That's the first thing to know. Second thing to know: let's look at the pie chart showing how baryonic matter is distributed today. Yes, I know, it's another pie chart. Here we're taking that 4.8% of the universe that's made of baryons today and breaking it up into component slices. So, stars, etc. That's stars plus substellar objects like brown dwarfs and planets, plus stellar remnants like white dwarfs and neutron stars. Only 7% of the total baryon density. Another 1% is in interstellar gas between the stars within galaxies. Interstellar dust is too low in mass to be included in this pie chart. Circumgalactic gas surrounds a galaxy, bound to it but not in the region the stars occupy: another few percent. Intracluster gas between galaxies in clusters like the Coma cluster: another 4% of the total. So about 15% of baryons are in gravitationally bound structures, galaxies and clusters of galaxies. And that's it, only 15%. Where's the rest of the baryons? Well, there's diffuse intergalactic gas. This is in the regions of space where the density is below average, the gas that occupies those nearly empty intergalactic voids in the large scale structure of the universe. The temperature of the diffuse intergalactic gas is typically less than 10 to the 5 Kelvin, but the gas is highly ionized. The remaining big wedge of the pie is what's called the warm-hot intergalactic medium. It's somewhat overdense, about 30 times overdense relative to the average baryon density.
It's quite hot, typically around 10 to the 6 Kelvin. This is the gas that you find along the filaments of the cosmic web. It's hot because it's shock heated, and because it's hot, it's ionized. So, the first lesson we take from this pie chart: although baryons can make very dense objects, most of the baryonic matter in the universe is not in very dense objects. It's in low density, diffuse, hot, ionized intergalactic gas. So making stars is not something that every baryon does. The second lesson is, oh crap, the baryons are ionized again. Notice that these two big wedges of intergalactic gas are highly ionized. Most of the interstellar, circumgalactic and intracluster gas is ionized too. So I went to all this trouble to describe recombination, when the universe was about a quarter of a million years old, but whoa, wait, it's all ionized again. So how did that happen? The first thing to ask is, when did that happen? When did reionization occur? Recombination occurred at a time of a quarter of a million years. Recombination was followed by last scattering, and in fact, by looking at the last scattering surface, we can deduce the time of reionization. Well, I've shown you this cartoon before. You're in this transparent post-recombination universe. Well, it's not perfectly transparent; it's slightly translucent. Because of that hot ionized intergalactic gas, there are free electrons present in the nearby universe. And so CMB photons have to travel through this foreground screen of free electrons that can scatter them. So when we look at the last scattering surface, we're looking through something that's slightly translucent. We're looking through a windshield that's a little bit dusty, a little bit smudged. And so we lose a little bit of the fine detail in our image of the cosmic microwave background. Because of the ionized, translucent foreground material, the ionized intergalactic medium, our view of the last scattering surface is slightly blurred.
Put a translucent screen into this picture, and you lose a little bit of the fine detail. So from the slight smearing of the CMB temperature anisotropies, you can deduce the optical depth due to the ionized gas between us and the last scattering surface. And the most recent analysis of the Planck results, which came out on the arXiv just last month, in May, comes up with an optical depth tau of 0.055 plus or minus 0.009. When the optical depth tau is much less than 1, it's just telling you the probability that a photon will scatter. So about one in 18 cosmic microwave background photons has scattered from a free electron in intergalactic space on its way from the last scattering surface to us. Notice that if tau were much greater than 1, then CMB photons would scatter multiple times before reaching us, and they would lose all information about where they came from, completely smearing out our view of the cosmic microwave background. However, since tau is small, it's just a slight smearing. So tau, the optical depth from the free electrons in intergalactic space, is the observable. How do you get from that to the time of reionization? Fortunately, the physics is simple. The rate at which a photon, here the example is a CMB photon, scatters from free electrons is just gamma equals the number density of scatterers, in this case the number density of free electrons, times the cross section, which is just the Thomson cross section, multiplied by the relative speed of the photon and the scatterer, which of course is the speed of light c. So if the baryonic gas in the universe is reionized starting at some time, let's call it t sub R, R for reionization, then you can just compute the optical depth: you integrate the scattering rate gamma over time, from the beginning of reionization until now. So tau is telling you how the number density of free electrons varied with time from the time of reionization until now.
Now, to finish this calculation, I'm going to make some dramatic simplifying assumptions. Assumption one: it's all hydrogen. Same assumption I made when discussing recombination; it just makes things simpler. And I'm going to assume instant, total, homogeneous reionization at the time t sub R. Unlike recombination, which was a gradual process for which we could use the Saha equation, reionization was a rapid event, and I'm just going to approximate it as being instantaneous rather than merely rapid. So with these assumptions, once the universe reionizes at a time t sub R, the number density of free electrons is equal to the number density of free protons. That's charge neutrality plus the assumption that it's all hydrogen. And if it's all hydrogen, the number density of protons is the number density of baryons, which falls off as 1 over the cube of the scale factor. So then the optical depth is reduced to gamma sub zero, the scattering rate now, times an integral of 1 over the cube of the scale factor from the time of reionization until now, with the assumption that all baryons are hydrogen, uniformly distributed and totally ionized. It's an approximation, but it gives us a nice order of magnitude estimate to play with. Since the baryon density today is well known from the CMB and the predictions of Big Bang nucleosynthesis, we can compute what the scattering rate is today. I'm writing it in these rather unusual units of inverse gigayears to drive home the point that this is a low scattering rate. It's only about 0.2% of the Hubble constant. So it is unlikely today for a CMB photon to scatter from a free electron in the intergalactic gas. And there's an analytic solution. Reionization occurred during the matter-dominated epoch, and so we have to integrate over the matter-dominated and lambda-dominated eras.
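Here is a sketch of that plug-and-turn-crank step in Python. The electron density today, about 0.25 per cubic meter, is my own round number derived from the baryon density under the all-hydrogen, fully ionized assumption; the other parameters are the benchmark values quoted in the lecture. Rewriting the time integral as an integral over redshift gives tau = (c sigma_T n_e0 / H0) times the integral of (1+z) squared over E(z), with E(z) = sqrt(Omega_m (1+z) cubed + Omega_lambda).

```python
import numpy as np

H0 = 68.0 * 1000.0 / 3.086e22   # Hubble constant, converted to 1/s
Om, OL = 0.31, 0.69             # benchmark density parameters
sigma_T = 6.652e-29             # Thomson cross section, m²
c = 2.998e8                     # speed of light, m/s
n_e0 = 0.25                     # free electrons per m³ today (assumption)

def tau(z_r, n=20000):
    """Optical depth for instant, total, homogeneous reionization at z_r."""
    z = np.linspace(0.0, z_r, n)
    f = (1.0 + z) ** 2 / np.sqrt(Om * (1.0 + z) ** 3 + OL)
    dz = z[1] - z[0]
    integral = np.sum(0.5 * (f[:-1] + f[1:])) * dz  # trapezoid rule
    return (c * sigma_T * n_e0 / H0) * integral

# Bisect for the redshift that reproduces the Planck value tau = 0.055.
lo, hi = 2.0, 15.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if tau(mid) < 0.055 else (lo, mid)
print(round(mid, 1))  # close to 7, as quoted in the lecture
```

Swapping the integrand for 1 over (1+z) times E(z) and integrating from that redshift upward would give the corresponding age of the universe.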
So our result depends upon omega-matter and omega-lambda, and on the relative values of the scattering rate, gamma sub zero, and the expansion rate, H sub zero. But I've just pointed out that gamma sub zero is tiny. Well, not excruciatingly tiny, but it's small compared to the Hubble constant, H0. And so, using our favorite parameters, a Hubble constant of 68, omega-lambda of 0.69, omega-matter of 0.31, and the most recent value for the optical depth, about 0.055, plug, turn crank, and you get out a redshift for reionization, which turns out to be about 7. And again, in our benchmark model, we can convert that into an age for the universe of about two-thirds of a gigayear. Which I find very interesting, because the era of neutrality, the length of time when baryons were primarily in the form of neutral atoms, was just a brief interlude in the history of the universe. The time between recombination and reionization is only about 5% of the total age of the universe. Okay, I want to end with an astronomical question. What happened around a redshift of 7 that could have reionized the universe? Any astronomers lurking in the audience? A little hint: I showed you the highest redshift known galaxy, at a redshift of 8.68. So a redshift of around 7 was when galaxies were starting to form. And galaxies contain two sources of ionizing photons, ultraviolet photons with an energy greater than 13.6 electron volts. First, hot stars, the O stars of the spectral sequence. If there are any shy astronomers lurking in the back, I'm sure you know what an O star is: these very hot, very luminous, short-lived stars produce ultraviolet photons. So too do active galactic nuclei, central supermassive black holes that are accreting gaseous baryonic matter and emitting light. They emit light at a wide variety of wavelengths, including ultraviolet. So intergalactic gas at the time of reionization was photo-ionized. So, just as before recombination, you had photo-ionized baryonic matter.
So too, after reionization, you have photo-ionized baryonic matter. However, now, after reionization, the photo-ionization comes not from the cosmic background radiation, but rather from individual sources. Now, look at the number density of quasars, that is to say, luminous active galactic nuclei, as a function of redshift, from z equals 8 to z equals 0. So the arrow of time goes from right to left here. The best estimates are that it goes up rapidly and down rapidly. There was a relatively short-lived period, from redshifts of 3 to 2, when active galactic nuclei were really, really active. So probably there were too few AGN at a redshift greater than 7 to do the job of reionization, but you'll notice at a redshift of 7 there's this big question mark. It's really, really hard to see quasars at redshifts greater than 6. So that's an unknown. But it's a known unknown: people are looking for quasars and galaxies at higher and higher redshifts to help pin down what's happening at the high redshift end. Now look at the star formation rate, again from a redshift of 8 until now. Because the hot, luminous, massive stars that produce ionizing photons are so short-lived, basically, if you look at where stars are forming, that's where these short-lived O stars are. So the star formation rate also goes up and down during this epoch, but note that it's not so dramatic a shift; each of these plots spans three orders of magnitude vertically. So probably, maybe, there were enough stars around a redshift of 7 or 8 to reionize the universe. But again, our knowledge of what was happening at such very high redshifts is still tentative. So, reionization: much work remains to be done. And in particular, the assumption of instant, total, homogeneous reionization is a huge fraud. Presumably, you started to get bubbles of reionized gas around the first galaxies to form, which eventually merged to form a continuous ionized intergalactic medium. And so, reionization: much work remains to be done.
Cosmology in general: much work remains to be done. And I encourage you, in your future cosmological careers, to think about the many, many cosmological problems that will be mentioned in this two-week cosmology course. However, the one piece of advice that I want to leave you with is: don't go into cosmology for the money. It just doesn't work out that way. But thank you very much.