Last time we talked about the evidence for dark matter and what we can learn about it from looking at the present-day universe. Today, I want to jump back in the universe's history and talk about what we currently believe we know about dark matter from early universe cosmology. So the goal, by the end of today's lecture, is that you can describe what early universe cosmology tells us about what properties dark matter has to have. I'm going to include in this the formation of structure, so eventually this will connect back to what we talked about yesterday. Now, I am not going to go through and derive all these constraints in detail. By the end, you should at least know broadly how they work and where to find information on them if you need the more detailed analysis. And if we have time at the end, I also want to go through the calculation of the dark matter relic abundance, to show you one way to get the correct dark matter abundance through thermal freeze-out from the standard model.

OK, so I want to begin by just doing a review of the aspects of cosmology that we're going to need for the rest of the lecture. For those of you who've already taken a cosmology class and don't need the refresher, you can relax for the next little bit. I'll also say this is not going to be an in-depth discussion of cosmology; it's really going to be focused on understanding these constraints and what you need to get to them. And, as yesterday, keep the questions coming. They're very welcome. I may not always know the answer, but if I don't know the answer immediately, I can look it up later or point you to a reference.

OK, so we live in an expanding universe. This expanding universe is described by solving the Einstein equations of general relativity. We can make the approximation that our universe is approximately spatially homogeneous and isotropic on large scales, and that its matter and energy content can be reasonably well approximated as a linear combination of several perfect fluids. If we make those approximations, then we can write down an exact solution of the Einstein equations of general relativity. I'm not going to go through that calculation, but what you end up with is an expression for the metric of spacetime, which describes the spacetime distance between any two points. We call it the Friedmann-Lemaitre-Robertson-Walker metric, the FLRW metric, which has the following form.

OK. So what's going on in this metric? This a(t) is called the scale factor, or expansion factor; it describes how the universe is expanding over time. This d-omega-squared is just the regular angular line element. This k parameter can be 0 or plus 1 or minus 1. In principle, these all give rise to solutions that have the properties I just described: they're spatially homogeneous and isotropic. We refer to k equals 0 as being spatially flat, and k equals plus 1 and minus 1 as positively and negatively curved spaces. These are the analogs of a sphere versus a hyperboloid, except in three dimensions instead of being two-dimensional surfaces. We'll leave the k term in the equations, at least for a while, just so that you can see what happens when it's there. But to a very good approximation, our universe in the present day appears to be spatially flat. So once we start actually doing calculations, I'm pretty much always going to take k to be 0. But in principle, it didn't need to be that way.
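For reference, the standard form of the FLRW metric being described here, in units with c = 1, is:

$$ds^2 = -dt^2 + a^2(t)\left[\frac{dr^2}{1-kr^2} + r^2\, d\Omega^2\right],$$

with a(t) the scale factor, d-Omega-squared the angular line element, and k = 0, +1, or -1 for flat, positively curved, or negatively curved spatial sections.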
The fact that our universe is as flat as it is is one of the pieces of evidence for inflation in the early universe. So if we set k to 0, this is just the metric for flat space, written in polar coordinates. The difference from ordinary flat spacetime with no gravity is just that there's this expansion factor, this scale factor: two points at fixed coordinate (comoving) distance from each other have a physical distance between them that grows over time. It could in principle contract over time instead, but our universe is expanding.

Now, the evolution of this scale factor is governed by the Friedmann equation. To write this equation, I'm going to define a parameter, the Hubble parameter, which is just the derivative of the log of the scale factor with respect to time. The Friedmann equation comes from applying the Einstein equations of general relativity to this metric, and it's a nice simple equation: it relates the evolution of the scale factor to the energy density content of the universe and to its curvature. This rho here is the total energy density. We were using rho last time to mean mass density; for non-relativistic dark matter, mass density is essentially the same as energy density. For radiation, obviously, there's no mass, but there's still an energy density.

So I said that this solution comes from assuming that the matter and energy content of the universe is a perfect fluid. A perfect fluid is described by a simple equation of state, a simple relationship between its energy density and its pressure. Different kinds of perfect fluids are characterized by different values of this parameter w. And just from energy conservation in an expanding universe, we can write down a relationship between the way that the energy density scales with the scale factor and this equation-of-state parameter w. If we had more time, I would take you through the derivations of these quantities. I'm not going to do that today, so you can just take my word for it; but if you don't know how to get these, I can point you to a reference.

So let's think about the different kinds of perfect fluids that we may need to worry about appearing in our universe. A significant one is radiation. Our universe is filled with a bath of photons left over from the Big Bang, from the very earliest times. Today we call that the cosmic microwave background radiation, but that bath of photons has been around for the universe's whole history. Radiation has a w of one third, so the energy density is three times the pressure. You can get this just from a kinetic picture: imagine a box of photons, figure out the pressure exerted by the momentum of those photons on the walls of the box, and compare it to the total energy density. Putting that into this equation tells you that the energy density of radiation scales as the inverse fourth power of the scale factor. We'll hold off on the interpretation of that for a moment until we talk about pressureless matter. "Radiation" here applies to any particles that are relativistic in the frame in which this metric is defined; the "matter" label will correspondingly apply to non-relativistic particles. Pressureless matter has no pressure, or at least its pressure is parametrically suppressed by powers of its small velocity relative to its energy density, so p equals 0 to first approximation. This tells us that the energy density of matter behaves as the scale factor to the minus 3. So let's just understand where these scalings come from.
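For reference, the quantities just described take the standard textbook form, quoted here rather than derived:

$$H \equiv \frac{\dot a}{a}, \qquad H^2 = \frac{8\pi G}{3}\,\rho \;-\; \frac{k}{a^2}, \qquad p = w\,\rho \;\;\Rightarrow\;\; \rho \propto a^{-3(1+w)},$$

so radiation (w = 1/3) scales as a to the minus 4, matter (w = 0) as a to the minus 3, and dark energy (w = -1) stays constant.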
The matter scaling is just saying that if I have a bunch of matter particles in an expanding universe, all at rest, and the volume of the universe expands by a factor of 2, the number of matter particles isn't changing, so the energy density goes down by that factor of the volume. That's what this a to the minus 3 is: linear scales go like a, volumes go like a cubed, so the density goes like 1 over a cubed. For radiation, we have the same three powers of a that come from the dilution of the volume, but we also have one extra power of a. Does anyone know where the extra power of a comes from? Yep, good, I heard that from a few different people. The other power of a comes from the fact that, as the universe expands, the wavelength of light and other forms of radiation gets stretched out, and that dilutes the energy by an extra factor of a on top of the volume dilution. This stretching of the wavelength proportionally to a, so that the energy scales as 1 over a, is called cosmological redshift.

The third perfect fluid that we're going to have to think about, which is considerably more exotic but does appear observationally to exist in our universe, is dark energy, which has an equation-of-state parameter of minus 1. I'll say a little bit more on that in a second. This means that it actually has a negative pressure, which gives it the rather weird behavior that the energy density doesn't scale with a at all. So people sometimes talk about dark energy as the energy contained in free space: if I double the volume of space, I double the amount of dark energy, such that the energy density just remains constant. The actual constraint on this number is fairly recent; from the Alam et al. paper in 2017, the measured w is minus 1.01 plus or minus 0.04. So there appears to be a component in our universe that behaves like this simple dark energy picture to a pretty good approximation. It's possible that w might not be exactly minus 1, or that w might be evolving with redshift. People look for those, but at the moment there's no particularly strong evidence for an evolution away from simple dark energy. Although, as we discussed in the discussion session yesterday, while the cosmological picture that I'm going to describe seems to work very well, there are some discrepancies at the sort of 3-to-4-sigma level, which could be telling us something really important, or could be telling us that someone has misestimated their systematic error bars.

OK, so these are the components that we believe make up the energy density of our universe. Just to introduce a little bit more notation: let's define what we call the critical density, which is just what the density would be in the case that k equals 0. For our universe, this critical density is going to be pretty close to the truth. So we define this critical density, and we also break down the overall density into components that correspond to matter, radiation, and dark energy, which I'll indicate by this lambda symbol. Here, 0 subscripts indicate the present day. And I'll introduce one other piece of notation. As we said before, the wavelength of light or radiation gets stretched out by the cosmological redshift. We can define a redshift factor, 1 plus z, which just describes the size of the universe today divided by the size of the universe at some earlier time.
So for example, at redshift 999, this factor is 1,000. That means that the linear scales of the universe today are 1,000 times larger than they were at that corresponding time. That redshift of 1,000 is an important number for reasons that we'll get to in a moment; it corresponds to a t of a few hundred thousand years, where t is measured from the start of the universe, not backwards from the present day.

OK, so we've written our rho as a linear combination of these components with their different redshift scalings. These 0 subscripts mean the density today for matter and radiation; for dark energy we think the density is the same at all times, so we don't really need to subscript it, but sure, we can anyway. Then let's also define a parameter that we'll call omega, where x here can be radiation or matter or dark energy. The way omega is defined is the present-day density in a particular component normalized to the critical density today. These are all just definitions, but it's a convenient way to express, dimensionlessly, what fractions of the universe are in the various components in the present day. And we can define a similar component for the curvature. Having defined these quantities, we can then rewrite the Hubble equation in the following form. So, having measured these ratios of the energy density today to the critical density for each of these components, we can infer what this Hubble parameter describing the scale factor should have been at any time in the universe's past history.

And Planck and the other experiments measuring the cosmic microwave background, which I talked a little bit about in the discussion session yesterday and will talk a bit more about in this lecture, have provided pretty great measurements of these parameters. These numbers are from the Planck 2018 cosmology paper; if you look at different experiments, you'll see the numbers shift around a little bit, but what I give you will be fairly representative. The radiation energy density we just know pretty well, because it's a black body and we've measured the temperature of that black body very precisely. It's a pretty sub-dominant component in the present day: only about one part in 10 to the 4 of the total energy density. The matter component (this is using only Planck data, only CMB data, including the lensing likelihood; for those of you who are experts and want the details, these numbers shift around just a little bit depending on what data set you use, but it's pretty consistent) is about 30% of the critical density today, and this includes both dark matter and baryonic matter. The dark energy density makes up the rest. The curvature component we think is zero; this is the limit in the Planck 2018 paper. So that's our universe. This matter component is mostly dark matter: the baryonic component is about 0.049, so about 5% of the total energy density in the universe, and the other 26% or so of this matter portion we believe is dark matter. I'll talk a little bit more later about how these measurements are actually done. But this is our Lambda-CDM picture of how the universe behaves and how the scale factor evolves over time.
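For reference, the definitions just introduced and the rewritten Friedmann (Hubble) equation are the standard ones:

$$\rho_c \equiv \frac{3H_0^2}{8\pi G}, \qquad \Omega_X \equiv \frac{\rho_{X,0}}{\rho_c}, \qquad 1+z \equiv \frac{a_0}{a},$$

$$H^2(z) = H_0^2\left[\Omega_r(1+z)^4 + \Omega_m(1+z)^3 + \Omega_k(1+z)^2 + \Omega_\Lambda\right].$$

And just to make the numbers concrete, here is a minimal sketch of how you might use this; the parameter values below are round, Planck-like numbers chosen purely for illustration, not the official fit values:

```python
import numpy as np

H0      = 67.0                      # km/s/Mpc, roughly the Planck value
Omega_r = 9.0e-5                    # photons plus relativistic neutrinos (illustrative)
Omega_m = 0.31                      # dark plus baryonic matter (illustrative)
Omega_L = 1.0 - Omega_m - Omega_r   # dark energy, assuming a flat universe (Omega_k = 0)

def hubble(z):
    """Expansion rate H(z) from the Friedmann equation written above."""
    return H0 * np.sqrt(Omega_r*(1+z)**4 + Omega_m*(1+z)**3 + Omega_L)

# Crossover redshifts between the different eras of domination:
z_rad_matter = Omega_m / Omega_r - 1            # matter-radiation equality, z of a few thousand
z_matter_DE  = (Omega_L / Omega_m)**(1/3) - 1   # matter-dark-energy equality, z of about 0.3
print(z_rad_matter, z_matter_DE, hubble(1100) / H0)
```

With inputs like these, radiation dominates above a redshift of a few thousand, matter in between, and dark energy below a redshift of about 0.3, which is the sequence of eras described in a moment.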
OK. So we've already talked about cosmological redshift: light that's emitted at redshift 1,000, at z of 1,000, has its wavelength increased by a factor of 1,001 by the time it gets to us. I'll normally mean z when I talk about redshift, so when I say redshift 1,000, I mean z of 1,000; if there's any ambiguity, let me know. The other concept that's important is comoving volume. When I talk about a comoving volume, I mean a volume that expands along with the universe. The physical volume corresponding to a given comoving volume scales as the scale factor cubed, as a cubed, which is 1 plus z to the minus 3. I'm going to use this concept several times. The comoving volume is convenient because, if you don't have any number-changing processes and you just have some box of matter to begin with, the amount of matter within a comoving volume stays constant.

OK. The other thing that I want to say in this cosmology intro is that from this picture, we can see which components are going to dominate the energy density of the universe, and hence the evolution of the scale factor and the expansion rate, at different times. At sufficiently early times, at sufficiently high redshifts, this 1 plus z to the fourth factor becomes extremely large, and what is a sub-dominant component today will actually dominate the energy density of the universe. As time goes on and z becomes smaller, the radiation density falls more rapidly than the other components, so the next one that kicks in is the matter energy density: we go through a period of matter domination. Now, the next thing that you might think would kick in is the curvature term, since that goes like 1 plus z squared. But since we know that the curvature term's coefficient is extremely tiny, we actually don't expect our universe to have any period of curvature domination at late times. Instead, we switch over from matter domination into the epoch of dark energy domination, which we have relatively recently entered, and which is our cosmology today. Unless there's some other component whose density scales with a negative power of 1 plus z, this is where we expect to live for the foreseeable future. So the radiation term dominates at early times, and the dark energy term dominates late.

OK, I think that's basically all the tools you're going to need from cosmology for what we're going to do over the next couple of weeks. Yeah, question? So these uncertainties are marginalized over the uncertainty in the Hubble parameter. The Hubble parameter as obtained by Planck is about 67 kilometers per second per megaparsec. I can pull up the actual uncertainties in the Planck paper, but these uncertainties are CMB based, so they're tacitly using the value for the Hubble parameter obtained from the CMB. The lower-redshift measurements give results for the Hubble parameter that are closer to 70 kilometers per second per megaparsec, so that's a few percent difference, and these errors are at the couple-of-percent level. So if you inflated your error on the Hubble parameter to take into account that maybe the CMB is underestimating its systematic uncertainties, it would increase the error bars on this a little bit. But it's the value self-consistently obtained from the Planck measurements, which is about 67 kilometers per second per megaparsec. I don't have the number to four significant figures off the top of my head, but it's in the Planck 2018 paper.
But yeah, the way this is obtained is by a multi-parameter fit to the Planck data. And there's a degeneracy, which means that these parameters times h squared are better constrained than these parameters alone. But the error bars on these parameters are meant to take into account at least Planck's uncertainty on the Hubble parameter.

OK, so, cosmology review in hand, let's think about what we can learn from the early universe about the properties of dark matter. I want to start at the earliest times that we can probe observationally and then move forward. The earliest direct observational constraints that we have on the early universe come from the epoch of Big Bang nucleosynthesis, and then sometime later from the epoch of recombination. And these allow us to set some pretty tight constraints, essentially, on how much radiation there was. These are very early times: during BBN, this radiation-dominated term takes over and controls the expansion of the universe. That allows us to constrain light degrees of freedom that were relativistic at that epoch.

So, our earliest observations come from the epoch of Big Bang nucleosynthesis, BBN. BBN starts occurring when the temperature of the universe drops to a point that's comparable to the proton-neutron mass splitting. This is the epoch where helium and the other light elements are first formed, and we can get an observational probe of what's going on here by looking at the present-day abundance of those light elements. This occurs around a temperature of 1 MeV, although it's not an instant process; it continues for some time after this. But at temperatures much higher than this, there weren't a lot of nuclei around.

OK, so we then have a later constraint coming from the epoch of recombination. The first epoch is when you start to form atomic nuclei, the light elements. The second epoch occurs at the somewhat lower temperature where protons and electrons start to combine into hydrogen atoms. The scale that's relevant for this is the binding energy of hydrogen, which is about 13.6 eV. Recombination really kicks off once the temperature of the universe drops to around half an eV; that's the point at which even the tail of the black body distribution of photons is no longer enough to keep the hydrogen ionized. So prior to the first epoch, there are free neutrons around, there are free protons around, and not a lot of nuclei any bigger than that; prior to the second epoch, the universe was basically 100% ionized.

This second epoch is also when the universe goes from ionized to neutral. Photons like to scatter off charged particles, and they do so efficiently; they don't scatter off neutral particles nearly as efficiently. So this point where the universe goes from ionized to neutral also coincides with the universe becoming transparent to low energy photons, and in particular to the photons in this redshifting bath left over from the earliest history of the universe. So this epoch of recombination is also called the epoch of last scattering, because it's when many of the photons in the bath last scattered off anything that's not our telescopes. When we see them today, that's the first time they've encountered any particles since the universe was a few hundred thousand years old.
So those photons that are released to free-stream through the now-transparent universe are what we call the cosmic microwave background radiation. So we have these two epochs: the observable from the first epoch is the present-day abundances of the light nuclei, and the observable from the second epoch is these photons.

Now, both of these epochs set a constraint on the radiation energy density, but at early times. Note that the radiation density at early times is not necessarily just a rescaling of the present-day quantity, because today our omega-radiation term consists primarily of photons, but as you go back into the early universe, first the neutrinos become relativistic, and if you go back to sufficiently early times, there will be other particles that are potentially highly relativistic as well, once the temperature of the universe gets large enough and their energies are blueshifted back up to very high values. So omega-rad at early times could potentially diverge from a naive extrapolation.

So the way that we typically parametrize this: we write the energy density in radiation as the contribution from the photons, which we know should be there, plus the contribution from the neutrino species, which we know should be there. This piece is the contribution from one neutrino species, so we multiply it by three in the standard model, since we know we have three neutrino species. And then extra degrees of freedom are typically parametrized just by adding an extra term to the number of neutrinos. If you have a species that has a very different energy density than the neutrinos, then its effective contribution to the number of neutrino species gets rescaled accordingly, OK? This is really just measuring the total energy density in radiation. We call this quantity N-effective. (I sometimes abbreviate it, but then people don't understand what I mean, so if in any doubt you should spell out N-effective.)

So this is essentially counting the relativistic degrees of freedom, but weighted by their energy density. In particular, the energy density of a radiation species is proportional to its temperature to the fourth power. What that means is that if I have a species that has one degree of freedom but is much, much hotter than the neutrinos or the photons, it will give a disproportionately large contribution to this N-effective value, scaling as T to the fourth, so even a relatively small change in temperature can have a big effect here. But on the other hand, if I have a species that is colder than the neutrinos, say a factor of two colder, then it can be very hard to see in this observable.

So, our current constraint. In the full standard model calculation, this number for the neutrinos is not actually exactly three; the way this neutrino energy density is defined is based on a simplified estimate of the neutrino temperature, and the actual temperature is slightly different from that. So in the standard model, the expected number of effective neutrino degrees of freedom is 3.046. What BBN tells us is that the change in N-effective relative to this number should be less than or equal to about one. I'll give the arXiv reference for this here.
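For reference, the parametrization just described is conventionally written as follows. The second relation uses the standard instantaneous-decoupling neutrino temperature, and the small corrections to that simple estimate are what turn 3 into 3.046:

$$\rho_{\rm rad} = \rho_\gamma + N_{\rm eff}\,\rho_{\nu}\,, \qquad \rho_{\nu} = \frac{7}{8}\left(\frac{4}{11}\right)^{4/3}\rho_\gamma\,, \qquad \frac{T_\nu}{T_\gamma} = \left(\frac{4}{11}\right)^{1/3}\approx 0.71\,,$$

where the neutrino density here means the energy density of a single neutrino species.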
So the BBN bound tells us that, broadly speaking, at the time of BBN, at a temperature of 1 MeV, you can't have more than about one extra neutrino-like degree of freedom in the universe.

Question up there? So at sufficiently early times, the mass of the neutrinos doesn't really matter; they're a highly relativistic species, so their energy density really only depends on their temperature.

Yeah, OK, good. So the question is: why do we assume that all three neutrino species have the same temperature, given that their production processes might be somewhat different? The standard picture for how neutrinos get the temperature they do is that in the early universe there was a bath of neutrinos at the same temperature as the standard model; before the electrons and positrons went non-relativistic, the neutrinos could efficiently scatter on the electron-positron bath and were in thermal equilibrium with the standard model. So yes, the processes have different rates, but if the density of what you're scattering on is sufficiently high and the rate is large enough to keep you in thermal equilibrium, then it doesn't really matter exactly how strong the scattering rate is. The point at which the neutrinos decoupled was largely set not by the exact size of the scattering rates, but by the fact that when the electrons and positrons go non-relativistic, their abundance abruptly drops, from the abundance of a radiation species to the abundance they have today, which is about a nine or ten order of magnitude decrease. That happened relatively quickly, when the universe reached a temperature comparable to the electron and positron mass, around 100 keV. And so, as a consequence, my understanding is that essentially all the neutrinos decoupled from the standard model at around that time. They had the same temperature as the standard model up to that point; after that point, they no longer did. Their temperature is different from that of the photons, because that same process of the electrons and positrons depleting their abundance and annihilating away produces a lot of photons, and it doesn't produce neutrinos. So it heats up the photon bath, because you just injected a whole bunch of extra photons from electron-positron annihilation, and it doesn't heat the neutrinos commensurately. So yeah, there are details in the process, because the decoupling happens at slightly different times for the different species, but it happens at a pretty similar time, because it's mostly governed by that very sharp drop-off in the electron and positron density.

Question there? Yeah, good. So degree of freedom here does not necessarily mean number of particles. The degree-of-freedom counting differs depending on whether they're bosons or fermions, and if they're bosons, whether we're talking about scalars or vector bosons. It's a degree-of-freedom count that goes in here.

Question up there? Good. So the relationship between this expression and the omega-radiation that we measure today depends strongly on whether any of these species present at early times are still around and still relativistic in the present day. The photons are still around and still relativistic in the present day.
The neutrinos are still around, but at least most of them should be non-relativistic by the present day, since we know that the sum of their masses is of order 0.1 eV, the temperature of the radiation bath today is about 10 to the minus 4 eV, and the neutrinos are somewhat colder than the photons. As for a new species X: if it's the dark matter, we know it has to be non-relativistic today, for reasons that I will show you, so in that case it can't contribute to omega-radiation today, at least not if it's the bulk of the dark matter. But you could have an exotic species, some dark radiation, that is around at early times and still endures today, if it's cold enough. When I write down this omega-rad, I don't think that number is actually a constraint; it's just based on taking the measured black body spectrum of the CMB. Because the radiation contribution is so small today, I don't think the CMB actually gives very tight constraints on this component. I'd have to double-check whether the number I quoted is omega-gamma plus the neutrino contribution or just omega-gamma, but either way it's a calculation, not a constraint. So if you had a dark radiation bath, it could in principle still be showing up in the omega-rad contribution today, but because we're deep into dark energy and matter domination, I think it's not well constrained. The strong constraints come from earlier times, when the radiation component was relatively more important.

Good question, yeah. I see, you're saying that my rho-nu here isn't really properly the neutrino energy density. Did I write this wrong? OK, so the rho-nu that I have here is the prefactor of the three; it is the energy density of one neutrino species, that is physically what it is. The relationship between that and the temperature of the universe will depend on the effective degrees of freedom that come in; there's a seven-over-eight in the calculation. But up to the difference between three and 3.046, which is just a convention choice, I'm fairly sure this is how N-effective is defined: it's the number of effective degrees of freedom, normalized to having the same energy density as a single neutrino species. So if I have a new species that is just a scalar, it has one effective degree of freedom for these purposes, and if I want to understand how to apply this constraint, I need to go back and work out how its energy density compares to that of a neutrino species.
Because, given any species with some temperature and some number of degrees of freedom, I can work out the effective energy density corresponding to that species, and the ratio of that to what it would be for neutrinos, which depends both on the degrees of freedom and temperature of my new species and on the degrees of freedom and temperature assumed for the neutrinos. That ratio gives me the effective contribution to N-effective. OK.

So that's the constraint from BBN: it tells you that when the temperature of the universe was about an MeV, you don't want to add more than about one effective neutrino species. From the cosmic microwave background we have a tighter constraint, but it's at lower energies: from the CMB, we can set a constraint on N-effective of 3.15 plus or minus 0.23. This is at a temperature of 0.4 eV, and this result comes from the Planck 2015 analysis.

So far, the number of effective relativistic degrees of freedom that we measure in the early universe looks pretty consistent with the standard model. There's only limited room to fit in extra degrees of freedom that are still light at this epoch. These constraints essentially don't apply to dark matter that's already heavy and non-relativistic by this epoch: it goes into the matter density budget, not into the radiation energy density budget. But this is pretty constraining on dark matter models that involve new particles below an MeV in mass, because if they're still relativistic, they can affect this counting directly. Now, if they're much colder than the standard model bath, then that can still be fine: if you're lighter than an MeV but cold, your effective contribution to the radiation energy density scales like T to the fourth and is much smaller. However, if those particles subsequently decay away or annihilate away and produce standard model particles, photons or neutrinos, then that can change the effective energy density of photons or neutrinos for the purposes of the CMB, for the purposes of this constraint that occurs at later times.

So the bottom line here is that these limits constrain new relativistic degrees of freedom at MeV temperatures and below. In general, what this means is that if you have a new dark matter particle that is coupled to the standard model, that is at the same temperature as the standard model, and that has a mass less than about an MeV, so that it's still relativistic around BBN, it can be difficult to reconcile with these constraints, and you should think about it carefully. There's a cute little loophole here, which was laid out in a paper by Berlin and Blinov in 2017: if you have a situation where your dark matter is colder than the standard model at BBN, becomes coupled to the standard model after BBN but before the CMB, so that it gives a contribution to the radiation energy density, but then decouples from the standard model again, for example by decaying away, before the CMB constraints kick in, then in that window between BBN and the CMB you can actually evade the constraints. And similarly, if when you decay, you decay to photons and neutrinos in exactly the right ratio, then you can also be relatively unconstrained. But in general, if you have a dark matter candidate that's below an MeV and is thermally coupled to the standard model, so it has a temperature similar to the standard model, you should look at these constraints.
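To make that bookkeeping concrete, here is a minimal sketch of the ratio just described; the function name and structure are my own illustration, not a standard package:

```python
# Delta-N_eff bookkeeping: energy density of a new relativistic species, normalized to
# that of a single neutrino species (in the simple instantaneous-decoupling picture).
def delta_n_eff(g_dof, t_over_t_nu, fermion=False):
    """Effective neutrino-species contribution of a relativistic species with
    g_dof internal degrees of freedom at temperature t_over_t_nu * T_nu."""
    stat = 7.0 / 8.0 if fermion else 1.0   # Fermi-Dirac vs Bose-Einstein statistics factor
    rho_per_nu = (7.0 / 8.0) * 2.0         # one neutrino species: 2 dof, fermionic
    return (stat * g_dof / rho_per_nu) * t_over_t_nu**4

print(delta_n_eff(2, 1.0, fermion=True))   # a fourth neutrino-like species: 1.0
print(delta_n_eff(1, 1.0))                 # one real scalar dof at the neutrino temperature: ~0.57
print(delta_n_eff(1, 0.5))                 # same scalar, but half as hot: ~0.04, hard to see
```

The last line is the point made above: a species only a factor of two colder than the neutrinos contributes at the few-percent level and is very hard to see in this observable.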
The same goes if it's not the dark matter but just a light dark radiation particle in your theory: you should also take a look at these constraints. It may need to be somewhat colder than the standard model to be allowed.

So these are the earliest constraints that we have, from BBN and the CMB. But we can do much more with the CMB, as you've already seen from these cosmological parameters, so let's talk a little bit more about that. Because the photons begin to free-stream once the universe becomes neutral, the CMB provides essentially a snapshot of what the universe looked like at a temperature of about 0.4 eV, which corresponds to a redshift of about 1,000 and a timescale of a few hundred thousand years after the beginning of the universe.

So what are these photons giving us a snapshot of? What is the background? Prior to the release of these photons, the universe is this ionized plasma, which consists of the protons, the electrons, the photons, the neutrinos, and, we think, the dark matter, for reasons that I'll explain. This is a bath; there are no galaxies yet, there are no stars. The universe is almost perfectly homogeneous and isotropic, but there are small fluctuations in the density and the temperature that are seeded by inflation. These perturbations in the density and temperature oscillate over time under the competing effects of gravity and radiation pressure: if you have an overdensity in this plasma, gravity will tend to increase the overdensity, more particles will fall onto it, but radiation pressure will tend to push the particles that are tightly coupled to the photon field away from each other. So the CMB is essentially a picture of these fluctuations at one point in time, what we call the surface of last scattering. That's an approximation; really the surface of last scattering has some width, which smears out the measurement of the fluctuations on small scales, but as a first approximation you can think of it as a snapshot of the early universe.

At the point that this snapshot is taken, these temperature fluctuations are imprinted on the cosmic microwave background radiation, which we measure beautifully well today using experiments like Planck and ground-based telescopes like ACT and SPT. And it turns out that in order to explain the observed pattern of the oscillations, you need a component that experiences gravity but not radiation pressure, or something that does something equivalent. Basically, you need something to deepen the potential wells that the plasma is oscillating within. In the cosmological context, this is the definition of dark matter. We don't know what it is, but how we define it is as a matter component that does not feel radiation pressure. It's then a hypothesis that that stuff, the same stuff that doesn't feel radiation pressure and that gives us the correct pattern of perturbations in the early universe, is the same thing that's explaining the rotation curves in galaxies. We have good evidence for that hypothesis, but it is a hypothesis. And it should be, as per the numbers I just erased, about five to six times more abundant than the baryons. Baryons here are defined as the matter that does experience radiation pressure as well as gravity. I should say that Big Bang nucleosynthesis also gives us an independent prediction of what the baryon density should be, and it agrees very well with the CMB.
So in that sense, this is a consistent picture, but you appear to need some other component to get it right. So what does this tell us? For this definition of dark matter, it means that the dark matter must be present when the temperature of the universe is 0.4 eV. The temperature of the photons scales like their energy, so it redshifts with one power of 1 plus z. The temperature of the cosmic microwave background today is a few times 10 to the minus 4 eV, so this corresponds to a redshift factor of order 1,000. And, this is less obvious, but it corresponds to a time when the age of the universe was a few hundred thousand years. So the dark matter must already have existed at this time, when the universe had just become neutral for the first time.

That narrows down a lot of possibilities for what it could be. It means it can't be a collapsed object, if those collapsed objects only formed once you had stars and galaxies. It could be, as we talked about yesterday in the discussion session, primordial black holes that are seeded very early in the universe's history; that is in principle a completely legitimate possibility for at least some of the dark matter. However, we've looked for those primordial black holes with various lensing searches and have not really found them, so it seems at this point difficult for them to be 100% of the dark matter. Ask me for references if you want to know more on that.

We can actually do a bit better than this baseline statement with modern CMB measurements. As well as saying that the dark matter has to be present and has to be this abundant, we can also set pretty tight constraints on just how dark it needs to be. If it has some small coupling to the photons or some small coupling to the baryons, then it can effectively feel a bit of the radiation pressure through that; it can feel some kind of drag force. We can set pretty tight constraints on that, and I'm just going to refer you to the papers. There's actually been a lot of interesting work on this in the last couple of years, partly because of an anomaly last year that suggested that maybe there could be something going on. It's a very strange anomaly, but one of the ways to potentially explain it is to have a very small fraction of the dark matter that is not really very dark, that has interactions with the baryons.

OK, so this measurement from the CMB is one of our really key pieces of evidence that dark matter exists, that it existed in the early universe, and it's really our only good handle on what its overall abundance is. But I said that it was a hypothesis that this stuff is the same as the stuff that surrounds galaxies. So why do we believe that is true? Well, these fluctuations in the cosmic microwave background, these small perturbations in density, continue to grow after the CMB photons are released. We took a snapshot of them, but that doesn't remove the plasma itself. Once the photons decouple from the baryons, these overdensities keep growing under the influence of gravity, until they reach the nonlinear regime and collapse into bound, virialized structures. Those fluctuations are the seeds of the dark matter halos and dark matter filaments which we believe host galaxies at late times.
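The standard result behind that statement, quoted here without derivation: for sub-horizon dark matter perturbations in the matter-dominated era, the linearized equations give

$$\ddot\delta + 2H\dot\delta \simeq 4\pi G\,\bar\rho_m\,\delta \qquad\Rightarrow\qquad \delta \propto a(t)\ \ \text{(growing mode)},$$

so overdensities that are still tiny at recombination have time to grow to order one and collapse by the time galaxies form.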
Now, once you get into the nonlinear regime, doing a detailed analytic calculation is extremely hard. There is some work using effective field theory techniques to try to do a little bit better than naive perturbation theory, but the go-to tool for simulating the nonlinear collapse stage of the formation of these structures is large N-body simulations. As we talked about last time, these can simulate just the dark matter, which is normally assumed to be very dark and approximately collisionless, although as we also discussed, you can turn on some self-interaction that can have important effects at late times. So you can simulate just the dark matter, or you can also try adding in baryonic matter. But the general picture, which seems to work very well, especially on large scales, is that the dark matter forms structure first, and the baryons, since they're a sub-dominant component, later fall into the dark-matter-sourced potential wells and form galaxies. There are a couple of different limiting cases for how this can occur, depending on how fast the dark matter is going.

Question? Sorry, can you say it again? So, in structure formation, I'll tell you the contexts in which the particle physics matters, but in these simulations, what you're usually assuming is that the dark matter is some particle that has effectively no interactions other than gravity on the scale of the simulation. The mass of the particle doesn't matter very much, because in these simulations you can't resolve the individual particles anyway; your fundamental particle unit in these simulations is typically something of the order of 10 to the 5 or 10 to the 6 solar masses. So provided that your individual dark matter particles are lighter than that, they're all equivalent from the perspective of the simulation.

Sorry, can you speak louder? Just through its mass? I mean, it has mass. In principle? Yes, but at zeroth order, the way it enters the Einstein-Hilbert action is just through the stress-energy tensor. In principle, you could add non-minimal couplings to gravity, beyond zeroth-order CDM, and in principle you could also have interactions with other particles, with the standard model particles, which could also affect the result. But as I've said, usually the assumption made is that those interactions are negligible compared to the effects of gravity. That's a pretty good assumption for effectively all the large-scale structure calculations. As I said last time, for a self-interaction to meaningfully affect the small-scale structure, you need a cross-section divided by the dark matter mass of order one centimeter squared per gram, which is a QCD-scale cross-section for a QCD-scale particle. If your cross-section is noticeably weaker than that, then sure, you can change the Lagrangian all you like; it's not really going to change the result. So the zeroth-order approximation, the approximation that's made in these simulations, is that the dark matter is just some featureless particle whose mass is smaller than the resolution of the simulation, and whose interaction with gravity is purely through the fact that, because it has mass, because it has energy density, it contributes to the stress-energy tensor, and it appears that way.
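As a quick check of that "QCD-scale" statement, here is the unit conversion; the 1 GeV mass is just an illustrative choice:

```python
# sigma/m ~ 1 cm^2/g, the self-interaction benchmark quoted above, converted to a
# cross-section for an (assumed, illustrative) 1 GeV dark matter particle.
GEV_IN_GRAMS = 1.78e-24   # 1 GeV/c^2 expressed in grams
BARN_IN_CM2  = 1.0e-24    # 1 barn in cm^2

sigma_over_m = 1.0        # cm^2 / g
m_dm_gev     = 1.0        # GeV, a QCD-scale mass

sigma_cm2 = sigma_over_m * (m_dm_gev * GEV_IN_GRAMS)
print(f"sigma ~ {sigma_cm2:.1e} cm^2 ~ {sigma_cm2 / BARN_IN_CM2:.1f} barn")
# ~1.8e-24 cm^2, i.e. a barn-scale, hadronic-sized cross-section
```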
Coming back to the main thread: you can write down a particle physics Lagrangian for this, but there are many, many modifications you can make to the particle physics Lagrangian that have very little effect on the structure formation analysis. There's also another approach: rather than modeling the dark matter particle itself, you can characterize the evolution of the cosmological perturbations in an effective field theory framework, and use that to try to do a better calculation of how the perturbations evolve over the universe's history. I'm not really going to talk about that today; I just mention that it exists.

So what are modifications you can make to this picture that do actually change how dark matter behaves in large scale structure? Well, the zeroth-order thing you can do is change how fast it's going; you can change its typical momentum. There are two limiting cases of this. The first is what we call cold dark matter, and this is the case that appears to be presently favored by the data. Cold dark matter is very non-relativistic throughout the epochs relevant for structure formation. Because its velocity is very small, it can easily accrete into small clumps: the kinetic energy of the dark matter is small compared to the potential energy of even relatively small bound structures. The result is that small clumps form first in the early universe and then accrete into larger structures. This means you predict that galaxies form before galaxy clusters, and it means you predict that in every galaxy there are sub-clumps of dark matter within the larger halo, which are a relic of the original little clumps that accreted to form the larger galactic dark matter halo.

The opposite limit is hot dark matter, where the dark matter is relativistic during some of the period after it decouples from the standard model. As a result, there's a period where these dark matter particles are free-streaming through the universe, and this means that they will not be captured into sufficiently small and weakly bound structures; they just have too much kinetic energy. The way we say this is that the free-streaming of the dark matter erases small-scale structure: anything smaller than the free-streaming length just gets wiped out and doesn't form early on. So what happens in this case is that large structures form first, and then they fragment. The way you get galaxies is that first you make galaxy-cluster-sized halos, and then they break apart.

You can distinguish observationally between these scenarios by looking at how many small-scale dark matter halos we have, where small means galaxy-sized and smaller, and by which formed first: if we look back in the universe's history, did we get clusters before or after we got the galaxies that comprise the clusters? The answer, basically, is that the cold dark matter scenario agrees better with observations of our universe. Both were legitimate possibilities, but this is the one that actually appears to be observationally favored. We can do a little bit better than that, though. There's the intermediate warm dark matter scenario, where the free-streaming length is not large enough to erase all structures on galaxy scales and smaller, but it's still not negligible. So basically, this erases structure below some cutoff scale.
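Schematically, the cutoff here is set by the comoving free-streaming length; a rough form (not the precise definition used in the Lyman-alpha analyses) is

$$\lambda_{\rm FS} \sim \int_{t_{\rm prod}}^{t_{0}} \frac{v(t)}{a(t)}\,dt\,,$$

which is dominated by the early epoch when the dark matter is still moving quickly.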
So to constrain warm dark matter, we can look at the smallest halos in the universe and ask: what are the smallest halos we see, and how does that translate into a limit on how fast the dark matter was going at early times? So far we've been talking about constraints that come from temperatures of the universe at the eV scale and higher, at redshifts of 1,000 and higher. Now we're going to jump forward in time, because the strongest constraints on warm dark matter come from the Lyman-alpha forest, from redshifts between about two and six. So this is getting much closer to the present day.

What is the Lyman-alpha forest? Well, at redshifts above this range there are quasars that we know about, actively radiating black holes in the early universe. The light from those quasars passes to us through the intervening redshift range, and it passes through clouds of hydrogen gas. We can measure the frequency of that radiation, we can see where it gets absorbed by the clouds and re-emitted, and we can look at the emission and absorption lines. The frequency of the lines tells us how far back in redshift that absorption or emission occurred, and we can look at their spatial position on the sky to try to build up a 3D map of how the gas was distributed over this redshift range. That allows us to get a handle on how much small-scale structure there is in the matter. We say that we are constraining the matter power spectrum: how much structure we have on different scales.

So what we can actually constrain is the free-streaming length, which is set by the dark matter velocity. Technically it's the comoving free-streaming length, so what we're actually constraining looks something like the integral above. The way these constraints are usually expressed is in terms of a particular class of dark matter models, where the dark matter was once in thermal equilibrium with the standard model and then decoupled while it was relativistic. In this case, the late-time velocity of the dark matter is just set by its mass. So for thermal dark matter of this kind, the current limit from the Lyman-alpha forest is that its mass has to be greater than about 5.3 keV; if it's lighter than that, then it's still going sufficiently fast at late times to disrupt structure on scales where we do observe structure. If your model doesn't fall into this class, you can do an approximate recasting of this analysis just by working out what free-streaming length this bound corresponds to, estimating how fast the dark matter is going in your model, and comparing free-streaming lengths. To really do it precisely, what you need to work out is the detailed modification to the matter power spectrum, exactly how the small-scale structure gets damped in your model; but as a quick estimate, you can evaluate the free-streaming length and compare to this bound.

OK, so that's one constraint from structure formation: we can effectively constrain how fast the dark matter is going. It's consistent with going very slowly; if it's going too fast, we can set bounds. There are other limits we can set as well. One, again from looking at the size of these small-scale structures: if the dark matter is too light, then we won't form very small-scale structures, because the wavelength of the dark matter is too large to fit inside them. So this is for very light dark matter now.
So if you have sufficiently light dark matter, its wavelength could be kiloparsec scale or larger, and that's excluded, because we see dark matter structure on scales smaller than that. Again, the strongest constraint that I know of comes from the Lyman-alpha forest, and it tells us that the mass of the dark matter has to be greater than about 10 to the minus 21 electron volts. This is the only really model-independent lower bound that I know of on the dark matter mass, and it's pretty tiny.

If we're specifically dealing with fermions, we can do better than this, because for fermions there's a limit on how many particles you can pack into a given small-scale structure, coming from Pauli exclusion. This is called the Tremaine-Gunn bound. The Tremaine-Gunn bound says: OK, let's look at the phase-space density of dark matter. We can estimate the phase-space density as the number density divided by the cube of the characteristic momentum. The number density is the mass density divided by the dark matter mass, and for non-relativistic dark matter, which we'd better have at late times, the characteristic momentum is the mass times the velocity; so we can write the phase-space density like this. For a fermion, we want this phase-space density to be less than or equal to about two, which is a manifestation of Pauli exclusion. So this gives you a limit on the mass as a function of the dark matter density and the characteristic velocity of a system: for a fermion, you should obey this constraint. Now we can look at the densities and velocities in dwarf galaxies; you want to look at systems with, ideally, high density and low velocity, so the little dense clumps of dark matter flying around our galaxy are a pretty good candidate. Putting in a velocity of a few or a few tens of kilometers per second and a density typical of dwarf galaxies, you find that the dark matter mass has to be greater than or equal to a few hundred eV, just by using typical results for the density and velocity in dwarf galaxies. So in general, the dark matter can be extremely light, but if it's lighter than a few hundred eV, it has to be a boson, at least if it's a hundred percent of the dark matter and it's just one species, so that Pauli exclusion applies.

This gives us a picture where, if we try to ask what dark matter can be, what its properties can be, we can draw a line of dark matter mass. At the bottom end, we have masses around 10 to the minus 21 eV, and we know that the region below this is excluded because the wavelength is too large to fit inside a small halo. Then, up until roughly the keV scale, we have some pretty stringent constraints, at least, on what dark matter has to be if it lies in this window. We know that it has to be bosonic. We know that it can't be at the same temperature as the standard model, for a couple of reasons: if it's going too fast, you run into this warm dark matter constraint; this is the statement that if it was in thermal equilibrium with the standard model and has a mass less than 5.3 keV, it would be going too fast and would disrupt structure formation. So it has to be non-thermal. By this I mean not in thermal equilibrium with the standard model; it could still have a thermal distribution, just with a low temperature. It has to be cold, in other words. Then, especially in the lower part of this range, well below an eV, the wavelength of the dark matter can be macroscopic.
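Here is a rough numerical version of that Tremaine-Gunn estimate; the dwarf-galaxy density, the 10 km/s velocity, and the "phase-space density of at most about 2" criterion are just the simple benchmarks from above, so the output is an order-of-magnitude number only:

```python
# Order-of-magnitude Tremaine-Gunn estimate in natural units (hbar = c = 1).
M_SUN_EV  = 1.12e66     # solar mass in eV
PC_INV_EV = 1.56e23     # one parsec in units of 1/eV
C_KM_S    = 3.0e5       # speed of light in km/s

rho = 0.1 * M_SUN_EV / PC_INV_EV**3   # dwarf-galaxy-like DM density (assumed), in eV^4
v   = 10.0 / C_KM_S                   # characteristic velocity ~10 km/s, as a fraction of c

# f ~ rho / (m^4 v^3) <= 2   =>   m >= (rho / (2 v^3))^(1/4)
m_min_eV = (rho / (2.0 * v**3)) ** 0.25
print(f"m >~ {m_min_eV:.0f} eV")      # ~1e2 eV
```

Keeping track of the order-one factors and the detailed phase-space distribution pushes this up toward the few-hundred-eV numbers quoted above.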
So for dark matter in the lower part of this range, it's often better to think of it as being more like a field or a wave than as individual particles, and that affects the way we search for it. On the other hand, once you go above the keV scale, you're much less constrained. Here the dark matter can be fermionic. Let's draw the MeV scale on here as well. In this keV-to-MeV range, you're probably not going to run into velocity constraints unless the dark matter is significantly hotter than the standard model, but we have those constraints from BBN that constrain light degrees of freedom. So at the MeV scale and higher, the dark matter can be thermal, can have a temperature similar to the standard model, in principle, and that's not ruled out. In the keV-to-MeV range, it can maybe be thermal, but you need to do some work; above this range, it can be thermal.

So then, this opens the door, as I guess I will tell you tomorrow, since I don't think I'll be able to do it in five minutes, for the dark matter's coupling to the standard model to be the thing that determines its relic abundance. The window for this to be true, for the abundance to be determined by interactions with the standard model, extends up to a mass scale of about 100 TeV; so this covers roughly the MeV to 100 TeV range. The reason for this upper cutoff is, well, we'll get to it later, but it's essentially a unitarity bound: the annihilation rates you would need to get the right abundance above that mass are just too large to be accommodated.

And then we can keep going, and in principle this line eventually has an upper limit, but it goes on for a long way. What eventually cuts off the upper end of the mass range? Well, the first thing that cuts it off is probably searches for gravitational lensing by compact objects. Somewhere well before that is the Planck scale; if you're talking about a fundamental particle, the dark matter is probably lighter than that. If you're talking about a composite, then you could have dark matter that is a bound state of a very large number of particles, which lives up at scales above the Planck scale but below the range where lensing is relevant.

OK, so this is basically the current picture of what we think we know about dark matter, how it should behave, and what properties it can have. It kind of partitions the particle dark matter parameter space into two regions that have gotten a lot of interest in recent years. One is this low mass regime, where the dark matter is a condensate of particles that are individually light, very cold, and must be bosonic; there's a wide range of searches for this, which I believe my colleagues Robert and Matt are going to say a little bit about. And then at higher masses, you're generally much less constrained in what dark matter could be, but in particular between an MeV and 100 TeV, this thermal window is enticing because, as you'll see, it's pretty predictive. And there's a sub-range in here, between roughly a GeV and a TeV, which is the classic weakly interacting massive particle window, motivated among other things by supersymmetry. So what I want to do in the next lecture is take you through the calculation of how you get the right dark matter abundance.
First, in this thermal regime, where you're looking at fairly heavy thermal dark matter, it turns out that if you have the right annihilation rate into standard model particles, you can get the right abundance fairly easily, and that gives you a prediction for the annihilation rate to search for. And then I want to talk about a classic example of dark matter down in this light bosonic region of parameter space, which is where axions and axion-like particles live, and how you get the right abundance in both of those cases. Later we will move on to the discussion of how you actually search for dark matter of these types. But basically everything that I tell you from here on is going to be ideas: possible ways to get dark matter, possible ways to look for it, searches for interactions that may or may not actually be there. What I've shown you so far is, as far as I know, basically the current status of our knowledge, plus upper bounds on a lot of processes. Thanks very much.