All right. So, we left off with this, in some sense, terrifyingly large region of allowed parameter space for dark matter. And what we're going to do now is try to narrow that down a little bit, but at the expense of making some assumptions about the dark matter. In particular, we're going to start looking at dark matter in the cosmological framework. So, in the early universe, we're going to say that the dark matter can interact with standard model particles. I'm going to label my dark matter with a chi, and I'll label my standard model particles with x's; hopefully that won't get confusing. And I'm going to remain reasonably model independent about just how these interactions occur, so I'm going to denote them with a blob. There are several ways this can happen. I can have two dark matter particles annihilating into the standard model. And if this process is in equilibrium in the early universe, then the forward reaction is happening at an equal rate to the backward reaction. But if this diagram is allowed, then I can draw this diagram as well, where a single dark matter particle scatters off of a standard model particle elastically. So, I'm going to refer to the annihilation diagram as inelastic, essentially because it changes the number density of the dark matter particles, and to this one here as elastic because it does not change the number density of the dark matter. To estimate the rate of each of these interactions: for the inelastic process, the rate is roughly going to be the number density of dark matter times the velocity-averaged cross section. This number density here is coming from the fact that the dark matter has to find a partner in order for this interaction to occur.
For the elastic process, the rate is going to be the number density of the standard model particle times the cross section. The difference between these two number densities is very important, because it's essentially what's driving the difference between the rates of these two processes. In particular, the dark matter is non-relativistic, and the standard model particle we're assuming is relativistic. Therefore, the number densities associated with a non-relativistic particle versus a relativistic particle are different. So, the inelastic rate is going to scale as temperature to the three-halves times e to the minus dark matter mass over temperature, times sigma v. And the elastic rate scales as temperature cubed times sigma v. Again, the difference here is coming entirely from whether or not the particle is relativistic. You can derive that yourself if you want an exercise to do: the number density is proportional to the integral of the phase space density over momentum, and if you put in the relativistic form for f you get T cubed, while if you put in the non-relativistic form for f you get this here. The inelastic process is going to essentially stop when the rate for the forward interaction becomes comparable to the Hubble rate, because at that point in time it's not going to be feasible for a dark matter particle to find its partner; they're moving apart too quickly. So, essentially, the forward process shuts off. What we want to compare is this rate to the Hubble rate H, and the temperature at which the two become comparable is called the freeze-out temperature. It's essentially the time at which the forward annihilation rate stops. The elastic scattering will likewise shut off when its rate becomes comparable to the Hubble rate, for the same exact reason as in the inelastic case.
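Written out, the two rates just described are, schematically (a reconstruction with the full non-relativistic prefactor restored; order-one and degeneracy factors are dropped):

```latex
\Gamma_{\rm inel} \sim n_\chi \langle \sigma v \rangle
  \propto (m_\chi T)^{3/2}\, e^{-m_\chi/T}\, \langle \sigma v \rangle ,
\qquad
\Gamma_{\rm el} \sim n_{\rm SM}\, \langle \sigma v \rangle
  \propto T^3\, \langle \sigma v \rangle .
```

Each of these is to be compared with the Hubble rate, which during radiation domination scales as H ~ T^2 / M_Pl.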
So, if we wanted to figure out which is going to happen first: any guesses as to which is going to happen first, yell it out. Think about how each of these is scaling with temperature, right? As time progresses, the temperature is cooling. So which rate is going to be falling off faster, the inelastic or the elastic? Take a minute, scribble on a piece of paper, maybe talk with whoever's next to you, and try to figure out which one of these is going to shut off first as the temperature cools and time increases. You guys ready to take a vote now? Votes for inelastic shutting off first? Okay, good. Then there had better be 90% of people voting for elastic. So, who's going to vote for elastic? All right. So, some people have voted twice, and most people have not voted at all. Okay, so I will give another hint, and then we'll try again. This rate here is falling off as temperature to the three-halves times e to the minus m chi over temperature, right? And what we want to do is compare this with temperature cubed. So the question is, which is going to get smaller faster? Is it this one or is it this one? Well, okay, fair point. Yes, we're assuming that the dark matter is non-relativistic here, so the temperature is much smaller than the dark matter mass. All right, you're ready to vote now? Votes for elastic, this one, happening first? Votes for inelastic happening first? All right, we'll call that a win, even though it's still not 100% of hands. So, what's going to happen is that this rate is exponentially falling, so it's going to become order Hubble faster than the elastic rate will become order Hubble, all because of the scaling here. So this becomes order Hubble first, before gamma elastic becomes order Hubble.
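A quick numerical check of that scaling argument (a sketch; the overall normalizations and cross sections are set to one, since only the temperature scaling matters here, and the mass value is an arbitrary choice):

```python
import math

def gamma_inelastic(T, m):
    # n_chi * <sigma v>, with n_chi ~ (m*T)^(3/2) * exp(-m/T) (non-relativistic)
    return (m * T) ** 1.5 * math.exp(-m / T)

def gamma_elastic(T):
    # n_SM * <sigma v>, with n_SM ~ T^3 (relativistic)
    return T ** 3

m = 100.0  # dark matter mass in arbitrary units (assumed)

# Ratio of inelastic to elastic rate, hot (T ~ m) versus cold (T ~ m/25)
ratio_hot = gamma_inelastic(m, m) / gamma_elastic(m)
ratio_cold = gamma_inelastic(m / 25.0, m) / gamma_elastic(m / 25.0)

print(ratio_hot, ratio_cold)  # the inelastic rate collapses once T drops below m
```

The exponential suppression wins over any power of T, which is the whole content of the vote: the inelastic (annihilation) rate reaches order Hubble first.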
So, that means that the annihilation of the dark matter is going to shut off first, but this scattering process is still allowed to continue, and then after a certain amount of time this process will also shut off. That's really important, so let me say it one more time, because it's really going to play a role in the calculation we're about to do. In the very early universe, both of these processes are allowed, annihilation and elastic scattering, and they're both in equilibrium with the standard model thermal bath. Then, because the dark matter has to find another non-relativistic particle to annihilate with, this interaction rate becomes order Hubble first. The dark matter particle can't find its partner and will not be able to annihilate, so this process shuts off. But even though this process shuts off, this one continues, and the dark matter continues scattering off of the standard model particles in the photon bath. Then, after some amount of time, this process also shuts off, and at that point in time the dark matter is essentially decoupled from everything. When the dark matter decouples from everything, which is essentially when it stops scattering elastically as well, it starts free streaming towards us, and that essentially ends up setting the smallest scale on which dark matter structures can form. This is called kinetic decoupling, which is just the terminology for when elastic scattering shuts off; after that, the dark matter is essentially free streaming until today. Its free-streaming length at this point in time is essentially going to set the scale of the smallest structures that we observe. We can look at things like the dark matter power spectrum to learn what this scale is. I'm going to try sketching this up on the board, but it won't be as good as what's in the notes, so you should look there for the details.
These measurements are done using Lyman-alpha from high-redshift quasars, and what they get is essentially the power associated with a given wave vector for the dark matter. The data looks something like this; it's certainly not going to be as nice as what's in the plot, but each line here corresponds to the power spectrum at a different redshift. So, for example, this line here would be quasars at a redshift of z equals 2.2, and then as I move further up, up here we have redshifts of z equals 5, each line a different redshift, going all the way up to z equals 5.4. So, this is telling us the amount of power that we see in a given wave vector as a function of redshift. And if we plot over this the expectation for different kinds of dark matter particles, we get the following. So, this is what I'm going to call cold dark matter, in purple. Can you distinguish between them? No. All right, I'll pick a better color for the next one: green. So, green is warm dark matter. I'm going to come back to exactly what I mean by cold and warm in a second, but for now just take these as two different kinds of dark matter. The expectations I'm going to overlay on the white lines: the white lines are what we observe, and the green and the purple are the predictions. Cold dark matter and warm dark matter are both fairly consistent with the observations at low redshift; they essentially just overlay the observations. But where we start seeing differences is at high redshift. At high redshift, cold dark matter follows the observations very well and warm dark matter doesn't. Here at these high redshifts, we see that warm dark matter does not reproduce the observations. All right, so what do I mean by warm and cold? For cold dark matter, the dark matter is non-relativistic at freeze-out of the inelastic process.
And the elastic scattering, or kinetic decoupling, happens much later, much later than when chemical equilibrium stops. So, for cold dark matter, coming back to our diagrams: this process shuts off, and then it's some time before this one shuts off. For warm dark matter, the time difference between these two processes shutting off is smaller. This one still shuts off first, but now you don't wait as long before the elastic process shuts off. I'll write that here: for warm dark matter, kinetic decoupling happens sooner. CDM here is just shorthand for cold dark matter. The net result of kinetic decoupling happening sooner is that the free-streaming length is longer for warm dark matter, and so you end up washing out structure at small scales. So, a general prediction for warm dark matter is that you have less structure on small scales relative to cold dark matter, which we can see explicitly here. If we look at the power spectra at high redshift, the warm dark matter prediction, in green, cuts off before the prediction for cold dark matter, in purple. That essentially means that warm dark matter is cutting off structure at a larger length scale, that is, a smaller wave vector, than cold dark matter. The power spectra measured from the quasars actually allow us to test this, and what we find is that the data is more consistent with the cold picture. In terms of actual numbers, this ends up restricting the dark matter mass to be greater than roughly, let me give you the exact number, 3.3 keV. So, dark matter masses have to be larger than 3.3 keV in order to be consistent with the power spectrum observations. Not all warm dark matter masses are excluded by this, but it's starting to get a little bit tight, and that is what the actual constraint is in terms of mass.
What I'm going to focus on now, for the rest of this session, is how we actually predict the dark matter density today if we make the assumption that the dark matter is cold, where cold again means that it's totally non-relativistic at chemical decoupling and that kinetic decoupling happens much further down the road. Okay, so this calculation is referred to as a freeze-out calculation; it's sort of the classic calculation that you do for dark matter. I'm going to outline it here, and all of the details are in the notes, but essentially our goal is to calculate the abundance of cold dark matter today. In order to do this, we need to use the Boltzmann equation: this is called the Liouville operator, and this is the collision operator. The Liouville operator tells us how the number density of dark matter changes with time, and in particular you can show explicitly that it's equal to the derivative of the number density with respect to time, plus a term that accounts for the fact that the number density is also changing due to the expansion of the universe. So there are two terms, but these in sum end up giving you the total change in the number density. The collision operator tells you how the number density is affected by collisions of the dark matter particles, either with themselves or with the standard model. It usually looks somewhat complicated, but at its heart that's all it's doing: it's encapsulating all of the changes to the overall number density from these collisions. So let me write down what the collision operator is for the case of a standard two-to-two scattering process. I'm going to assume that I have two initial-state particles, which I'll call one and two, in equilibrium with two final-state particles, three and four. So I'm just trying to model this reaction here: this is particle one, particle two, three, and four.
So here is what the collision term is in this case. Like I said, this is going to be somewhat long, but I'm going to write it out because it's quite important, and then we're going to break it down bit by bit. I get phase space factors that tell me about the phase space of the initial- and final-state particles in the reaction, and these phase space factors have to be multiplied by the amplitude of the scattering process. So this is the case for the forward reaction, where particles one and two interact to give me three and four in the final state, and that's the amplitude for that process. And then I also have the case for the reverse interaction, where three and four interact to give me one and two. This all gets multiplied by an energy-momentum conservation delta function, p one plus p two minus p three minus p four, which has to be satisfied in order for energy and momentum to be conserved. And then the whole thing gets multiplied by phase space factors, where my notation is that d Pi sub i is equal to d cubed p over two pi cubed, times twice the energy of the given particle. So it's kind of long, but it actually ends up simplifying quite a bit, which is fortunate for us; otherwise this would be pretty awful. But let me just make a few comments. Notice here that I have to put in a one plus or minus the phase space factor. That accounts for the fact that if you're scattering into fermionic particles you get a suppression, and if you scatter into bosonic particles you get an enhancement. So the plus here is if you scatter into bosonic particles and the minus is if it's into fermionic ones. This whole thing simplifies if I can make the following set of assumptions, which are all correct for our case. The first is that the dark matter still remains in kinetic equilibrium, which I've already argued is true for cold dark matter; that's just the statement that at the time it falls out of chemical equilibrium, this elastic process is still allowed and still ongoing.
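For reference, the two-to-two collision term being described on the board has the standard form (a reconstruction consistent with the notation above; signs and numerical factors follow the usual convention):

```latex
C[f_1] = \int d\Pi_2\, d\Pi_3\, d\Pi_4\,
  (2\pi)^4\, \delta^4\!\left(p_1 + p_2 - p_3 - p_4\right)
  \Big[ |\mathcal{M}_{34\to 12}|^2\, f_3 f_4\, (1 \pm f_1)(1 \pm f_2)
      - |\mathcal{M}_{12\to 34}|^2\, f_1 f_2\, (1 \pm f_3)(1 \pm f_4) \Big],
\qquad
d\Pi_i \equiv \frac{d^3 p_i}{(2\pi)^3\, 2E_i}.
```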
Because this elastic process is still ongoing, it means that I can write down the F's here: the phase space distributions are well quantified, and they're just given by the equilibrium Fermi-Dirac or Bose-Einstein distributions. So the fact that kinetic equilibrium is ongoing is crucial; this whole calculation would be a lot harder if we couldn't say that the F's were just given by the equilibrium distributions. The second assumption is that the temperature is much smaller than energy minus chemical potential. Once I make this assumption, it means that I can take the F's as just being Maxwell-Boltzmann, so I've now simplified this even further. Now it doesn't matter at all whether I've got fermions or bosons, and in particular that means that my factors of one plus or minus F here are just equal to one. So that's nice; it simplifies. And the third assumption that I make is that any particle from the standard model, so all of my X's, is in thermal equilibrium with the photon bath. Once I make this set of assumptions, this ends up boiling down to something that's way more manageable, and it'll actually boil down to an equation that we can make some order-of-magnitude estimates about on the board, which is convenient. So let's see. I'm going to skip a few steps and just tell you what we end up getting at the end of the day. All of the intermediate steps are detailed in the notes on the website, but when everything boils down after applying those assumptions, I can write the Boltzmann equation like this. This term here is coming from the Liouville term, so it's summarizing the total change in the number density: this is just the change with time, and this comes from the change due to the expansion of the universe. And then on the right is the change due to the actual interactions of the particles. One and two, remember, are the initial-state particles, and three and four are the final-state ones.
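The simplified equation being pointed to here is, in standard form (a reconstruction; for the annihilation process both initial-state particles are the dark matter, so the number densities combine into n_chi squared):

```latex
\frac{dn_\chi}{dt} + 3 H n_\chi
  = -\,\langle \sigma v \rangle
    \left[ n_\chi^2 - \left( n_\chi^{\rm eq} \right)^2 \right].
```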
Yes, physically it means that we're dealing with a non-degenerate gas, where the gas of particles doesn't care at all whether it's comprised of fermions or bosons; the quantum statistics are just not important. The Fermi-Dirac distribution has the term e to the (E sub i minus mu sub i) over T sub i, plus one, in the denominator, and all of the quantum statistics are essentially coming from the presence of that plus or minus one; that term is treated differently depending on whether I have a boson or a fermion. But if the occupation, the one plus or minus F piece, can be taken as negligible, it allows me to just write this as a Maxwell-Boltzmann distribution, e to the minus (E sub i minus mu sub i) over T sub i. So by making this assumption, I essentially get rid of this part here, the one plus or minus F. Yeah? This is at freeze-out, yes. Excellent. All right, so we have our equation here that's describing how the number density of the dark matter evolves with time. I'm going to do some more manipulations to get this into an even simpler form. One thing that's convenient to do is a redefinition: this number density changes with the expansion of the universe, so it's convenient to redefine it in terms of a new variable Y, where what I've done here is divide out the entropy density. By doing this, Y doesn't change with the expansion of the universe; it's just a convenient redefinition of the number density. So we're going to start talking about the quantity Y rather than the quantity n. And when I do this redefinition and plug everything in, I get the following, and this is the equation whose implications I now want to spend some time discussing.
I've also made an additional redefinition, of x as mass over temperature. So there are a couple of steps going from here to here that just involve redefining variables, but you can think of x as essentially a time variable, and Y as the number density with the change from the expansion of the universe scaled out. So I get an equation at the end of the day, when everything boils down, that looks like this. This equation does not actually have an analytic solution, which is unfortunate; you can get exact solutions numerically, but we can also get some mileage by making some assumptions and considering what happens in certain limits. So that's what I'm going to do at the board now, just so that we can understand the implications. All right, so we start off with this and make some assumptions: that I can expand out the cross section as some constant times x to the minus n, pulling out the time dependence, and that I can do the same thing with the entropy. If I make these substitutions into the expression that I have there, I can rewrite it in this form, where I'm taking lambda, just for simplicity, to be a constant. I can integrate and get an answer, and the answer is actually very simple: one over Y today minus one over Y at the time of freeze-out is just equal to lambda over x at freeze-out. Y today is going to be quite a bit smaller than Y at freeze-out, so I can approximate this even further by ignoring this piece here, and what I get then is that Y today is approximately x sub f over lambda. Now let's take stock of what we've done.
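As a sketch of the numerical solution (taking n = 0 so the cross section is constant, and with assumed illustrative values for the constant lambda and the equilibrium normalization), one can integrate dY/dx = -(lambda/x^2)(Y^2 - Y_eq^2) and see the yield freeze out well above equilibrium:

```python
import math

lam = 1e7   # lambda, taken constant (assumed value for illustration)
a = 0.145   # normalization of the equilibrium yield (assumed)

def y_eq(x):
    # Non-relativistic equilibrium yield: Y_eq ~ a * x^(3/2) * exp(-x)
    return a * x ** 1.5 * math.exp(-x)

# Integrate dY/dx = -(lam / x^2) * (Y^2 - Y_eq^2) from x = 1 to x = 1000.
# The equation is stiff while Y tracks equilibrium, so use a backward-Euler
# step, solving the resulting quadratic for the updated Y at each step.
x, y, dx = 1.0, y_eq(1.0), 0.01
while x < 1000.0:
    x += dx
    k = dx * lam / x ** 2
    c = y + k * y_eq(x) ** 2
    y = (-1.0 + math.sqrt(1.0 + 4.0 * k * c)) / (2.0 * k)

y_today = y
print(y_today)  # comparable to the estimate x_f / lam from the board
```

The late-time value agrees with the analytic estimate Y_today ~ x_f / lambda to within an order-one factor, which is the approximation made above.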
So, we started off by writing down the Boltzmann equation to describe this inelastic process here, so that we can see what happens to the number density of the dark matter after freeze-out. The Boltzmann equation essentially encodes all of the dynamics of that process. We simplified it down, solved it, and what we get is an estimate of the dark matter number density today, assuming, and this is important, that the dark matter was in thermal equilibrium in the early universe, so that this process was actually in equilibrium in the early universe. I can turn this into numbers that we all understand, because at this stage it's just symbols; what does it actually mean? I can write down that the abundance of the dark matter is the mass times the entropy density today times Y today, divided by the critical density, and if I substitute in all the appropriate numbers, I find that the abundance today is roughly the following, and this is essentially the main point of this lecture, written out right here. So by solving the Boltzmann equation and evolving the number density of the dark matter with time, we can get a prediction for what its density should be today, which I've written out here in terms of the annihilation cross section for the dark matter. To get from this to this, I've just made the assumption that the annihilation cross section scales as the coupling squared over the mass of the dark matter squared, and then I've written it suggestively here in terms of the alphas and the m chis. What are the important numbers? Well, we know what this is from CMB measurements, so WMAP and Planck give us this number: omega chi h squared is roughly 0.1. I don't have the exact value written down here, and I'd be quoting it without error bars anyway, but it's roughly 0.1.
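Schematically, the result being pointed to is (a reconstruction under the stated assumption that sigma v scales as alpha squared over m chi squared; the prefactors are order-of-magnitude only):

```latex
\Omega_\chi h^2 \;=\; \frac{m_\chi\, s_0\, Y_{\rm today}}{\rho_{\rm crit}/h^2}
\;\sim\; 0.1 \left( \frac{0.01}{\alpha} \right)^{2}
             \left( \frac{m_\chi}{100~\mathrm{GeV}} \right)^{2}.
```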
So this is coming from Planck and WMAP measurements. And if we look at how I've suggestively written this out, we see that we can get the right abundance today if we assume that the coupling is 0.01, which is a weak-scale coupling, and that the mass is approximately 100 GeV, again a weak-scale mass. The fact that assuming the dark matter is weak scale ends up giving us the correct abundance, as measured by Planck and WMAP, is what's sometimes referred to as the WIMP miracle. WIMP means weakly interacting massive particle, and the fact that it gives us the right abundance is called a miracle. And I'm stressing how important this has been because it's been the dominant paradigm for dark matter for the last 20 or 30 years. Essentially, if you're going to say, I'm going to start looking under the lamppost for dark matter, the lamppost has been set by the WIMP miracle. The vast majority of the experiments searching for dark matter assume that it is weak scale, with this sort of coupling and this mass, driven by the fact that you very naturally end up getting the right dark matter density when you make this assumption. Now, the question I want to address in the last few minutes is just how much of a miracle is this? I mean, the numbers work out, so that's great. But is it really that special? The answer is no. What's actually being constrained here is the ratio of alpha squared to m chi squared. So, I can change this ratio around, and so long as I change numerator and denominator consistently, I still end up getting the correct dark matter abundance prediction. When we say the miracle, we've essentially just taken one particular slice where things tend to work, but I can really change this however I want and still end up getting the right abundance.
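Plugging in rough numbers (a sketch; the cross section is the lecture's schematic alpha squared over m chi squared, and the standard freeze-out estimate Omega h^2 ~ 3e-27 cm^3 s^-1 / <sigma v> is itself only order-of-magnitude):

```python
alpha = 0.01     # weak-scale coupling (assumed)
m_chi = 100.0    # dark matter mass in GeV (assumed)

sigma_v = alpha ** 2 / m_chi ** 2   # schematic <sigma v> in GeV^-2
GEV_M2_TO_CM3_S = 1.17e-17          # 1 GeV^-2 in cm^3/s (multiplied by hbar^2 c^3)
sigma_v_cm3s = sigma_v * GEV_M2_TO_CM3_S

# Standard freeze-out estimate: Omega h^2 ~ 3e-27 cm^3 s^-1 / <sigma v>
omega_h2 = 3e-27 / sigma_v_cm3s
print(omega_h2)  # lands within an order of magnitude of the observed ~0.1
```

That a generic weak-scale coupling and mass land this close to the measured value, with no tuning, is the content of the "miracle".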
And it turns out that there's actually a variety of models where you can get the right thermal relic and not have weak-scale dark matter. I'll list a couple of these here just for completeness. One class is called WIMPless dark matter: in these types of models, you vary the ratio of alpha squared to m chi squared such that you still get the correct abundance, and you can take the mass of the dark matter to be really light or really heavy, so long as everything is self-consistent. There's a very broad class of scenarios called forbidden dark matter that can give you thermal relics with masses down to the keV scale; for these kinds of scenarios, you get everything you get here for the WIMP miracle, except with a very different mass. There's another scenario called SIMPs, or strongly interacting massive particles, where you assume that instead of the interactions being two-to-two, which is what we've done here because we've always been looking at two dark matter particles coming in and two standard model particles going out, the interactions are three-to-two, so three dark matter particles coming in. If you make this change, then you can end up getting MeV-scale dark matter that's thermal but strongly interacting, which gives a very different set of observables than what you would expect in the weakly interacting picture. The examples here are just three variations on the thermal relic calculation. In each of these cases, I make a change and can still end up nailing the correct observation as seen by Planck and WMAP, but the dark matter mass is no longer 100 GeV, and it's no longer weakly interacting.
So, one thing to keep in mind is that despite the fact that most of the conversation these days around dark matter centers on this WIMP scenario, you don't have to do much to change the picture and still end up getting an abundance for dark matter that's consistent with what we actually observe. So again, this is something that is becoming very important right now: as experiments for WIMPs continue and get really good sensitivity, with no observations, we really start thinking very carefully about these alternative scenarios. When we come back after lunch, that's what I want to show: we're going to discuss how we've actually looked for WIMPs and what the results are, and then discuss some of the implications for our interpretations of the models. All right, enjoy lunch, and see you afterwards.