And yesterday, we saw a collection of evidence for dark matter at different length scales. I reviewed how we became convinced over the last decades that dark matter must exist. And near the end, I made a list of properties that every particle dark matter model must satisfy. So let's take it from there, and let me remind you of a few important facts. OK, I was a bit fast at the end yesterday, so I want to review what these constraints are and also mention a couple that I didn't mention yesterday. The first one is the density. This will be very important for today's lecture, because we will go back in time in the early universe and consider some dark matter models, depending on the time and on the questions. Of course, questions are encouraged, so let's try to have an interactive lecture. But in the end, we will compute relic densities today: the topic of today's lecture is the calculation of relic densities. We always have to face this number and try to reproduce it. I said that the density of dark matter, which we like to express as a dimensionless ratio between the density itself and the critical density, defined yesterday not only by me but also by the other lecturers, is about 27%. There is a slightly different way to define this quantity that I will be using. It's maybe more common in particle physics, and it's very convenient, so let me introduce it. Instead of using Omega, I'm going to use the parameter xi. By definition, xi is the ratio between the dark matter energy density and the entropy density, xi = rho_DM / s. The entropy density is that of the universe, and it scales like the cube of the temperature; I will give you an expression for it later. Why is this variable nice? Because after you produce the dark matter, in the way we will see today, this quantity is constant over time. So if it's constant, the value we compute in the early universe is the same value we have to face today, which makes the comparison between what we compute and what we observe much easier. This is not a dimensionless number; it's a dimensionful parameter, because in units where hbar and the speed of light are equal to 1, an energy density has mass dimension 4 and the entropy density has mass dimension 3. So this ratio has the dimension of an energy in natural units, and it is about 0.44 electron volts. That's the number we will try to reproduce today with our relic density calculations. OK, so again, we mentioned yesterday that the dark matter has to be very stable. What "very" means depends on the decay channel, if it is unstable. Something I didn't mention yesterday, which somebody pointed out after the lecture: of course, dark matter must be neutral, neutral under electromagnetism. So the coupling to photons must be very weak. There are several reasons for that. First, because we don't see it: if it were able to emit photons, we would be able to see that. Also, as we saw this morning, the coupling between dark matter and photons in the early universe cannot be too strong; otherwise perturbations would not be able to grow until recombination. So there are bounds that we can discuss in more detail later if you're interested. Then what else? Stable, neutral. Oh, very important for today: cold. This is something we will see later.
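As a quick sanity check of that 0.44 eV number, here is a minimal Python sketch. The present-day entropy density and the critical density are standard cosmological values that I am assuming here; they are not quoted in the lecture:

```python
# Quick check that xi = rho_DM / s comes out around 0.4 eV today.
# Assumed inputs (standard cosmological values, not from the lecture):
omega_dm_h2 = 0.12           # Omega_DM * h^2 from CMB fits
rho_crit_h2 = 1.05e-5        # critical density / h^2, in GeV per cm^3
s_today = 2.9e3              # entropy density today (photons + neutrinos), per cm^3

rho_dm = omega_dm_h2 * rho_crit_h2       # GeV / cm^3 (the h^2 factors cancel)
xi = rho_dm / s_today * 1e9              # convert GeV -> eV
print(f"xi = rho_DM / s ~ {xi:.2f} eV")  # ~ 0.4 eV, as quoted
```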
But as I said, these constraints translate into the requirement that when the temperature of the universe was 1 keV, dark matter must behave as a pressureless fluid, OK? Cold, stable. Ah, OK, the last thing we didn't discuss yesterday is: what about the mass? We went through a review of all the astronomical and cosmological evidence for dark matter, but all of these observations do not tell us anything about the mass. For the mass, we really have a wide window available for particle physics candidates, and there are bounds worth mentioning. If the dark matter particle is a boson, so it has integer spin, for example the axion, if you know what it is, then the mass has to be bigger than this number. Why do we get this limit? Because the Compton wavelength of a particle below this mass would be larger than the smallest objects we observe, so we would not be able to pack the dark matter particles inside the smallest objects we observe today in the sky. What is the analogous limit for fermions? For fermions, the limit is way more severe; the mass has to be much, much bigger. The reason is Pauli blocking: it's more difficult to pack fermions because of the Pauli exclusion principle. So we already have some limits. These are not very strong. You see that, for comparison, WIMPs, which are very popular dark matter candidates that we'll review today, have a mass, very broadly speaking, in the range between 10 giga-electron volts and 10 tera-electron volts. Is there a question? This one. OK, so this limit comes from the observation of objects in the sky which are mostly made of dark matter, and these objects have a size, because we observe that. A particle with this mass has an associated Compton wavelength which, unless you satisfy this bound, will be bigger than the object we see, and that is in contradiction with observation: it's not possible to confine the particles within this object. For the fermion, the limit is higher because you go through an analogous consideration, but then you have to fill the Fermi sphere. You cannot put all the fermions in the ground state, because they are fermions; there is the Pauli exclusion principle. So you have to populate all the lower energy states, and then you have the Fermi volume, and again you compare that volume with the size of the object you see. Yes? So I think this is for dwarf galaxies. These are smaller than the Milky Way, but they're mostly made of dark matter, so it definitely has to fit within a dwarf galaxy. Other questions? OK. Yeah. It can be charged under QCD; then the limit strongly depends on the mass. So if it's very heavy, it could be charged under QCD. Yes. Then the production is a bit tricky to compute, but yeah, there are mass windows where you can have a colored relic. Well, this also strongly depends on the mass, because all the bounds you put are sensitive to the number density, but what we measure today is the mass density. So if the dark matter particle is heavier, you have fewer of them around, because rho is fixed and rho is m times n: if m is bigger, n is smaller. What do you mean, that the color must be confined? Oh, they confine, they confine. I mean, today they will be confined. But in the early universe, before the QCD transition, they will be like quarks and gluons, deconfined. Then, of course, as the universe cools down, they form bound states. Yes.
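To put a rough number on the wavelength argument for bosons, here is a minimal Python estimate. The dwarf-galaxy size (~1 kpc) and velocity dispersion (~10 km/s) are illustrative assumptions of mine, not lecture numbers, and the result depends on whether you use the Compton wavelength (as stated above) or the de Broglie wavelength, so treat the output as an order of magnitude only:

```python
import math

# Lower bound on a bosonic dark matter mass from requiring that its
# wavelength fits inside the smallest dark-matter-dominated objects we see.
hbar_c_eV_m = 1.973e-7        # hbar * c in eV * m
kpc_in_m    = 3.086e19        # 1 kpc in meters

R = 1.0 * kpc_in_m            # assumed dwarf-galaxy size ~ 1 kpc
v_over_c = 10e3 / 3e8         # assumed velocity dispersion ~ 10 km/s

# Compton wavelength < R    =>  m > 2*pi*hbar*c / R
m_compton = 2 * math.pi * hbar_c_eV_m / R
# de Broglie wavelength < R =>  m > 2*pi*hbar*c / (R * v/c)  (the stronger condition)
m_debroglie = 2 * math.pi * hbar_c_eV_m / (R * v_over_c)

print(f"m > {m_compton:.1e} eV   (Compton wavelength argument)")
print(f"m > {m_debroglie:.1e} eV   (de Broglie wavelength argument)")
```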
Yes, we will not see that in great detail. That's a calculation for a hot relic that I don't plan to go through in too much detail because, as you say, it's ruled out. I leave it as homework. Yes, yes. But also the number density: if you go through the relic density calculation, you see that they are too light to account for the dark matter today. No, no, but we know that the standard model neutrinos have to be lighter. Oh, you mean neutrinos in general? Yeah, yeah, this is true for any fermion, any fermion. The gravitino has spin 3/2, not 1/2; same story. OK. This looks very strong. It is very strong. It is very strong. That's a few keV in mass. For sterile neutrinos, the interesting mass region is a few keV. Well, sterile neutrinos as dark matter candidates or sterile neutrinos as sterile neutrinos: it's different, of course. This is a limit that comes from observation of objects made of dark matter, so there is nothing wrong with having a sterile neutrino at 1 eV if it's not cosmologically abundant today. So we'll get to the upper limit; you're asking if there is an upper limit. We'll get to that. There are upper limits, but they have big caveats; these lower limits are very solid. I will show an upper limit at the end of today's lecture. But, oh, sorry, it's a broad class of candidates; I apologize, I will give a definition in a second. Good. OK. So now, the topic of today's lecture: as we saw this morning and yesterday, and as we will see many times this week, at very early times, at very hot temperatures, the universe was homogeneous and isotropic, and at least all the standard model matter, photons, electrons, quarks, was in thermal equilibrium. And the dark matter must come from somewhere. To end up today with a universe that has a dark matter abundance five times larger than the one in baryons, we need to understand how we produce this amount of dark matter in the universe, which then generated the perturbations that then evolved into the structures where we live today. OK, so the landscape of theoretical models is huge; there are many models. So I will try to give a very broad classification, which works for the following reason. According to this classification, there are two main ways to produce dark matter in the early universe. The first one is thermal production, and it's defined in the following way. The dark matter particle was in thermal equilibrium. I just said that at early times, at hot temperatures, electrons, photons, everything was sharing the temperature; they were interacting efficiently enough to be in thermal equilibrium. In this first category, the dark matter particle was in thermal equilibrium with them. And there's a big "and": the other important feature is that, by studying the details of how we went from a thermalized situation, where the dark matter was in thermal equilibrium, to the universe today, where nothing is in thermal equilibrium anymore, the departure of the particle from thermal equilibrium is the process that sets the relic density. So that's the definition of thermal production, and that's the first one. I said I divide things into two classes; if the first one is thermal, the other one will be non-thermal, and non-thermal is anything else. So at least it's not an ambiguous classification: you are either in the first or in the second. So the goal of today is to go through this first one very, very well.
Not too carefully, though, because the really accurate calculation is done by solving the Boltzmann equation numerically on a computer. What we will do today are analytical estimates, to see how the relic density depends on the mass of the dark matter, on the cross-section, on the particle physics parameters. But we will mostly focus on this. So that's the goal: to understand this. About non-thermal production, I have a couple of examples that I can give you at the end of the lecture if there is time. But the fourth lecture, the one on Friday, will be entirely on axions, so on Friday we will see for sure one example of non-thermal production, the one for the axion. So, is the distinction clear? OK, so let's study thermal production. The first question we need to answer is: how do we thermalize? So I said that in this, yes. I didn't hear well, sorry. Yes. Oh, so this is the first one. Because the only requirement here is that at some point, back when the universe was very young, the dark matter was in thermal equilibrium. But then we know it's not anymore today. So there is a departure from thermal equilibrium, as you were saying. But the crucial thing, as we will see, is that this departure from thermal equilibrium is the process responsible for setting the relic density, in a way that I will explain. Thermal-ish? Well, if I understand what you're saying, it may belong more to this one. Yeah, freeze-in is here. Freeze-in is a situation where the dark matter is actually never in thermal equilibrium, but it's produced via scattering of particles that are in thermal equilibrium. Freeze-in is what I plan to do in the last 50 minutes if there is time; otherwise, you can come and ask me after the lecture. OK. So how do we thermalize? Let me introduce some notation. For the rest of this lecture, the dark matter particle will be chi, just a Greek letter that is good for a dark matter candidate. It doesn't mean it's a fermion; I know it's conventional to call fermions chi sometimes, but this is just a generic particle. And the only way we can thermalize in the universe is if we have these processes where two dark matter particles find each other and annihilate into two standard model particles, plus the reverse reaction: chi chi <-> SM SM. Here by SM I mean any standard model particle: an electron, a quark, a photon, anything. So if these reactions are efficient, then the dark matter particles are in equilibrium with the SM particles, which we know are in thermal equilibrium. So what does it mean to be efficient? Let's try the following estimate. Let's compute how many of these reactions happen in some time interval between t1 and t2, where t is just the age of the universe. This number, just by definition of the quantity I'm about to write, is the integral over time of the rate. Gamma is the rate for this process, defined as the number of interactions per unit time. So if you have interactions per unit time and you integrate over time, you find the total number of interactions; this is just the definition of Gamma, if you want. And this is something we can compute: if we know the interactions of these particles, we know the Lagrangian, as it's called in QFT, so for each model we can compute it and find the precise value. So let me change variables here; it is more convenient to use temperature as the variable. And let me also write this.
So, changing variables from t to the temperature T, the number of interactions is N = ∫ Γ dt = ∫ Γ dT/Ṫ = ∫ [Γ / (Ṫ/T)] dT/T, where Ṫ is the time derivative of the temperature. OK. Now, if you remember, maybe it was mentioned yesterday, but otherwise I will tell you now: the entropy of the universe is conserved through the expansion. So s is the entropy density, the entropy per unit volume, and the product s times the scale factor cubed is constant, unless you have some entropy injection, which we are not considering here. We also know that s is proportional to T cubed. This is a result you can derive in statistical mechanics; it does not only belong to cosmology. You take a Fermi-Dirac or Bose-Einstein distribution, you compute the pressure, the energy density, the entropy density, and it scales like T cubed. So you put these two things together and you find that T times a is constant through the expansion. This is not precisely right: there is a g-star factor, if you know what it is, which counts the relativistic degrees of freedom. But it's good enough for our purposes; there are only small corrections to this relation. This is useful because, and I leave it as homework, you can check that Ṫ over T is minus the Hubble rate, which was already defined yesterday: H is the time derivative of the scale factor divided by the scale factor itself. OK, so now I go back to this equation and take care of the minus sign: I switch the integration limits, and I get N = ∫ from T2 to T1 of (Γ/H) dT/T. Now remember that I defined t1 to be the initial time and t2 the final time. The universe cools down, so T1 must be bigger than T2: if I consider the universe at an earlier time, it was hotter. So this integral is positive, which is a good check, since it's a number of interactions. And now let's do a naive estimate and take T1 equal to 2 T2, the case where the ratio between the temperatures is a factor of 2, which is also the ratio between the scale factors. So this is a time interval in which the universe doubles its size; well, the volume is actually 8 times bigger, because it goes like the scale factor cubed, but the scale factor gets twice as big. And if I assume that Γ/H doesn't change much within this interval, I can do the integral with the approximation that Γ/H is constant: I'm just integrating dT/T, so I get a logarithm, and N ≈ (Γ/H) log 2. Log of 2 is more or less 1. So what we derive is that within the time the universe doubles its size, the number of interactions is given by Γ/H, up to order-one factors. This is a very useful result, because it tells us that in order to establish whether we achieve thermal equilibrium or not, we have to compare two quantities, Γ and H. If Γ is bigger than H, we are in thermal equilibrium. If Γ is smaller than H, we are not, because we do not even collide once. Of course, this is a qualitative criterion: if you want to study thermalization well, you have to solve the Boltzmann equation. But it's good enough for our purposes. It also makes sense, because if you think about it, there are two competing effects. There is this reaction here, the scattering of dark matter particles into other particles: they want to thermalize, they want to achieve thermal equilibrium.
But the expansion doesn't, because the expansion is diluting the universe and is making it very hard for the dark matter particles to find each other. So there are two competing effects, and each one has a timescale associated with it: the inverse rate is the timescale it takes to have an interaction, and the inverse Hubble rate is also a timescale. So you compare these two timescales, and if the timescale for interactions is shorter than the one for the expansion, then you're in thermal equilibrium; if not, you are not. So the first thing to check when you have a dark matter model is: do I ever get to the point where Γ is bigger than H? If yes, then we may belong to this class. I say we may because we also need to satisfy the second requirement, but at least we satisfy thermalization. OK, so let's now see how the production works. So maybe I should erase here. OK, so let's assume that we are, is there any question first? OK, so let's assume that we are in a situation where Γ is actually bigger than H. So we start from a thermal distribution: we know that at high enough temperature, the number density of chi is the number density of a species in thermal equilibrium. So what is this expression? It depends, so I'll give you two expressions, actually. If it's in thermal equilibrium, I can define a temperature, because it's in thermal equilibrium. In the relativistic case, n = (zeta(3)/pi^2) g_eff T^3; don't worry about the numerical factor, that's the Riemann zeta function evaluated at 3, divided by pi squared, and g_eff is 3/4 times the spin degeneracy for a fermion and 1 times the spin degeneracy for a boson. All you need to care about is that this goes like T cubed. This corresponds to the case where the temperature is much, much bigger than the mass, so it's a relativistic gas. For a relativistic gas of particles, the number density must scale as T cubed: the number density has mass dimension 3, the mass is negligible, and all we have to build a dimensionful quantity with mass dimension 3 is the temperature. So that was easy to guess. The other case is the famous Maxwell-Boltzmann expression, n = g (m_chi T / 2 pi)^(3/2) exp(-m_chi/T); I also put the 2 pi in the right place. All you need to remember is that the quantity in parentheses, m_chi T, has mass dimension 2, so raised to the 3/2 the prefactor has dimension 3, and on top of that there is a very important exponential suppression: the Maxwell-Boltzmann suppression. In the rest of the lecture, I will probably drop many of the 2 pi's, because, as I said, to do things properly you need to solve the Boltzmann equation on a computer, but here we want to see how the density depends on the particle physics parameters. What about the chemical potential? Good, OK, very good question. We are making here the assumption that there is no chemical potential. If you have a chemical potential, in the end it works the same way you produce baryons: in the early universe you have quarks and anti-quarks, but then you have a very strong annihilation cross-section, you wash out the symmetric component, and all you are left with is the part set by the chemical potential. That qualifies as non-thermal production in the sense that the number density, even though the particles were in thermal equilibrium, is set by the chemical potential, which is something that came from somewhere else. I will discuss that later as I go. So for now, no chemical potential. This is valid at high temperature because we are in thermal equilibrium.
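As a small illustration of these two limits, here is a minimal Python sketch of the equilibrium number density, using only the two limiting expressions written on the board (not a full Fermi-Dirac or Bose-Einstein integral; the crude switch at T = m is my simplification):

```python
import math

ZETA_3 = 1.2020569  # Riemann zeta function evaluated at 3

def n_eq(T, m_chi, g=2, fermion=True):
    """Equilibrium number density in natural units (everything in GeV).

    Uses only the two limiting expressions from the lecture:
    relativistic (T >> m) and non-relativistic Maxwell-Boltzmann (T << m),
    with a crude switch at T = m.
    """
    if T > m_chi:                       # relativistic limit: n ~ T^3
        g_eff = (0.75 if fermion else 1.0) * g
        return (ZETA_3 / math.pi**2) * g_eff * T**3
    # non-relativistic limit: g * (m T / 2 pi)^(3/2) * exp(-m/T)
    return g * (m_chi * T / (2 * math.pi))**1.5 * math.exp(-m_chi / T)

# Example: a 100 GeV particle at T = m/25 is hugely Boltzmann suppressed
print(n_eq(100.0, 100.0), n_eq(4.0, 100.0))
```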
Now, if the dark matter particle were always in thermal equilibrium, all the way to the late universe, that would be very bad, because we start to enter the Maxwell-Boltzmann suppression and we would end up with no dark matter around. So unless you have a chemical potential, as was just mentioned, you need to go away from thermal equilibrium, and that's the process we will study. Based on that estimate, let me now introduce a term that is used very often in the field: the concept of freeze-out. When you stop following this equilibrium distribution, because the annihilations are no longer efficient and the interactions are not able to keep everything in thermal equilibrium, you reach the freeze-out point. From that point, the number density is frozen: it's not constant, because the universe expands, but the processes that change the number of dark matter particles are not happening anymore, and from then on it just dilutes with the expansion. So this is the freeze-out, and there is a temperature associated with it. A naive estimate, which works very well when you then solve the Boltzmann equation and check, is that freeze-out happens when the interaction rate evaluated at that temperature equals the Hubble rate evaluated at the same temperature: Γ(T_f) = H(T_f). That's the freeze-out point, which makes sense, because when these two are equal, you basically stop having efficient annihilations, based on the estimate above. So our goal here is, first, to compute T_f, that is, to understand when freeze-out happens. And the second task, which is even more important: once freeze-out happens, everything is very boring. The dark matter particles are there, they sit there, and the number density in a comoving volume is frozen, in the sense that the number density itself is not constant but only scales because of the dilution due to the expansion of the universe. What we want to do is compute the relic density of dark matter and compare it with the number we observe today. OK, so how does it work? Let's do the first calculation first: let's compute the freeze-out. In order to compute the freeze-out, we have to solve this equation, Γ(T_f) = H(T_f). But to solve it, we need an expression for the rate. So let's start from things we know. We know Hubble: from the Friedmann equation, H = sqrt( rho / (3 M_Pl^2) ). You may also see this equation written with the Newton constant instead of M_Pl; the two ways are the same, and you can take this as a definition of M_Pl if you want, because G_N has mass dimension minus 2. We also know that for a radiation gas, rho goes like T to the fourth; again, the same dimensional analysis if you want, or think about the black body: for a photon gas, the energy density scales like the temperature to the fourth. So you plug this in, and Hubble scales like T squared over M_Pl, up to order-one factors that will not affect our estimate. OK, so we know this. This is good. What we're missing is the rate.
So the rate, I'll give you the answer: the rate for processes like chi chi going to SM SM scales like the number density times the cross-section times the relative velocity between the two particles in the initial state, Γ = n ⟨σv⟩. I put an average symbol here because, once you have dark matter particles distributed thermally, you average over all the possible initial states. But don't worry about the average. Actually, let's do this: we will consider a situation where this quantity is a constant, σ_0. This happens in many models: if you do a partial wave expansion of the annihilation amplitude, which is just a non-relativistic expansion, you have a contribution already at s-wave. So let's just say this is a constant. Then the only non-trivial temperature dependence enters through n. Now, which expression do we use for n? I wrote two expressions for n: one for when the particle is relativistic, T cubed, and the other for when the particle is non-relativistic, the prefactor times an exponential suppression. Well, which expression you use depends on what value of T_f you find. So you can try: if T_f is bigger than m_chi, you have what are called hot relics, and if T_f is less than the mass, you have cold relics. Hot and cold; I hope the names make sense. In the first case, you decouple when the temperature is higher than your mass, so in some sense you're hot, because you're moving very fast compared to your rest energy. In the second case, it's the opposite. Both cases are possible, and you can do the calculation for both. I will only do the one for cold relics, and there is a reason: as we saw yesterday and as I repeated today, we have bounds from structure formation saying that the dark matter must be cold, or at most maybe warm. If the freeze-out temperature is much higher than the mass, it ends up being a hot relic. The first case would be the case for the standard model neutrino, by the way; people in the '80s were considering this option, the standard model neutrino as a viable dark matter candidate. If you want to compute that relic density, you have to do the calculation for a hot relic, using the relativistic expression. We now know it cannot be the dark matter, but hot relics are still important, because they can give you a contribution to dark radiation, a correction to N_eff, the effective number of neutrinos as measured by the CMB. So that calculation is useful anyway. But I want to focus on cold relics now; if you want, you can try the hot relic calculation yourself, and if you want details you can ask me offline. So, cold relics. For cold relics, I want a dark matter particle that escapes thermal equilibrium at a temperature below its mass. So when I compare Γ with H, I have to use the non-relativistic expression. Let's use it, and let me drop the 2 pi factors. Sorry, I forgot the σ_0 here. Again, this is an assumption, but an assumption valid in many models: σv is a constant. You can have models where this is not the case, but it simplifies the discussion. OK, so I'm comparing n σ_0 to Hubble, and let me give you some numbers. M_Pl, the way I define it, is 2.4 times 10 to the 18 GeV.
And what we do not know in this equation are the value of m_chi and the value of σ_0. For each value of m_chi and each value of σ_0, we can solve this equation. It's not something you can do analytically, because there is a power law and there is an exponential; you have to give the equation to a computer, but it's easy, it takes 30 seconds. There is perhaps a better way to write this equation, by using a convenient variable, x. You can think of the temperature as a time variable, like a clock: if you tell me the temperature of the universe, I can tell you how old the universe was. Here it's the same: instead of the temperature, use x, a dimensionless quantity given by the ratio of the dark matter mass over the temperature, x = m_chi/T. Since the universe is cooling down, T goes down as the universe expands, so x grows: if you go toward larger x, you're going forward in time. Likewise, you can define x_f, the variable x evaluated at freeze-out. And since we are talking about cold relics, we want x_f bigger than one; otherwise, we are in contradiction with our assumption that the particle escapes thermal equilibrium at a temperature lower than its mass. OK, so you do some algebra on this equation, which I skip, and you get the final equation, one of the most important equations of today. It gives the freeze-out temperature, expressed in terms of x, as a function of the dark matter mass and the cross-section. You see that the value of x_f has a very weak dependence on the dark matter mass and on the cross-section, because of this exponential: you can rewrite the equation as x_f = (1/2) log x_f + log[ m_chi M_Pl σ_0 / (2 pi)^(3/2) ]. So if you change the mass or sigma by a given amount, the effect on x_f is only logarithmic. So we can try to plug in some numbers. And now I come to the WIMP. Until now, I made no assumptions about the mass or the size of the cross-section. Now let me focus on a broad class of dark matter candidates, the WIMPs. WIMP is an acronym; it stands for weakly interacting massive particles. These are particles that appear very naturally in extensions of the standard model aimed at solving other problems. Theories like supersymmetry, extra dimensions, and many others are not designed to provide us with a dark matter candidate; they are trying to solve another problem of the standard model, the hierarchy problem. But automatically, they provide a dark matter candidate. So you kill two birds with one stone: one theory that solves two problems. That's why this type of candidate is particularly exciting. As I said before, the mass window is, broadly speaking, between 10 GeV and 10 TeV. And the cross-section, this is the important number: it has mass dimension minus 2, and it's given by a coupling squared over a mass squared, times a phase space factor that I put in just to get a better estimate. Here I take the coupling constant to be 10^-2, a typical value for weak interactions, and the mass in the denominator to be 100 GeV, roughly the mass of the weak gauge bosons: the Z has a mass of 91 GeV, the W of 80 GeV, so let's call it 100; we're just making an estimate. So you plug these numbers in, and this is a log.
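As a minimal sketch of "give the equation to a computer", here is a Python fixed-point solve of the x_f equation above, with the WIMP-like numbers just quoted. The bare alpha^2/m^2 parametrization of σ_0 (ignoring the phase-space factor) and the (2 pi)^(3/2) bookkeeping are my simplifications, so only the rough size of x_f matters:

```python
import math

M_PL   = 2.4e18                 # reduced Planck mass in GeV
alpha  = 1e-2                   # weak-ish coupling, as quoted
m_chi  = 100.0                  # dark matter mass in GeV (the "weak scale" choice)
sigma0 = alpha**2 / m_chi**2    # rough sigma*v ~ alpha^2 / m^2, in GeV^-2

# Solve x_f = 0.5*log(x_f) + log(m_chi * M_PL * sigma0 / (2 pi)^(3/2)) by iteration
x_f = 10.0
for _ in range(50):
    x_f = 0.5 * math.log(x_f) + math.log(m_chi * M_PL * sigma0 / (2 * math.pi)**1.5)

print(f"x_f ~ {x_f:.1f}")   # comes out in the mid-to-high 20s, as claimed below
```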
So you can just count the exponents and sum them. I'll give you the result: for WIMPs, you get x_f of approximately 25. It can be 28, it can be 22, maybe 30. It cannot be 1,000. Why? Because the dependence on the cross-section and on the mass is logarithmic, so to increase x_f by a factor of 100, you would have to increase the mass or the cross-section by a huge amount. So within this WIMP window, you get a very solid result for x_f, which is more or less 25. And this is good news, because we wanted this: remember, we wanted the relic to be cold. So it's a consistency check, if you want; it confirms that what we were doing was correct. So we succeeded in doing this. Any questions so far? Is it clear? Good. So for cold relics, you quit thermal equilibrium at a temperature which is, roughly speaking, the mass divided by a factor of 25 or 30. Now we need to perform the second task, which is the most important one: to compute the relic density, because we always have to face this number, and we want to make sure that whatever model we are working on is able to reproduce it. There are many ways to compute this; let's go for a quick one. So what is rho? Since these are cold relics, all the energy density is stored in the mass. So rho, actually let's call it rho_chi, because the particle is chi, is m_chi times n_chi, mass times number density. This is true at all times. Now, what is xi_chi? xi_chi, I have the definition up there, is rho_chi divided by the entropy density. At temperatures below the freeze-out temperature, I can write this as a ratio of comoving quantities, with a the scale factor: the denominator, s times a cubed, is constant, because the entropy in a comoving volume is conserved; the entropy density is not constant, but the entropy density times the scale factor cubed is. And the number density times a cubed is also constant after freeze-out, because there are no reactions happening anymore; the number density is just diluting with the expansion, n_chi goes like a to the minus 3, one over the volume, so n_chi times a cubed is constant. So the numerator is constant and the denominator is constant. And since the ratio is constant, it's convenient to evaluate it at a time when it is easy to evaluate, and the best time is the freeze-out time. If I evaluate this ratio at freeze-out, then I'm sure that after freeze-out, up to small corrections that you can only capture by solving the Boltzmann equation, this ratio stays constant; in particular, it's the one I measure today. Now the estimate is easy, because of the very definition of freeze-out: the number density at freeze-out times the cross-section equals Hubble at freeze-out, n_chi(T_f) σ_0 = H(T_f), so n_chi(T_f) = T_f^2 / (M_Pl σ_0). When I evaluate things at freeze-out, I also have to specify the entropy density. For the entropy density, you can write the full statistical mechanics result, with pi factors and g-star factors, but these are order-one factors; it's just the freeze-out temperature cubed, s ≈ T_f^3. So now we are done, because xi_chi, which is rho_chi over s, both evaluated at freeze-out, and I emphasize you do this at freeze-out, stays equal to this value forever. And what is it? It's m_chi times n_chi at freeze-out, which is T_f squared over M_Pl times σ_0, divided by T_f cubed, which is the entropy density. So I can cancel these powers of T_f; I'm left with only one power in the denominator, and I recognize that I have an x_f factor.
So x_f is m_chi over T_f, and we are basically done: xi_chi = x_f / (M_Pl σ_0). Let me write this equation again, because it's important, and I want to write it in a different way: I can take it as a result for the cross-section I need in order to produce the dark matter with the right relic density. So xi, sorry, this is xi_chi; if I want it to equal xi_DM, the one I measure, I now put in the number from the observation, and σ_0 = x_f / (xi_DM M_Pl). We saw from the previous calculation that x_f is 25, up to logarithmic corrections in the mass and the cross-section, so let's take it to be around 25. Then we have xi_DM; I said the precise number is 0.44 electron volts, let's say 1 electron volt. M_Pl is about 10^18 GeV, so 10^27 electron volts. Now a miracle is about to happen, because you want to know what scale is associated with this cross-section. So let's write the cross-section as a weak coupling squared over a mass squared, with a numerical factor like 32 pi, and let me call this mass m_star; I don't know what this m_star is, I want to solve for it. It must equal x_f, which is 25, over 10^27 eV squared, just putting the numbers in. So now you can try the numbers: if you take for the coupling alpha = 10^-2 and for m_star the weak scale, you find that these numbers are in agreement. If you go to dark matter talks or read dark matter articles, this result is also known as the WIMP miracle. Why do people claim it's a miracle? Because we have reasons to expect new degrees of freedom at the weak scale, new particles with this mass and this coupling, for reasons independent of the dark matter: this is the hierarchy problem of the standard model. And, as I said, many frameworks like supersymmetry and extra dimensions naturally provide us with dark matter candidates. And it's amazing that if you do the calculation, and here you have no freedom, you know the coupling constant, you know the mass, you find that you reproduce the relic density. So you have this particle for reasons independent of the dark matter, you compute the dark matter abundance, and you find that the density is the same as the one you observe. Yes. Yes, yes, yes. So, to do things properly, you can put here x_f as a function of the mass and of σ_0, and then you solve for σ_0; that's the proper way to do it. You remember x_f depended on m_chi and σ_0, so you fix a mass and then you solve for σ_0. You can totally do that. The reason I put 25 is that in the equation before, there was 25 plus log corrections. So if you change σ_0, even by a factor of 100, x_f doesn't change very much; this is just for an estimate on the blackboard. If you want to do the calculation properly, you do it with a computer. But this 25 is very robust: if you change sigma by factors of 100 or even 1,000, the 25 will become 28, maybe 30, but it will not change very much. But well, that depends. That depends, because, well, I will discuss that on Wednesday. Direct detection: it depends, because you have to rotate the diagram. Maybe you annihilate to muons. Direct detection probes the coupling to nucleons, but you may have thermal production set by annihilation to muons, or to top quarks, or to taus. Or you may have a spin-dependent cross-section in direct detection. So we will go through this on Thursday, but it's model dependent.
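A quick numerical sketch of this coincidence, using the same rounded numbers as on the board (xi_DM rounded up to 1 eV, and a bare alpha^2/m^2 parametrization of the weak-scale cross-section with no 32 pi or phase-space factor, so only the order of magnitude is meaningful):

```python
M_PL_eV  = 1e27        # reduced Planck mass, roughly, in eV
xi_dm_eV = 1.0         # rho_DM / s today, rounded from ~0.44 eV up to 1 eV
x_f      = 25.0        # freeze-out value from the previous estimate

# Cross-section needed to reproduce the observed relic abundance
sigma_needed = x_f / (xi_dm_eV * M_PL_eV)        # in eV^-2

# Cross-section you would naively write down for a weak-scale particle
alpha  = 1e-2
m_star = 100e9                                   # 100 GeV in eV
sigma_weak = alpha**2 / m_star**2                # crude alpha^2 / m^2

print(f"needed : {sigma_needed:.1e} eV^-2")      # ~ 2.5e-26
print(f"weak   : {sigma_weak:.1e} eV^-2")        # ~ 1e-26, same ballpark
```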
Coming back to the question: there are models where this is excluded. But going from the relic density calculation to the direct detection cross-section is a model dependent step, because we are probing a different process. We will see more on Thursday. Oh, wow. OK. So yeah, one more thing about the last question. I said that this was 25, and the reason why I didn't care too much about 25 versus 30 is that here I have two numbers that are very different from each other: the Planck mass is 10^27 electron volts, and xi is about 1 electron volt. So whether I have 25 or 50 here, the interesting thing about this equation is that I'm taking the product of two very different numbers, and, in some sense, the geometric mean of these two things, because if I want to solve for m, I have to take the square root. So the geometric mean of two very different numbers gives me the weak scale. That's the WIMP miracle. I don't like this name; it's not a miracle, it's a coincidence. Because, as you see, this is an equation for sigma. So all the WIMP miracle tells you is that you need a sigma as big as the weak-scale value. But if the dark matter mass is even 10 MeV, much smaller than the weak scale, you can still get the right relic density. So it's not really a miracle. It's remarkable that we get this coincidence, one small number and one big number whose geometric mean is the weak scale, but it doesn't point at a particle with mass at the weak scale, because you solve for sigma, not for m. OK, yes. Good, OK. So let's discuss this now. The question is: what about BBN? There are constraints from BBN, which was actually the next point I wanted to make. So let me summarize what we found so far. First, x_f: I can solve for x_f. I have an equation: I choose m, I choose sigma, I solve. Oh, yes, yes, otherwise it's bad. OK, very good, thanks: log of x_f. Yeah, thank you. So if you plug in the numbers we've been playing with so far, we say x_f is 25. Then we have an equation telling us that the σ_0 needed to reproduce the abundance is the x_f you get from solving the previous equation, divided by xi_DM times M_Pl. That was the other equation. Now, this equation tells us that sigma must be very close to weak-scale cross-sections; that's the miracle. It doesn't tell us much about m, because m only enters here. Let me emphasize this by writing the dependence on the mass and the cross-section inside the x_f. So there is really no way this equation tells me something about the mass, other than pointing at a value for the cross-section, which is a dimensionful parameter; and there is also a coupling constant inside the cross-section, so I can play with these values. Now, I claim that thermal relics must have a mass bigger than about 1 MeV. Why? If the mass is lower than 1 MeV, and these are thermal relics, so this is only valid for thermal relics (the axion is a fine dark matter candidate, it's non-thermal, this limit doesn't apply), then I reach BBN with this ensemble of relativistic particles, with a full T-cubed abundance, annihilating into other particles. So there are two effects. First, I'm going to change the number of neutrinos,
the effective number of neutrinos at BBN, because I have new light stuff. And that's bad, because we know that BBN tells us there must be three neutrino species. There is some uncertainty, but if you add a Dirac fermion, a Dirac fermion has four degrees of freedom, two spin states each for particle and antiparticle, so you're going to mess up the value of N_eff. The other issue is that these dark matter annihilations also dump energy into the plasma, so you can affect the way the elements are formed in the BBN processes. You can study these things in detail, but to be safe, if you say that the mass is above 1 MeV, then by the time you get to BBN you are already deep in the Maxwell-Boltzmann tail of the distribution, because it's e to the minus m over T, and at that point you can forget about them: not for the dark matter calculation, but for the BBN processes. So this is a lower limit. There is also an upper limit, which I mention quickly, and it follows from unitarity. I'll just give you the result, and you are more than welcome to ask me questions. If you impose unitarity, which just means that when you compute probabilities, the sum of all probabilities must give you 1, there is a bound on the annihilation cross-section: it has to be less than 4 pi over m_chi squared. But we know that σ_0 is essentially fixed from this equation; σ_0 has a precise value. So you can read this as an upper limit on the mass, because you can just rewrite the bound that way. Somebody asked me before about upper limits; this is one upper limit I know, and for thermal relics it's approximately 100 TeV. OK, so since we have five minutes left, there is no time to do freeze-in. But let me tell you two important things. The first is that when you actually work on this and write a paper, you don't do the estimates I made on the blackboard; you do a serious calculation, which has to be done numerically, because you have to solve a differential equation, the Boltzmann equation: dn_chi/dt + 3 H n_chi = -⟨σv⟩ (n_chi^2 - n_chi,eq^2), where you solve for n_chi as a function of time. And there is a better, or at least more convenient, way to write this equation in terms of dimensionless quantities. Y_chi, which is n_chi over the entropy density, is called the comoving number density. This is convenient because this variable scales out the effect of the expansion: once the interactions are switched off, Y is constant. And the other variable is the one we have already seen before, a different time variable, x = m_chi/T. So you change variables, and the reason to do it is, well, it's nicer to have the comoving number density because you don't worry about the expansion anymore, but also because when you give this equation to a computer for numerical solutions, it's nicer to use dimensionless variables. And the equation you solve is this: dY/dx = -(s ⟨σv⟩ / (H x)) (Y^2 - Y_eq^2). That's the equation you really solve. And you may have seen that there is a famous plot that appears in many textbooks; if you go to a dark matter talk, many people start with this plot. So let me explain what this plot means. One curve is the equilibrium comoving number density: the equilibrium Y feels the Maxwell-Boltzmann suppression, where Y is this n_chi over s.
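A minimal numerical sketch of this, assuming an s-wave (constant) ⟨σv⟩, a constant g_* of about 100, and a 100 GeV benchmark mass with σv ~ 10^-8 GeV^-2 (my illustrative benchmark, not lecture numbers), just to reproduce the shape of that famous plot:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed benchmark: 100 GeV WIMP, sigma*v ~ 1e-8 GeV^-2, g_* ~ 100, g_chi = 2
M_PL, m_chi, sigmav, g_star, g_chi = 2.4e18, 100.0, 1e-8, 100.0, 2.0

def hubble(T):
    # H = sqrt(rho / 3 M_Pl^2) with rho = (pi^2/30) g_* T^4
    return np.sqrt((np.pi**2 / 30) * g_star / 3) * T**2 / M_PL

def entropy(T):
    # s = (2 pi^2 / 45) g_* T^3
    return (2 * np.pi**2 / 45) * g_star * T**3

def y_eq(x):
    # non-relativistic equilibrium Y = n_eq / s
    T = m_chi / x
    n_eq = g_chi * (m_chi * T / (2 * np.pi))**1.5 * np.exp(-x)
    return n_eq / entropy(T)

def dY_dx(x, Y):
    # dY/dx = -(s <sigma v> / (H x)) (Y^2 - Y_eq^2)
    T = m_chi / x
    return -entropy(T) * sigmav / (hubble(T) * x) * (Y[0]**2 - y_eq(x)**2)

sol = solve_ivp(dY_dx, (1.0, 1000.0), [y_eq(1.0)],
                method="Radau", rtol=1e-6, atol=1e-20)  # stiff ODE -> implicit solver
Y_final = sol.y[0, -1]
print(f"Y today ~ {Y_final:.2e}")
print(f"xi = m * Y ~ {m_chi * Y_final * 1e9:.2f} eV")  # within a factor of a few of ~0.44 eV
```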
We have seen that for non-relativistic particles, n_chi has the exponential Maxwell-Boltzmann factor, so this curve is what you would have if you were always in equilibrium. If you solve the Boltzmann equation numerically, what you find is that at early times you are in equilibrium, of course, but then you go away from thermal equilibrium, and this happens precisely at the x of 25 we have seen today. So this is the freeze-out. And this is the actual way you do the calculation: you solve for Y and you look for the asymptotic value that stays constant after freeze-out. That gives you the relic number density today. And then, in the last few minutes, this is something I already mentioned yesterday: we are able to extrapolate the thermal history of the universe back only to temperatures of about 1 MeV, the time of Big Bang nucleosynthesis. We don't know the energy content of the universe above the BBN temperature; we have no way to say what it was. The simplest thing you can do is extrapolate the picture you have at BBN to higher temperatures, but that is an assumption, and by that I mean an assumption on the Hubble rate, the function H(T) for T above T_BBN, which is approximately 1 MeV. All the interesting things that happened during today's lecture took place above this temperature; I even argued that we want particles with masses above 1 MeV. So the calculation I sketched today is based on an assumption, the simplest thing you can do, and going away from this assumption requires some non-trivial extension of your cosmological history. There are constraints; it's not always easy to do, but it's possible. This is just to say that you may hear people claiming that a model is excluded because it fails to reproduce the relic density. You should not automatically believe this, because what it fails to do is reproduce the relic density when the calculation is performed in the simplest case, which is based on an assumption. And it's very easy to go beyond it; I also work on these things, so if you want to know more, you can ask me offline. OK, thank you.