Thanks. So as promised, we go into more detail on some specific mechanisms within our framework, if you wish, for dark matter production. And I argue that this is a necessary step if you want to go further on the quest for dark matter. And before you ask any question, this is the strongest argument I found out there favoring this mechanism among others, namely that the boss said so. So it must be correct. She's the one. I'll start with a recap of what we learned in the first lecture. Namely, there is a magic number that pops out, notably, of cosmological observations, which we should aim at explaining dynamically: the amount of cold dark matter, which is roughly one fourth or so of the critical energy density of the universe right now. And a slightly more physical way to rewrite this number is in terms of a dimensionless quantity, namely the number density of particles of this species normalized by the total entropy density; we convinced ourselves that this is the most convincing, or even the unique, way to explain the wealth of observations that we have. And then there is the mass of the species, which I arbitrarily normalize to GeV. Basically, we should find a dynamical theory predicting this number density such that, for a given mass of our species, the product matches what is observed. This is the first empirical requirement for a good theory of dark matter. And it's also, if you wish, the first step before we can hope to find any signal of dark matter. Now, the plan of the lecture is the following. First, I will try to give you a heuristic argument for a dynamical equation that aims to predict the current value of this variable. Then I will show you how you can use this equation, at least at leading order, to describe the evolution in number density of different types of candidates. And as promised in the first lecture, I will show you explicitly that, for example, neutrinos do not work, just because this match between prediction and observation is not reached. Then I will apply it to the more popular, so-called WIMP candidates. I will come back and try to justify on a deeper, more advanced level this equation that I introduced heuristically to start with. And then I will indulge in some technicalities on the Boltzmann equation. Again, if you lose me on that part, do not worry; it's just for those of you who plan to work on these things, so that they know how to go forward. Then I will spend some time justifying that sometimes you really need these more advanced tools, in particular for warm dark matter candidates, candidates which have some small velocity dispersion, so they are semi-relativistic, if you wish. And I will briefly sketch one example. Finally, I will spend some time on direct detection of dark matter. I still consider it a sort of astrophysical signal of dark matter, in the sense that the signal that you find in detectors underground depends crucially on the local density of dark matter, which is an astrophysical quantity, and on the phase space, or if you wish the velocity distribution, of the dark matter. Again, most of you are not concerned with this physics at all, but I'd like to show you how basic mechanical relations determine the exclusion plots that you might have seen here and there. And it's really billiard-ball physics.
Now let's start from some really basic relations to justify the kind of integrated Boltzmann equations people use for performing calculations of relic abundances. We saw yesterday that if nothing happens and a species is isolated, the number of particles in a comoving volume is conserved, which is equivalent to saying that n times a³ is constant. The more elegant way to write it is what I put there: dn/dt + 3Hn = 0, and this 3 is what encodes the cubic dependence. But we want to go beyond that, because this is not enough: as long as this is conserved, we don't have a dynamical mechanism to produce the right amount that we need. So what is needed is some mechanism that, as I said in the first lecture, when the interaction rate of the species with the rest of the plasma is fast enough, drives it close to thermal equilibrium; and in the limit where the rate of interaction (remember, Γ = n⟨σv⟩) is much smaller than H, it should give back this isolated evolution of the species. Now, for equilibrium particles in the non-relativistic regime we also know the analytical expression, the one I wrote on Monday, whose behavior is dominated by the Boltzmann suppression factor, the e^(−m/T). Remember how this emerged: it emerged from n = g ∫ d³p/(2π)³ f, where for f I am using the Boltzmann distribution, basically e^(−E/T), and by imposing that you are in the non-relativistic regime, you get this expression. Now, the generalization I was talking about, allowing you to recover the equilibrium distribution in the high-interaction-rate limit and this expression in the decoupled regime, is the one I'm writing there. Let's convince ourselves that this is the case. For the time being, the symbol ⟨σv⟩ in this equation, which I wrote out of the blue, is just formal: it has the correct dimension of a cross-section times velocity, but apart from that you don't know how to compute it. The equation says that the evolution of the density in time is dn/dt + 3Hn = −⟨σv⟩(n² − n_eq²), where n_eq is just the expression I wrote there. So in the limit where, as I told you, Γ is much larger than H, what does it mean? It means that ⟨σv⟩n² is much larger than Hn. And then what happens? You can neglect the Hubble term, and in order for dn/dt to be 0, you basically need n to be equal to n_eq: you have to cancel this right-hand side. On the contrary, when Γ is very small compared to H, you can basically neglect the right-hand side, you only keep the Hubble term, and the evolution is the one I wrote before. So this is something I just wrote down; I will justify it on a more fundamental level later on. But we know that it has the correct behavior: when the interaction is very fast, your n must track the equilibrium distribution for the equation to be consistent; when the interaction rate is very small compared to the expansion rate, it goes back to the free expansion evolution, the conservation of particles in a comoving volume.
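For those following along without the slides, the equation being described is, in the standard form it takes in the literature,

$$\frac{dn}{dt} + 3Hn = -\langle\sigma v\rangle\left(n^2 - n_{\rm eq}^2\right), \qquad n_{\rm eq} \simeq g\left(\frac{mT}{2\pi}\right)^{3/2} e^{-m/T} \quad (T \ll m),$$

where the second expression is the non-relativistic, Boltzmann-suppressed equilibrium density recalled above.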
There are some technicalities here that you might find in the literature. In general, the equation is not written in terms of n but in terms of an auxiliary variable, Y, which is nothing but the n/s we introduced last time. You can easily write how Y evolves in time: you just multiply numerator and denominator by a³. We know that s·a³ is constant, at least when the expansion is isentropic, with no release or sink of entropy and so on; so you can pull it out, expand the numerator, and simplify. What it means is that dn/dt + 3Hn is equivalent to s·dY/dt: you get rid of the Hubble term just by rewriting things in terms of a quantity normalized to a density. This is a standard trick. The other standard trick people use is to rewrite the equation not in terms of the time variable but in terms of a temperature variable. I told you that once you use the Friedmann equations, you can interchangeably use, for example, the CMB temperature as a sort of clock variable, and this is what is done here. Basically, you rewrite the derivative with respect to time as the derivative with respect to x times dx/dt, and dx/dt is nothing but H times x; the step-by-step derivation is there. At the end of the day, you end up with the compact expression you see there for this normalized density, with a right-hand side depending quadratically on the variable and on its equilibrium value, which is, again, a known function of T, or if you wish of x. And here you see emerging exactly the kind of ratio we have been talking about: the ratio of the interaction rate to the Hubble expansion rate. Just for convention, in the radiation-dominated period, since H is proportional to the square root of the total energy density, which goes as the fourth power of the temperature, H is proportional to the temperature squared. The new variable x that I have introduced is dimensionless and proportional to 1/T, so H carries a factor x⁻²; I plug it in, and a factor of x emerges in the numerator. And I can normalize the Hubble expansion rate to some arbitrary reference mass M. This is completely arbitrary; it's just a trick you find in the literature, so don't be surprised. In principle you can choose any mass, and it's still valid as long as you are in the radiation-dominated period. Why do you need to be in the radiation-dominated period? Because you have to generate this dark matter early enough. Remember that you must allow the perturbations in dark matter to grow from the time the decoupling takes place; otherwise there is no time for the structures to grow. So whatever happened to the dark matter must have happened early enough, and there is basically no loss of generality in writing it this way. And if you wonder what happens to this equation when you take into account that the entropy can change and that the Hubble expansion rate is not given by this simple expression, well, the generalization has been worked out: remember those factors h_eff and g_eff that I introduced. That is a much more accurate description of what's going on.
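Putting those two tricks together, with Y = n/s and x = M/T (M being the arbitrary reference mass), the compact expression referred to above takes the standard form

$$\frac{dY}{dx} = -\frac{\lambda}{x^2}\left(Y^2 - Y_{\rm eq}^2\right), \qquad \lambda \equiv \frac{s(x=1)\,\langle\sigma v\rangle}{H(x=1)},$$

valid in radiation domination with constant degrees of freedom; the factor x⁻² is exactly the one mentioned above.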
However, in periods where the number of degrees of freedom populating your plasma does not change, that derivative term is zero and these factors are constant. The entropy density s goes as x⁻³, because it goes as the temperature cubed. You simplify, and you get this x². So the general expression reduces to the simple one once there is no change of degrees of freedom during the period where the dynamical production of the species, the so-called freeze-out of this non-relativistic species, takes place. Just keep as a caveat that you can easily write a more general expression. Now, what's the problem with this expression? Even if it's quite pedagogical and intuitive, you cannot solve it exactly analytically, because it's a Riccati equation. And rather than just telling you, OK, you plug it into a computer, you write a little code and you solve it, it's actually instructive to look at some limiting behaviors of this equation. For the rest of the lecture, I will fix this arbitrary mass, which I introduced to normalize the auxiliary variable, to the mass of the dark matter particle. It need not be so; you can choose anything, but people tend to do that because it's the natural scale in terms of which you can tell whether the particle decouples in the non-relativistic or in the relativistic regime. If you choose the mass of the particle as your unit of temperature, then x very small means the temperature is much higher than the mass, so the particle is relativistic if it's in thermal contact; and x very large means the temperature is much smaller than its mass, so at that time the particle, if in equilibrium, is clearly non-relativistic. Now, the way to see these two regimes is to rewrite the equation; this is nothing but some algebra. I can rewrite it in terms of a relative, or sort of logarithmic, derivative, normalized with respect to the equilibrium value, which is a known function. Again, there is nothing magic there. But this is a yet clearer example of what I have been saying since the first lecture: once the reaction rate is very large with respect to H, you must cancel the collisional part of the equation, i.e. you must track the equilibrium distribution; when it is very small, you can just neglect that term, and you have a constant evolution, a constant value of Y. And again, this is nothing but a rewriting of the same equation. So you understand from this expression that the key condition discriminating between the different regimes of your equation is Γ = H. I anticipated that this was the criterion coming back again and again in many cosmo-particle applications. When the reaction rate Γ, computed with the equilibrium distribution of my particles, equals the Hubble expansion rate, a change of regime happens in this equation, and so I have two analytical limits of the equation. Now, let's work out this condition for two cases. The first case is a particle for which this condition is satisfied in the relativistic regime; the second will be a case where it is satisfied in the non-relativistic regime. Now, the former case is something that we know should have happened, and it is the case of neutrinos, because we can compute their interaction rate. And you can even guess it quite simply, dimensionally, why this is so.
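The "logarithmic derivative" rewriting being described can be written, under the same assumptions as above, as

$$\frac{x}{Y_{\rm eq}}\frac{dY}{dx} = -\frac{\Gamma}{H}\left[\left(\frac{Y}{Y_{\rm eq}}\right)^2 - 1\right], \qquad \Gamma \equiv n_{\rm eq}\langle\sigma v\rangle,$$

which makes the Γ = H criterion manifest: for Γ ≫ H the bracket must vanish (Y tracks Y_eq), while for Γ ≪ H the right-hand side is negligible and Y stays constant.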
What is the interaction rate for neutrinos? It's n⟨σv⟩; for a thermal species, n goes as the cube of the temperature. And a typical cross-section for weak interactions is σ ~ G_F² s, where s here is the Mandelstam variable of particle physics, the square of the center-of-mass energy; in a thermal bath it goes roughly as G_F² T², because the typical energy is of order T. The velocity is basically c, which is 1 in our natural units. So Γ goes as G_F² times T to the fifth power. And the Hubble rate we know how to write: it is nothing but the square root of Newton's constant times T², apart from numerical factors. So you can solve this condition, for example, for neutrinos, and that's what I'm sketching here. What's the result for the value of Y that you get? It's some number of order unity times a factor; actually, this is a little more general: it holds for bosons and for fermions, and for a particle with an arbitrary number of internal degrees of freedom. And here you have this h_eff that counts the number of degrees of freedom contributing to the entropy density. You have to equate that to what we observe; this is another way to write the same equation that I wrote at the beginning for a relativistic relic, where I replaced Y by this expression. I'm doing very basic algebra here. For neutrinos, you can compute the epoch at which this takes place. At that epoch the universe contains only photons, the neutrinos themselves, electrons, and positrons; all the rest is too heavy to be produced thermally. So you can count and get 10.75 for the number of degrees of freedom. And then you get this expression: the contribution of standard model neutrinos to the total energy density of the universe, times h², is given by the sum of their masses, assuming that by now they are non-relativistic, divided by 94 eV. Why is this an important result? Because we have an upper limit to the sum of the neutrino masses from experiments: tritium endpoint experiments, looking at the electron energy distribution in radioactive decays like tritium going into helium-3, plus the constraints coming from the oscillations I mentioned yesterday. These constraints add up to say that the sum of the neutrino masses cannot exceed something like 6.9 eV, which means that this ratio must be significantly less than 0.1 or so. So even if there were no problem with the structure formation argument that I showed before, there is not enough mass in neutrinos to account for the number we want to explain, OK? So, since we cannot make it work with any relativistic particle that we know, and we know anyway that we shouldn't be looking for a relativistic relic, simply because even if we found an exotic relativistic relic matching this omega we would still have the problem with structure formation, let's move to the other obvious possibility, namely the non-relativistic limit, OK? And we do exactly the same thing: you impose this condition under the hypothesis that whatever particle is undergoing freeze-out does so in the non-relativistic limit. Since I'm using the mass of the particle to define my unit of temperature, non-relativistic freeze-out means that the variable x must be large with respect to 1, OK? Of course, if you choose a different unit, you have to rephrase this in the new units.
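As a quick numerical check of the hot-relic result quoted above, here is a minimal sketch, using only the numbers given in the lecture (the 94 eV normalization and the ~6.9 eV experimental bound):

```python
# Hot-relic (standard neutrino) abundance: Omega_nu h^2 = sum(m_nu) / 94 eV,
# valid for a species freezing out while relativistic, with h_eff = 10.75.
sum_m_nu_eV = 6.9                     # rough upper limit on the neutrino mass sum
omega_nu_h2 = sum_m_nu_eV / 94.0
print(f"Omega_nu h^2 <~ {omega_nu_h2:.3f}")   # ~0.07: short of the ~0.1 needed
```

So even saturating the laboratory bound, neutrinos fall short of the observed cold dark matter density.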
But this is standard practice in particle physics: you write everything in dimensionless form and get rid of units, and that's it. Then you just rewrite the condition for Γ in terms of the new variable that you have introduced. What is Γ? Again, n_eq times ⟨σv⟩. The n_eq is the Boltzmann expression that I've written before, but now rewritten in terms of x_f and the mass that you have introduced; this is what is written on the left-hand side, OK? And then you write the Hubble expansion rate in terms of these units. As simple as that. So you get an implicit equation to solve for x_f, which is related to the particle physics quantities: the mass of your unknown particle, ⟨σv⟩, the cross-section times velocity with which it's kept in equilibrium with the plasma, and the number of internal degrees of freedom of your unknown particle. And then you work out what Y is. Y, by definition, is the number density of particles over s, which can be rewritten this way. Now you see that the only dependence on x_f, apart from a very weak one, is x_f times the same expression entering here: you can rewrite x_f^(3/2) e^(−x_f) as x_f times x_f^(1/2) e^(−x_f), so you just plug that expression in, times x_f. And you can solve this by iteration: you make an educated guess for the value of x_f, knowing that it must be larger than 1, and you solve iteratively. Mathematically, this procedure does converge. And that's it; that's how you perform your educated guess of the relic abundance for a non-relativistic particle undergoing freeze-out. This is completely generic, OK? Now, the result you obtain is a numerical factor, times another factor that you know must be larger than 1 for the self-consistency of the calculation, times the Planck mass (or equivalently you can rephrase it in terms of Newton's constant), the mass of the particle, and ⟨σv⟩. Now, the first important result is that the residual abundance, relative to the total density of the universe, scales inversely with ⟨σv⟩, inversely with the rate at which these particles interact with the plasma. Is this intuitive or not? The more a particle interacts with the plasma, the less of it is around when it decouples. Do you think this is weird behavior or reasonable behavior? No idea whatsoever? Well, it's very reasonable behavior. Why? Let's take a particle in equilibrium. What does equilibrium in the relativistic regime mean if I normalize to a comoving volume, or to something like s? It means that the abundance basically stays constant; this is the regime where x is relatively small. Now, let's assume that this particle keeps staying in equilibrium until an epoch where the temperature of the plasma drops well below its mass. What does that mean? It means the plasma particles no longer have enough energy to produce it. Imagine you have a reaction like electrons and positrons producing these unknown particles. I'm assuming it's self-conjugate, but you could have distinct particles and anti-particles; the situation is the same. Well, if it's in equilibrium, these two reactions clearly obey detailed balance. But there is an asymmetry here which is purely kinematical: the annihilation into light particles is always permitted, because this is a very massive particle and those are very light.
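As an illustration of the iterative solution just described, here is a minimal sketch. I use the textbook form of the implicit condition Γ = H, namely x_f = ln[c·(g/√g_*)·M_Pl·m·⟨σv⟩] − ½ ln x_f, with the conventional coefficient c ≈ 0.038; the coefficient and all parameter values below are illustrative assumptions, not necessarily the exact normalization of the slides:

```python
import math

M_PL   = 1.22e19     # Planck mass in GeV
g_chi  = 2.0         # internal degrees of freedom of the candidate (assumed)
g_star = 90.0        # relativistic degrees of freedom at freeze-out (assumed)
m      = 100.0       # candidate mass in GeV, a weak-scale example
sigv   = 2.6e-9      # <sigma v> ~ 1 pb expressed in GeV^-2

A = 0.038 * (g_chi / math.sqrt(g_star)) * M_PL * m * sigv
x_f = 20.0                                   # educated first guess, x_f > 1
for _ in range(20):                          # fixed-point iteration; converges fast
    x_f = math.log(A) - 0.5 * math.log(x_f)
print(f"x_f = m/T_f ~ {x_f:.1f}")            # ~20-25 for weak-scale parameters
```

The convergence to x_f of order 20 is the standard statement that WIMP-like particles freeze out at temperatures around m/20.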
The inverse process is only permitted if the kinetic energy of the light particles is sufficiently high to produce the heavy one. But by hypothesis I'm saying that this particle keeps staying in equilibrium until a temperature which is much smaller than its mass. So the inverse process is very suppressed, because only a very far tail of the distribution has sufficient kinetic energy to overcome the mass barrier. And so the more strongly coupled my particle is, the less of it is left when it decouples from equilibrium, because it's only a higher and higher tail of my distribution that is able to produce it. And this is what you see: this is the numerical solution to the equation I showed. The larger the cross-section is, the less of the particle is left, and the later it decouples. But since you are on a steep exponential tail, you don't actually move much in x, while you move very much along the y-axis. If you plug physical units into the expression, and I invite you to do so, you get this kind of relation: Ω_X h² is roughly given by 0.1 picobarn over ⟨σv⟩. Now, why is this number sort of magic, and why do people get excited about it? Because if you knew nothing about the coupling of the particle and nothing about its mass, you would write the order-of-magnitude estimate for the cross-section of a particle of mass m with a squared coupling α of, say, the size of the fine structure constant. And it turns out that for masses around 100 GeV or so, you get the right number to explain what you observe. And this has sometimes been named the WIMP miracle. Whether you think this is an important relation or just numerology, you shouldn't really be calling it a miracle, in the sense that it's an expectation based on the idea that the new physics related to dark matter is at the electroweak scale. If it's at the electroweak scale, it's no miracle that the coupling is of electroweak size and the mass of electroweak scale; it would only be a miracle if this turned out not to be true. But anyway, call it whatever you like. This is one of the first hints people realized, maybe 30 years ago or even more, that there may be some relation between new physics at the electroweak scale and the problem of dark matter. Now, there is an exercise that people don't do very often. I don't like applying a concept to something you don't know before applying it to something you do know about. So you can do exactly the same exercise for particles we know about: protons and anti-protons, baryons. I invite you to do this exercise. You compute the relic abundance, from thermal freeze-out, of a plasma containing protons and anti-protons, particles with mass of order 1 GeV, with cross-sections that, if you don't know better, you can take from the particle data book, or, if you know some particle physics, you can estimate to be of the order of 1 over the pion mass squared. And then you compute the relic abundance you find: Ω_p plus Ω_p̄, times h², for instance, exactly with the same formulas. This is the residual abundance we would see for protons and anti-protons in our universe (actually, we don't see many anti-protons) if the production mechanism of protons were exactly what I've described.
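Here is the order-of-magnitude arithmetic behind the "WIMP miracle" statement above, as a minimal sketch; the coupling and mass are the illustrative values quoted in the lecture:

```python
# "WIMP miracle" estimate: sigma*v ~ alpha^2 / m^2, compared against
# the lecture's relation Omega h^2 ~ 0.1 pb / <sigma v>.
alpha      = 1.0 / 137.0      # fine-structure-sized coupling
m_GeV      = 100.0            # weak-scale mass
GEV2_TO_PB = 3.894e8          # unit conversion: 1 GeV^-2 = 3.894e8 pb

sigma_v_pb = alpha**2 / m_GeV**2 * GEV2_TO_PB
omega_h2   = 0.1 / sigma_v_pb
print(f"<sigma v> ~ {sigma_v_pb:.1f} pb  ->  Omega h^2 ~ {omega_h2:.2f}")
```

With these inputs you land within a factor of a few of the observed abundance, which is the whole point: electroweak-scale numbers give the right ballpark with no tuning.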
And just out of curiosity, you should figure out what this number is and compare it with what we observe, with what we obtain from the CMB, for example, Ω_B. And you will see that there is a big mismatch. So we know that the protons, the ordinary matter around us, are not due to this mechanism I'm trying to sell you. We must conceive of at least one more mechanism in the early universe to get the right abundance of protons and of the normal matter we know about. But this is a different topic in particle cosmology. Of course, in real life, if you want to have a professional career in this subject... yes, please? The question is: who tells you that you don't get the right abundance? Yeah, I cheated. So the first point is that you don't get the right abundance, and not by a factor 2 or 3: you miss it by a factor of a billion or so. So it's clear that it's not a minor adjustment; you need something more. And the hint that you need something more comes from another observation that you probably know about, namely that there are no big amounts of anti-protons or anti-particles around. There is an asymmetry. So most likely the hypothesis that there was a plasma containing an equal number of particles and anti-particles in equilibrium, blah, blah, blah, is wrong for ordinary matter; there is some residual asymmetry. This asymmetry might even be small with respect to the thermal abundance holding while the particles are in equilibrium, on the plateau of this Y variable. But if there is a slight asymmetry, then on the exponential tail, when this Y decreases, you don't go down by a factor of a billion: you just stop at the difference between particles and anti-particles, because you cannot get rid of the excess of particles. There is nothing left against which the excess protons can annihilate. So that hypothesis is wrong. That is the idea. And then the point is to find a dynamical mechanism that can create this asymmetry, because if you believe the inflationary paradigm, for instance, you cannot easily accept that this is an initial condition of the universe; it would have been diluted away. So it's very plausible that this asymmetry was created dynamically, and you must have some particle physics ingredient that accounts for it. This is the big problem of baryogenesis. Now, if you have a professional career in dark matter searches and in calculations of the residual abundance of different candidates, the picture I just described might be a bit oversimplified. Why so? One reason might be that you have many, many more particles in your spectrum, so sometimes you have reactions like particle number one interacting with particle number two going into blah, blah, blah; you have to take into account this whole network of processes. Another complication is that in all this I'm assuming that all the other quantities do not vary much with the temperature. This must hold for h_eff, the effective number of degrees of freedom entering the entropy, and for g_eff entering the energy density; but it must also hold, implicitly in what I told you, for ⟨σv⟩. If σv is a strongly energy-dependent or temperature-dependent function, this is not true, and the calculation is a bit more involved. And you have seen what Planck, for instance, has given us: it gives us a percent-level precision determination of Ω_B.
So theorists would like to be able to predict the abundance of dark matter at least at the same level. This is why people have been developing codes, either in supersymmetric frameworks or in more general contexts where you just provide the Lagrangian of your model and the code deals with the rest. What are the kinds of situations where you must be careful? In general, it's when you have, for example, a resonance in the right kinematical conditions for your decoupling; you may have threshold effects; you may have non-perturbative effects, like the Sommerfeld enhancement that has been very popular in the last few years, a long-distance physics effect. But anyway, the message is: if you have more or less conventional dark matter candidates in a complicated framework, there exist codes that deal with that automatically. However, if you are creative enough and you cook up a model which is weird in the behavior of the cross-section with energy, please check twice, because maybe these codes do not include that case, and so you should be careful when you perform your calculations. And although this is a few-weeks school on astrophysical and cosmological topics, let me at least mention one of the reasons people believe this framework is linked with colliders; it's independent of the WIMP miracle I just mentioned, it is a different reason. Namely, it's not enough to have new physics with a non-relativistic particle that has weak coupling and roughly weak-scale mass. You also want it to be stable, and stable on cosmological time scales, because if this new particle, like basically all the new particles we have discovered, were to decay very quickly, well, it's not a good dark matter candidate, right? So in general you need some mechanism to protect your new particle from decaying into something else. And one of the simplest mechanisms (it's not unique) is some sort of parity symmetry, a Z2 symmetry, such that whatever new physics state you produce, you must produce it in pairs to conserve this parity, with the standard model particles and the new particles carrying different parities. I'm calling it parity, but it's just a new symmetry I made up. Why is this appealing to particle physicists for non-cosmological reasons? The reason is that if you do so, it's much easier to reconcile the negative results of searches for new physics with the existence of a new particle at the weak scale. Why? If you had a particle which does not obey this parity conservation rule, you might write down Feynman diagrams like that, OK, where you exchange this new particle. However, if you have this parity conservation rule, this cannot be written, because at this vertex you are not conserving parity: the new particle has one parity and these are standard model particles, so they have the wrong parity. OK, so it means that new physics can only appear in loops containing these new particles. And if it only enters in loops (for those of you who have never seen that, forget about it, but for those who have), it means that the contribution of this new physics to observables is suppressed.
So this is a sort of qualitative reason why the fact that dark matter requires stability over long time scales might be connected with a symmetry which is also useful for something else, namely to suppress the contribution of this new physics to things you should otherwise have seen, or to suppress other things, like the proton decay that has been searched for with negative results, and so on. And the other good point is that typically, in many models of new physics at the electroweak scale, this additional symmetry is available, so to speak. OK, it's not obvious that it should be preserved, but it's available. This is the case of R-parity in SUSY, Kaluza-Klein parity in extra-dimensional models, T-parity in so-called Little Higgs models, and so forth. Now, the other expectation from these models of physics at the electroweak scale is that whenever these conditions are realized, you do expect some relic from the early universe. In a certain sense, if this particle physics framework is correct, there is some fraction of dark matter that is made of these WIMPs. The converse is not true, OK? It doesn't mean that the totality of dark matter must be made of it, or that, if you don't find these scenarios validated at the LHC, you are sure that dark matter does not exist. So that's the only thing I wanted to say about links with colliders. And another thing which is useful also for what we will say... yes, sure. What you get from colliders is rather a sort of constraint in the mass-coupling plane, because the less coupled the particles are, the lighter you can allow them to be, OK? In reality, it's not that simple, because the LHC is not really sensitive to stuff which is too light: below, say, 10 GeV or so it has essentially no sensitivity, because you have a finite resolution in your trigger conditions. Anyway, this is a game that is being played; a lot of people do that, in supersymmetric models, extra-dimensional models, and so on. And the only, I would say, model-independent statement I can make is that the sensitivity is usually comparable to the sensitivity that we have in direct detection experiments, OK, the underground searches I will mention in a while. And for lighter particles, you have other constraints, like LEP, the electron-positron collider. But in general they are comparable: they test roughly this kind of coupling, say electroweak coupling, and masses of a few hundred GeV, maybe 100 GeV or so; it depends a lot on the framework you are working with. And this is the key diagram in the discovery program for WIMPs, which are the most popular candidates for this dark matter. This is the diagram that describes the process I was writing there: the fact that maybe you have some standard model particles in a plasma that, by some unknown interaction, produce pairs of these particles, which may be self-conjugate (their own antiparticles) or even be asymmetric; this depends on your model. This process, and also the opposite one, was going on in equilibrium in the early universe. Nowadays it cannot go on in equilibrium, if only because of energy conservation, because of the kinematical conditions. However, the annihilation direction can still go on in regions where you have a high density of dark matter.
So even if the probability for dark matter to annihilate is low, if you have a lot of dark matter there is some hope that it finds another particle, self-annihilates, and ends up producing some standard model particles. The opposite does not happen spontaneously in nature, just because the quarks, the leptons, and so on do not in general have enough energy. But you can make it happen in colliders: you just smash things together and make up for the missing energy needed to create these massive particles with kinetic energy. This is just Einstein's conversion of energy into mass. Or you can read this diagram another way: you can have scattering of dark matter on targets of ordinary matter, OK? And this kind of recoil search, looking for nuclei that just out of the blue get a kick, is what is going on in underground facilities a little bit all around the globe, and it's known as direct searches for WIMPs. And then, of course, you have indirect searches that look for the rare residual annihilations into standard model particles. And since you might have many such channels, you may try to cross-correlate anti-matter searches with gamma-ray searches, with neutrino searches, and so on and so forth. This is known as the multi-messenger approach to dark matter searches, OK? Ideally, you want to find different signals with different techniques, check that they are consistent, and maybe constrain your model sufficiently well that you can compute the relic abundance: you plug reasonable values for ⟨σv⟩ and the mass into my previous formula and check whether it agrees with, for example, the Planck determination of Ω. That's the plan, and we are quite far from it, but that's the idea. It's not a unique mechanism; you may have other mechanisms. For example, a hidden assumption I was making before is that you start from a situation where these new particles interact fast enough to be in equilibrium with the plasma. This might not be true: if these particles interact too weakly, maybe they were never in equilibrium with the plasma. OK, this has been known since forever, but it has now been resurrected under the name of the freeze-in mechanism. What does freeze-in mean? It means that you have just enough energy to produce these particles from time to time, starting from, for example, fermions in the plasma (electrons, positrons, quarks, et cetera), but you don't have enough interaction strength to ever bring them into equilibrium, to satisfy the Boltzmann distribution or its relativistic equivalent. In this case, the equation I wrote loses one term: the Y² term is negligible, and you only keep the equilibrium part, which is nothing but the contribution of the equilibrium particles in the plasma, and that's it. And then the final abundance of this dark matter is just given by the integral of the right-hand side, which depends on x in general. Again, this depends on the details, but you see immediately that the final abundance, when x is very large, which means at very low temperature, is now proportional to ⟨σv⟩. Before, we got the result that it's proportional to 1/⟨σv⟩; but if the particle never reaches equilibrium, it's proportional to ⟨σv⟩. So, in a sense, do you remember what I was showing you before for the evolution of the Y abundance as a function of x?
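In formulas, the freeze-in limit of the equation above amounts to dropping the Y² term:

$$\frac{dY}{dx} \simeq +\frac{\lambda}{x^2}\,Y_{\rm eq}^2(x), \qquad Y_\infty = \int_0^\infty \frac{\lambda}{x^2}\,Y_{\rm eq}^2(x)\,dx \;\propto\; \langle\sigma v\rangle,$$

so the final abundance grows with the coupling, the opposite of the freeze-out scaling Ω ∝ 1/⟨σv⟩.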
I was starting from a plateau, and then I was saying that when x is much larger than one (let's say one is here), this goes like that, and at some point you have a departure. But I was cheating a bit, in the sense that I was assuming you start from this equilibrium condition. If you go back in time, maybe to the reheating conditions after inflation, in reality what you have is a situation where this abundance is growing toward the thermal value. But if this growth is very modest, it might be that you never reach the plateau, OK, and you just stop at some point. This is known as freeze-in. This is non-thermal: if this is true, there is no temperature associated with the dark matter; it has a non-thermal distribution. You know it's cold, but that's it. Your dark matter fluid may have a very strange distribution of energies, which reflects the distribution of energies of the fermions or bosons that produced it, integrated over history; it is not a thermal relic. This is a very model-dependent mechanism; that's why people tend to forget about it and, from time to time, revisit it. The only qualitative feature is that usually, for this to be dominant, either you need the coupling to be small or the masses to be large. Why? Because σv is in general something like powers of the coupling, depending on the order in perturbation theory, over some propagator scale, the square of the mass of the relevant physics, right? And if the particles are very heavy, or the couplings are very small, or both, you may end up in this kind of situation. But if the couplings are very small or the masses very heavy, it's very hard to detect these particles either way. So in general these candidates have a harder time producing any observable signal, unless you appeal to other, sub-leading physics; for example, the particle might be unstable on cosmological time scales, so you might see its decay products. But this is a mechanism that works; you can easily produce PeV-scale particles, millions of GeV, no problem, by this mechanism, OK? So let me spend a few minutes at least sketching a slightly more rigorous approach to the equation that I introduced heuristically before, the Boltzmann equation. In the absence of processes that change the distribution, you are very well aware of Liouville's theorem, the conservation of the density in phase space under Hamiltonian flow. And it's written like that; this is the standard expression you must have seen for the first time in some analytical mechanics course. In the absence of collisions, that's it, OK? You can also rewrite it using Hamilton's equations: ẋ is basically the momentum, once I multiply by the mass, and ṗ is nothing but the acceleration, so if I multiply by the mass, I have the force. And this is nothing but the conservation of a volume of phase space under Hamiltonian flow. However, if there are processes that can create particles, or give particles a kick and accelerate or decelerate them, this need not be true: you may lose the conservation of this quantity, and in general, to account for that, the right-hand side is not zero, but some collisional term.
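The classical statement being recalled is

$$\frac{df}{dt} = \frac{\partial f}{\partial t} + \dot{\vec x}\cdot\nabla_x f + \dot{\vec p}\cdot\nabla_p f = 0, \qquad \dot{\vec x} = \frac{\vec p}{m}, \quad \dot{\vec p} = \vec F,$$

i.e. the phase-space density is conserved along Hamiltonian trajectories; the collisional term discussed next replaces the zero on the right-hand side.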
Now, this collisional term means that at the microscopic level there are processes that can, for example, create particles, annihilate particles, and so on. And since these things are intrinsically quantum mechanical in nature, that's the reason why you have most likely seen such collisional terms in quantum mechanics courses, like Fermi's golden rule, which gives you an idea of the rate for some transition to happen, or in quantum field theory classes; but in principle you could define effectively a classical equivalent of this collisional term. Of course, this piece here is purely classical, in the sense of non-relativistic. You can write down exactly the same thing in a relativistic context. In general, you don't use the time but some affine parameter λ; when it's allowed, you can still define a time parameter, and you have conservation of your phase-space density along the evolution flow, along geodesics, if there is nothing else that alters the flux. The generalization of the previous expression is this one, OK? This is, again, a reasonable expression. If you want more details, a nice monograph on these topics is, for example, Bernstein, Kinetic Theory in the Expanding Universe. And then you can rewrite these terms properly. Those of you who have some notions of general relativity know that this is the generalization of a force, OK, and this is the momentum. The force is nothing but the gradient of a potential for conservative fields, OK, so a derivative of a potential. In general relativity, the role of the potential is played by the metric, and the first derivatives of the metric are the Christoffel symbols; so it's not surprising that you end up with this kind of expression, OK? For the Friedmann-Lemaître-Robertson-Walker metric, I'm even listing here the terms that matter in this expression. So, at the end of the day, the relativistic generalization of the Boltzmann equation for the Friedmann-Lemaître-Robertson-Walker universe is nothing but this one, where I replace the Liouville operator with that. And it makes sense, right? Before you had m times df/dt; now this is replaced by E times df/dt, because E is the relativistic generalization of the energy of your particle. And in general, for Friedmann-Lemaître-Robertson-Walker, you have to account for the expansion of the universe, so you get a sort of force term in the expression. You can even divide through by E. And so let's check that this kind of expression gives us back the heuristic equation I wrote for you, dn/dt + 3Hn = −⟨σv⟩(n² − n_eq²). I'm just sketching it, so that you have an idea that there is some more fundamental version of the equation I wrote down. How do we check it? I will just integrate both sides over momentum, because the equation I wrote for n is integrated over momentum: n is the integral of f over d³p, OK? And I will check that what I get is the equation I wrote at the beginning, OK? So, very quickly, and I invite you to redo the steps as an exercise (I'm quite detailed in the notes I'm giving you): you integrate over momentum. Now remember, I have also divided by E, because the Liouville operator in the relativistic context has an E in it, so I'm really integrating L/E, OK?
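Written out, the Liouville operator in the Friedmann-Lemaître-Robertson-Walker background takes the standard form

$$\hat L[f] = E\,\frac{\partial f}{\partial t} - H\,|\vec p\,|^2\,\frac{\partial f}{\partial E} = C[f],$$

where the term proportional to H is exactly the "force term" from the expansion mentioned above; dividing by E and integrating over momenta is the step carried out next.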
If you do this, you immediately realize that, for those of you who have had quantum field theory or some particle physics course, this is the relativistic invariant phase-space element, apart from a factor 2, and this is the expansion of the operator that I'm writing down. Why is this important? Because it means that what I'm doing is an invariant operation. I write down the expression that derives from writing this generalized equation in the Friedmann-Lemaître-Robertson-Walker metric; this is the operator I end up with. Then I integrate by parts: the second term goes by parts. Remember that the distribution of particles vanishes at the extremes, I have zero particles at infinite momentum, so you get rid of the boundary term. And what is this? This is nothing but the derivative with respect to time, which I can bring out, since there is no time dependence of my momentum volume element, of the integral that defines n. And from the other term I get three times H (you recognize H here) times n. So I've obtained exactly the left-hand side of the equation I wrote at the beginning. Now I should also get the right-hand side. What was written on the right-hand side? Well, maybe it's still here somewhere... no. On the right-hand side I had this −⟨σv⟩(n² − n_eq²), if you remember. But this was just formal; I didn't know how to compute it, right? If you have had any class in quantum field theory or particle physics, you know how the computation of a collisional term goes; if not, just forget about the details and focus on the factor n² that I'm going to derive. So this is the right-hand side of my Boltzmann equation. And this element, apart from a factor 2 that comes from avoiding double counting of your processes (it's more technical), you can rewrite. This is something that I assume you know; if you don't, don't worry. This collisional operator that you define for particle physics processes is nothing but the integral, over the momenta of all the particles but the one concerned, of the modulus squared of the matrix element, imposing momentum conservation, times the phase-space factors. Again, if you have never seen it, there is no way I can explain it in one minute. But the only difference with respect to the calculations you might have seen in particle physics is that there, in general, you have nothing to do with thermal distributions: in particle physics I am not integrating over a distribution of my particle, because my particle is not in a thermal plasma, basically. Here I do integrate, because by design I'm checking my naive equation, so I also have an integral over the phase-space element of the A particle. And the other funny thing that you may be familiar with is the fact that here you have strange factors, 1 + f₁ or 1 − f₁, 1 + f_A or 1 − f_A. These plus and minus factors refer to bosons and fermions: I'm accounting for quantum statistics. You might have heard of the corresponding phenomena: if you have fermions in your system, they are subject to Pauli blocking, which is accounted for by the minus sign; and if you have bosons, they are subject to Bose enhancement, think of Bose-Einstein condensation, OK?
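The momentum integration being described can be summarized in one line. Integrating the operator divided by E over momenta, and using that f vanishes at infinite momentum to drop the boundary term of the integration by parts, one gets

$$g\!\int\!\frac{d^3p}{(2\pi)^3}\left[\frac{\partial f}{\partial t} - \frac{H\,|\vec p\,|^2}{E}\,\frac{\partial f}{\partial E}\right] = \frac{dn}{dt} + 3Hn,$$

which is exactly the left-hand side of the heuristic equation.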
So take it very qualitatively if you are not familiar with it. The thing that should be understandable by everybody is that, if you are in equilibrium, I can use the classical approximation: I ignore these quantum-statistics factors, and that's okay, because I'm concerned with a non-thermal, sorry, a non-relativistic species, so the classical approximation is fine, in the absence of Bose-Einstein condensation and degeneracy. Well, basically this simplifies to f_A f_B − f₁ f₂, and the process I have in mind is A B going into 1 + 2, OK? Now, A is the species I'm concerned about, my dark matter particle; 1 and 2 are just fermions of my bath, electrons and positrons you may think of, OK? Now, these electrons and positrons are in equilibrium in my plasma, so I can write down the Boltzmann expression for them; their distribution is the equilibrium one, so I can replace f₁ f₂ by the product of Maxwell-Boltzmann distributions. And the Maxwell-Boltzmann product for f₁ and f₂ is formally also equal to the product of the equilibrium Boltzmann distributions for A and B. Why? Because of energy conservation: if you just write it down, this tells you that E₁ + E₂ must be equal to E_A + E_B, which must hold, so the products of the equilibrium distributions agree. And so I get this expression. Once I integrate this expression, well, you recognize that, ignoring all the rest, there is some integral over the phase space of f_A, and the same is true for f_B. So it's not surprising that you end up with n², because this is just a dummy variable: the expression is perfectly symmetric in A versus B, so this is n². And the same is true for the equilibrium distributions. So I now have an expression that behaves like the one I wrote, with the difference that, if I know some particle physics or quantum field theory, I have an expression that allows me to compute this ⟨σv⟩. What is this? This is the amplitude of my quantum mechanical process, modulus squared; if I know the Feynman rules corresponding to my physics, I can compute this ⟨σv⟩ exactly, in a more refined way than I did for neutrinos, where I only knew the interaction strength, OK? And you can take this as the definition of the thermally averaged annihilation cross-section. So, from this microscopic point of view, what we have gained is nothing but an operational way to compute the ⟨σv⟩. If you don't know what I'm talking about, you cannot compute the ⟨σv⟩, OK? There is no free lunch in physics, of course. Now, you may say: OK, apart from the fact that now, if I have my fundamental theory, my Feynman rules, my particle physics model, I can really compute the numbers entering my equation, is there any added value in this equation that I wrote down, the one that generalizes the Boltzmann equation with full momentum dependence and so on and so forth? And the answer is yes, if you are now interested in the momentum dependence of your relic particle. In order to compute the spectrum of your particle, you need to know its velocity distribution, and so you need the differential form of the Boltzmann equation, not the integrated form that is equivalent to the simple equation I wrote, OK? Now, we know that particles, to be good dark matter candidates, must be cold, so in general you can approximate this as a sort of delta function at p = 0.
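For reference, the definition being alluded to is usually written, in the Maxwell-Boltzmann approximation, as

$$\langle\sigma v\rangle = \frac{1}{n_{\rm eq}^2}\int \frac{g\,d^3p_A}{(2\pi)^3}\,\frac{g\,d^3p_B}{(2\pi)^3}\; f_A^{\rm eq}\,f_B^{\rm eq}\;\sigma v_{\rm rel},$$

with σ computed from the squared matrix element via the usual Feynman rules; this is the operational recipe referred to above.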
So you can forget about the velocity distribution. But there are some candidates, called in general warm dark matter candidates, because if they were hot they would fail (we are convinced about that, we saw it), but maybe they are a little bit warmer than the naive cold model where there is no velocity dispersion at all. Well, then you need this kind of tool, OK? And I won't go into the details, but let me at least name one popular example of this class of candidates: sterile neutrinos. This is not surprising. Again, if you know what I'm talking about, you will recognize things that are familiar to you; if you don't, just think physically. Neutrinos fail the test as dark matter candidates. Why do they fail? They are too light and they are too hot. If we make them a bit heavier and a bit less interacting, they would fit the dark matter picture, right? So what do you do? You introduce another species which interacts less with the standard model; its interaction is suppressed through a mixing angle, particle physicists would say, OK? And then you make it heavier. If you wonder where the mass comes from, it could come from a seesaw mechanism with right-handed states which are heavier, blah, blah, blah. Fine; at the end of the day, you effectively have two parameters, the coupling and the mass, and you can fit them so that the particle is sufficiently cold and has the right amount of mass to behave as good dark matter. This is about the simplest model you can think of to account for dark matter; it's a very minor extension of the standard model, and in its easiest incarnation it's known as the Dodelson-Widrow model. And these neutrinos are thermal relics. But for these thermal relics you really need to account for their spectrum in order to understand whether they are sufficiently cold, if you wish, to be good dark matter candidates. And as I report in the transparencies, if you want to do it, there is a simple way to compute their spectrum through the Boltzmann equation. OK, so the Boltzmann equation in its differential form allows you to do more than just compute the Y variable for cold dark matter: you can use it to compute the velocity distribution of a candidate with, in principle, more complicated dynamics. And such candidates might be good dark matter candidates. OK, in the case of the Dodelson-Widrow sterile neutrino, this is no longer viable: it used to be, but it's not anymore. So people have to go through extra complications; you have to assume maybe a lepton asymmetry and so on and so forth, to make them a little bit colder, if you wish. And I won't go into the details, but for the particle physics oriented among you, you find these details in the notes. The bottom line is that if you make them sufficiently cold, you may end up with a phase-space distribution for these massive neutrinos which is cold enough to satisfy all the properties I told you about, right? They should create structures of the right type to match large-scale structure and the CMB, but be sufficiently warm, with a sufficient velocity dispersion, that they might change something at very small scales. So you can compute, through these tools, the power spectrum that they should produce. And this is a numerical calculation coming from first principles, from particle physics principles.
For example, they have some mass of 3 keV, and some asymmetry, which is needed to make them colder and basically to get this peak, OK? The large-scale structure power spectrum, compared to the pure ΛCDM power spectrum, is exactly the same at very large scales, but at very small scales it is suppressed. Why is it suppressed? Just because these particles free-stream: they are hot enough that they are not as efficient as cold dark matter in clustering at very small scales. And this is a possible way to test some particle physics property of dark matter through cosmological tools. Of course, these are quite complicated numerical calculations if you want to do them precisely, but there is a simple way to estimate the typical scale at which this departure from ΛCDM happens for warm candidates. The first thing you do, once you have a potential candidate of this type, is compute the free-streaming length, OK? It's the typical scale over which the particles free-stream across inhomogeneities. And this is the recipe I'm sketching for you, OK? Unless you are really interested in that, this is not an essential point; I'm just making the point that for something a little bit beyond the standard WIMP dark matter candidate, you may need slightly more advanced tools, but these more advanced tools have their rewards, in the sense that they allow you to compute observables that are potentially accessible through surveys, for example. OK? Now, the price to pay is that you know dark matter should be cold enough, because ΛCDM works so well. So you must make sure that this free-streaming length (this is the comoving expression) is at sufficiently small scales not to perturb the phenomenology too much. And you see immediately that, for the case of neutrino-like candidates, this singles out several keV as the mass scale. By the way, you know that the mass cannot be below that. Why? I told you these are fermions: if the mass were much smaller than several keV, the Pauli exclusion principle, or the Tremaine-Gunn bound, would tell you that you cannot pack too many of them. So if they are too light, since you cannot pack too many of them, you do not have enough mass to make a good dark matter candidate, OK? So we go back to something I told you last time. Let me conclude with the final part, on the direct detection search strategy. Now, this is something very, very popular. Yes? If you have questions on the previous part, this is the right moment. Semi-relativistic? Semi-relativistic means that the velocity is, well, it depends. In the case of the Dodelson-Widrow neutrino, the answer is yes, it's semi-relativistic. In the case of the modification that you need to make these models viable, what you have in reality are two populations: one which decouples basically semi-relativistic, but is suppressed, and another which is produced by a different mechanism, a resonance, OK? Now maybe I can go back to this distribution. This is the phase-space distribution, times momentum squared, for a viable model of sterile neutrino, produced with a modification of the thermal but non-cold decoupling mechanism. So what is it made of? It's made of a non-resonant component, which is produced exactly like the stuff I was telling you about, just hotter; but it's sub-leading. And then you have a sort of peak, a peak with a tail. This peak is produced through a resonance.
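The free-streaming recipe sketched above is essentially the comoving distance covered at the typical particle velocity:

$$\lambda_{\rm FS} = \int_{t_{\rm prod}}^{t_0} \frac{\langle v(t)\rangle}{a(t)}\,dt,$$

and requiring λ_FS to be small compared to the scales where ΛCDM is well tested is what singles out the several-keV mass scale for neutrino-like candidates.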
And this resonance, for those of you who are familiar with the MSW effect for neutrinos in the Sun, or have heard about it, is something very similar, but the resonance condition is not dictated by the density of ordinary matter: it is dictated by the self-refraction potential of the neutrinos themselves. This is a lot of fun physics, a very nice quantum mechanical phenomenon, and it may find application in the early universe. So this has nothing to do with weak-scale cross-sections: it's just weakly interacting particles which have a mixing, again quantum mechanics, and it may even be a non-linear phenomenon. In more elaborate models, it is an asymmetry onto which they resonate: there is a difference between the density of neutrinos and anti-neutrinos in these models, just like there is a difference between the density of protons and anti-protons. And this difference might even be dynamical: in more elaborate versions of these models, this difference has exactly to do with the difference between baryons and anti-baryons. So the mechanism responsible for the creation of the asymmetry of protons with respect to anti-protons is related to the resonance condition through which you produce dark matter, which is quite cool, right? But it's a completely different dynamics. And what's the mass scale of this stuff? Around 10 keV. And what's the interaction cross-section? Orders of magnitude below the Fermi scale. And before, I told you about the freeze-in mechanism: there you may have a 10^-5 coupling, a coupling a thousand, even ten thousand times smaller than what you are used to in the Standard Model, masses of the order of a million times the mass of the proton, and you get another class of dark matter candidates, some very heavy relic that never reached equilibrium. Here you have something that was partially in equilibrium and partially not. You may have everything in between. Now, for stuff which has cross-sections not far from the weak scale, you may have the following strategy. I was showing you this diagram before, right? There is this WIMP magic diagram where you have your WIMPs here and, in the final state, Standard Model particles: this is my exotic particle, these might be my fermions, my quarks, for example. I can read it another way, where the dark matter particles scatter off a nucleus and get a recoil kick, and the same happens to the nucleus. And this strategy is known as direct detection: elastic scattering, in the simplest realization, of dark matter particles on a nucleus. Now, the first thing you have to realize is that if you do this experiment at the surface and look for stuff recoiling against apparently nothing, you will find lots of events. Why? You have cosmic rays, you have natural radioactivity, you have lots of things going on. So the only reasonable way to perform these experiments, looking for recoils, is to go underground, to be shielded from the many parasitic effects which have nothing to do with dark matter; that's why physicists use places like caves or tunnels for these experiments. And they sometimes also use things like Roman lead recovered from sunken ships, because it has been protected from activation by cosmogenic radiation, so it's very pure, and you can use it to shield your equipment without polluting your detector with radioactivity, okay? So this is a very interesting challenge in technology and detector design.
So you have to be careful enough that you do not introduce too much noise, but in principle you may measure nuclei recoiling against kicks of invisible stuff. That's the idea. So what kind of observables may you hope to measure? The rate of these kicks and the spectrum of the recoils, that is, the energy distribution of the nuclei that recoil. Then, in principle, you have other observables, for example the time dependence of these effects. And then, in principle, you may even be able to see the direction in which the nucleus is recoiling. And of course you have the choice of different targets. This is the program. Now, it's quite far from the main subject of the school, but let me explain the physics of these effects. The standard formula is the same formula as before, but rewritten in a clever way so that you do not recognize in it the right-hand side of the Boltzmann equation. The rate of interaction is what? It is nothing but the cross-section times the relative velocity, times the number density of my dark matter particles, but now I'm writing that density as the energy density divided by the mass, because that is the more convenient combination; this is the n I was calling n before. And then I have the number of particles in my detector: the rate per unit volume would again involve the number density of the target particles, but if I integrate over my whole detector, it's the total number of target nuclei. This is the rate I expect for events induced by dark matter. Of course, here the rho is not the rho of the early universe: it's the density of dark matter around us. And this is where things like the determination of the dark matter distribution in our galaxy are very important: if it's 0.4 GeV per cubic centimeter, or 0.3, or 0.6, you get factors of two or so here. Now, things are a little bit more complicated. Why? Because dark matter in our halo does not have a fixed velocity: it's not a monochromatic beam, it has a distribution of velocities. So in reality I should write this formula by integrating over all the velocities of the dark matter. And the second thing is that this is the total rate: if I want the differential rate with respect to the recoil energy of my target, I should write down the cross-section in differential form. These are the only two complications to this formula. And here they are: I'm rewriting the same formula in differential form with respect to the recoil energy of the nucleus, which depends of course on the energy dependence of my cross-section, and then I'm integrating over the velocity distribution of my dark matter. The distribution is normalized in such a way that the integral over velocity of f(v) is equal to 1; it is the probability of my particle having velocity v. In general, in the halo, I will have particles ranging in principle from zero velocity upward. But zero velocity gives me no recoil: if a particle with zero velocity hits another particle at rest, the energy transfer is zero, so you don't see it. So in practice there is a minimal velocity to which your experiment is sensitive. And then there is a v_max. The v_max is determined by the high-energy tail of my particles in the galactic halo, and this is not infinite. Why? Because particles of very, very high velocity are not retained: they are free to escape, right? So there is a maximum velocity, which is a complicated quantity in the sense that it depends globally on the shape of my halo and the amount of dark matter there is in it.
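Before moving to the kinematics, here is a back-of-the-envelope sanity check of the total-rate formula above, R = N_T (rho/m) sigma <v>, in Python. All the numbers (local density, velocity, target, and especially the effective per-nucleus cross-section) are illustrative assumptions.

```python
# Minimal sketch of the total direct-detection rate from the slide,
#   R = N_T * (rho_local / m_chi) * sigma * <v>,
# with illustrative numbers; in particular sigma here is an assumed
# effective per-nucleus cross-section, not a per-nucleon one.

AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.15e7

rho_local = 0.3          # local dark matter density, GeV/cm^3
m_chi = 100.0            # dark matter mass, GeV
sigma = 1e-39            # effective per-nucleus cross-section, cm^2 (assumed)
v_mean = 220e5           # typical halo velocity, cm/s (~220 km/s)

target_mass_kg = 100.0   # assume a 100 kg xenon target, A ~ 131
N_T = target_mass_kg * 1e3 / 131.0 * AVOGADRO  # number of target nuclei

n_chi = rho_local / m_chi                      # number density, cm^-3
rate = N_T * n_chi * sigma * v_mean            # events per second
print(f"expected rate ~ {rate * SECONDS_PER_YEAR:.2f} events/year")
```

With these inputs you land at roughly one event per year in a 100 kg target, which is exactly why these experiments need long exposures and extremely low backgrounds.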
So there is some astrophysical uncertainty there, but it is not dependent on the detector; it's something given by cosmology and astrophysics. Now, just to show you that these are "very complicated" calculations that can be done by freshmen: the problem we have to solve is to compute the recoil energy when a billiard ball with a given kinetic energy hits something at rest. It hits my nucleus, and this is the final configuration in the lab: my nucleus has a recoil, and of course the dark matter particle has another recoil velocity, which I don't see. These are the angles I've chosen. You impose conservation of energy, and you impose conservation of momentum in the two directions. I think most of you have done this exercise a few years ago; if not, it's a nice exercise. So you end up with an expression for the recoil energy of my nucleus, which is simply proportional to the incoming dark matter kinetic energy, times a factor which depends on the masses of the dark matter and of my nucleus, the so-called mass mismatch. You are familiar with the fact that the most efficient way to transfer energy in a collision is when the two objects have the same mass; if you wonder why you do not play billiards with balls of different sizes and masses, it's because it would become a mess to have an intuition of what's going on. And then there is some angular factor. Now, it's useful to rewrite this angular factor not in the frame of the lab; these conditions were imposed in the frame of the lab. Why so? Because I know the typical behavior of the cross-section in the center-of-mass frame: for example, non-relativistic cross-sections are basically isotropic there. So it's useful to rewrite this in the center-of-momentum frame. And again, this is just a Galilean transformation; again, a "very complicated" calculation. But it's very nice to do it from time to time: you remember some nice trigonometric relations like these ones, et cetera. So I did this last night; I redid it last night. Yes? What's the question? Why? For fun, yes. It's nice to get the signs wrong, because you always get the signs wrong, especially after midnight. And then you end up with this expression. It's exactly the same expression I wrote before, but instead of having the cosine of an angle written in the lab system, you now have an expression in the center-of-momentum system. So when I tell you that the scattering is isotropic, it means that d(sigma)/d(cos theta*) is basically constant: I know intuitively what the properties of the cross-section mean in this frame, but it's exactly the same physics. Now, this is the expression we were looking for: the recoil energy expressed in terms of the dynamical quantities we care about, E_R = (mu^2 v^2 / m_A) (1 - cos theta*), where mu is the reduced mass of the dark-matter-nucleus system, v is the velocity of my dark matter particle, m_A is the mass of the target, and theta* is the recoil angle in the center-of-momentum frame. Now, what's the maximum of this expression? The factor (1 - cos theta*) reaches its maximum value of 2 for complete backscattering, so I can write the dependence of the recoil energy on the angle simply as E_R = E_max (1 - cos theta*)/2, with E_max = 2 mu^2 v^2 / m_A. So simple. Now, the more complicated part is the cross-section. It clearly depends on the kind of particle physics that enters your game. But there are some simplifications. The first simplification is that you are dealing with non-relativistic particles.
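Before moving on to the cross-section, here is the same billiard-ball kinematics as a few lines of Python, so you can check the mass-mismatch statement numerically; the masses and velocity are illustrative choices.

```python
import numpy as np

# Elastic-recoil kinematics from the lecture:
#   E_R = (mu^2 v^2 / m_A) * (1 - cos(theta_cm)),  mu = m_chi*m_A/(m_chi+m_A)
# Natural units: masses in GeV, v in units of c, so E_R is in GeV
# (multiplying by 1e6 converts to keV). Numbers below are illustrative.

def recoil_energy_keV(m_chi, m_A, v, cos_theta_cm):
    mu = m_chi * m_A / (m_chi + m_A)          # reduced mass, GeV
    return (mu**2 * v**2 / m_A) * (1.0 - cos_theta_cm) * 1e6

v = 220e3 / 3e8                                # ~220 km/s in units of c
m_Xe = 131 * 0.931                             # xenon nucleus, ~122 GeV

# Maximum recoil (backscattering, cos(theta_cm) = -1):
print("E_R^max on Xe for m_chi = 100 GeV:",
      f"{recoil_energy_keV(100.0, m_Xe, v, -1.0):.1f} keV")

# Mass mismatch: energy transfer is most efficient when m_chi = m_A.
for m_chi in (10.0, m_Xe, 1000.0):
    frac = recoil_energy_keV(m_chi, m_Xe, v, -1.0) / (0.5 * m_chi * v**2 * 1e6)
    print(f"m_chi = {m_chi:6.1f} GeV: fraction of KE transferred = {frac:.2f}")
```

For equal masses the full kinetic energy can be transferred (the fraction is 1), and for a 100 GeV particle on xenon the maximum recoil comes out at a few tens of keV, which sets the energy scale these detectors must resolve.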
This is true by definition: we want that. Now, the other simplification is that the diagram you are interested in is this one: dark matter going in, hitting a particle in my nucleus, dark matter going out, and the recoiling nucleus. So it's a four-fermion operator, if dark matter is a fermion, or in general a four-particle operator. So you can really write down an effective theory, an effective field theory, just like the Fermi theory of weak interactions was. The only thing is that you now have some symmetry principles to fulfill: for example, whatever the ultimate theory is, you think it respects Lorentz invariance, and so on and so forth. So the good thing is that there is a finite number of invariant structures that can govern these kinds of interactions, and there is a finite number of kinematical quantities that matter, for example the spins of the two particles and the transferred momentum. So whatever your cross-section is, it must depend on these few important variables. And then, according to the spin of your particle (whether it's a scalar, a self-conjugate fermion like a Majorana particle, a Dirac particle, or higher spin, like a vector particle, and so on), you may select different types of interaction. Now, the ones that recur in many cases, and in particular are typical for neutralinos in SUSY, which is the most popular candidate, give rise to only two types of interaction: one which is spin-dependent and one which is spin-independent. Now, the spin-dependent one, what does it mean? It comes from the axial-vector interaction, for those of you who know what I'm talking about. Practically, what does it mean? It can only couple to the spin of your target. But if you remember the shell model of nuclei, at the end of the day the spin is determined by only the one or few nucleons that are outside the completed shells. So basically this couples to a single particle, so to speak. The spin-independent term, however, interacts with all the nucleons of your target. And since in quantum mechanics you sum amplitudes, you sum A times the single-nucleon amplitude; if the interaction is coherent, then, since in the cross-section you take the modulus squared of your amplitude, your cross-section is proportional to A squared, where A is the number of nucleons in your target. This means that, in principle, experiments are usually more sensitive to spin-independent cross-sections than to spin-dependent ones, just because nature gives us a factor A squared of enhancement. This is not completely true, because it might happen that the characteristic wavelength of your probe is such that the particle resolves part of the nucleus: there is a loss of coherence. And this is accounted for by tabulated form factors. So this is the way people usually write down the differential cross-section: this piece is given by what I computed before, this is the so-called cross-section in the point-like approximation, and this is a factor, less than 1, which accounts for the possibility that you lose coherence over your nucleus. Just one word on that, if this is unfamiliar to you. In general, when you write down an amplitude for the interaction with a nucleus, okay, you take out the factor A, the mass number of your nucleus, but you should integrate your amplitude over the distribution, the mass profile of your nucleus. And there, in the Born approximation, if you remember it from basic courses, you have a factor e^{iq.x}, where q is the momentum exchanged.
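One common choice for the "tabulated form factors" just mentioned, though not necessarily the one used in the lecture notes, is the Helm parametrization. Here is a sketch in Python; the parameter values (R_n ~ 1.14 A^(1/3) fm, s ~ 0.9 fm) are conventional choices and should be taken as assumptions on my part.

```python
import numpy as np

# Sketch of a standard coherence-loss form factor, the Helm parametrization:
#   F(q) = 3 j1(q R_n) / (q R_n) * exp(-(q s)^2 / 2),
# with j1 the spherical Bessel function, R_n an effective nuclear radius
# and s a skin thickness. Parameter values below are conventional guesses.

HBARC = 0.1973  # GeV * fm, converts q [GeV] * r [fm] into a pure number

def helm_form_factor_sq(E_R_keV, A):
    """|F(q)|^2 for recoil energy E_R (keV) on a nucleus of mass number A."""
    m_A = 0.931 * A                              # nuclear mass, GeV
    q = np.sqrt(2.0 * m_A * E_R_keV * 1e-6)      # momentum transfer, GeV
    R_n, s = 1.14 * A ** (1.0 / 3.0), 0.9        # fm
    x = q * R_n / HBARC
    j1 = (np.sin(x) - x * np.cos(x)) / x**2      # spherical Bessel j1(x)
    return (3.0 * j1 / x) ** 2 * np.exp(-((q * s / HBARC) ** 2))

# Coherence is lost faster on heavy nuclei (larger radius):
for E_R in (10.0, 50.0, 100.0):
    print(f"E_R = {E_R:5.1f} keV:  F^2(Xe) = {helm_form_factor_sq(E_R, 131):.3f}, "
          f"F^2(Ar) = {helm_form_factor_sq(E_R, 40):.3f}")
```

The numbers illustrate the point made above: at low recoil energy the factor is close to 1 (full coherence, the A^2 enhancement survives), while at higher momentum transfer, and especially for a heavy nucleus like xenon, the suppression becomes significant.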
Now, according to the size of this momentum exchange, you sample a smaller or larger fraction of the nucleus. In the limit where this exponential is basically 1, you integrate coherently over the whole nucleus, and you recover what I was telling you. If, however, the deviation from 1 is relevant, you get a factor which is less than 1, a coherence factor, and this is accounted for through this form factor. Why am I telling you this story? Because different nuclei are characterized by different profiles, so in principle they do not have the same sensitivity to the same kind of dark matter candidate, and in principle you also have uncertainties related to this nuclear physics when you perform these kinds of experiments. So, just to conclude, a couple of interesting notions. The bottom line of what I told you is that the differential rate of recoiling particles goes as the expression I've derived, where this integral, curly I, now depends only on an integral over the velocity distribution of my particles; in particular it depends on this v_min, which depends on the threshold of my detector, on the mass of my candidate, and on the mass of the target nucleus. And now we should know something about this velocity distribution, which I haven't specified yet. First, let me tell you that people assume it is a Maxwellian distribution; I will comment shortly on why this is more or less justified. But if you assume it's Maxwellian, meaning that the decreasing exponential is quadratic in the velocity, this integral gives you this expression. And now you have all the elements to understand every exclusion plot people will show you at any conference you attend. Why so? You have seen, perhaps, exclusion plots saying that the dark matter cross-section with nucleons is excluded above some curve, as a function of the mass. Why this shape? Well, if you are in the limit where the dark matter particle is very massive, say several hundreds or thousands of GeV, what happens? The integral in question becomes independent of the dark matter mass. Why? Because the reduced mass is always smaller than the smaller of the two masses, and it saturates when the heavier particle becomes very heavy: in this limit the reduced mass is nothing but the mass of my target nucleus. So the only dependence on the dark matter mass comes from this 1/m factor in the number density: the heavier the dark matter, the fewer particles I have for a given dark matter density. And so the excluded cross-section scales as m. In the low-mass regime, however, what matters is the dependence on v_min. And here you have v_min squared in the exponential, with v_min^2 = E_R m_A / (2 mu^2); but now the reduced mass is dominated by the dark matter mass, so you have something that goes as the exponential of minus 1 over the dark matter mass squared. The rate is exponentially suppressed, and the exclusion curve rises very steeply. So these experiments are not good at testing very light dark matter. And let me just conclude with a remark: why is this Maxwellian shape justified? You should be very skeptical, because dark matter is non-collisional, and the Maxwell-Boltzmann distribution is the characteristic distribution for stuff which is collisional. And indeed this was a puzzling result, because you see something similar to a Maxwellian in simulations. The closest one comes to an understanding of this behavior is due to Lynden-Bell and the violent relaxation paradigm.
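Coming back to the exclusion-curve shape discussed just above, here is a small Python sketch that computes the scaling sigma(m) ~ m / I(v_min(m)) for a Maxwellian halo, so the two regimes emerge numerically. The threshold, target, and halo velocity parameter are illustrative assumptions.

```python
import numpy as np

# Sketch of the shape of an exclusion curve under the Maxwellian assumption.
# For a fixed detector threshold E_thr, the excluded cross-section scales as
#   sigma(m) ~ m / I(v_min(m)),  with  v_min = sqrt(E_thr * m_A / (2 mu^2)),
# where I(v_min) is the mean inverse speed over the Maxwellian tail,
# computed numerically below. All parameter values are illustrative.

v0 = 220e3 / 3e8                  # halo velocity parameter, units of c
E_thr_keV = 5.0                   # detector threshold (assumed)
m_A = 131 * 0.931                 # xenon target, GeV

def mean_inverse_speed(v_min):
    """I = <1/v> over f(v) ~ v^2 exp(-v^2/v0^2), restricted to v > v_min."""
    v = np.linspace(1e-6, 5 * v0, 4000)     # grid truncates the far tail
    f = v**2 * np.exp(-(v / v0) ** 2)
    f /= np.trapz(f, v)                     # normalize to unit probability
    mask = v >= v_min
    return np.trapz(f[mask] / v[mask], v[mask])

for m_chi in (5.0, 10.0, 50.0, 100.0, 1000.0):
    mu = m_chi * m_A / (m_chi + m_A)
    v_min = np.sqrt(E_thr_keV * 1e-6 * m_A / (2.0 * mu**2))
    I = mean_inverse_speed(v_min)
    sigma_shape = m_chi / I if I > 0 else np.inf   # arbitrary normalization
    print(f"m_chi = {m_chi:7.1f} GeV: v_min/v0 = {v_min/v0:5.2f}, "
          f"excluded sigma (arb. units) ~ {sigma_shape:.2e}")
```

You can see the high-mass end of the curve grow linearly in m, while the low-mass end blows up as soon as v_min climbs into the exponential tail of the distribution, exactly the two limits derived above.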
Basically, it's not a true Maxwellian in the microscopic sense. It's just that, in a coarse-grained sense, if you average over small volumes in phase space, since objects like galaxies are formed by mergers, that is, by rapid variations of the potential, what happens is that their phase space gets mixed, in a coarse-grained way, such that they more or less maximize their entropy given the constraints of number density and energy. Of course, this is not exact, and you keep some memory of the assembly history. This is an example of the type of distribution that you get in numerical simulations, and this is also another source of uncertainty in these kinds of experiments. And yes? Sorry? Yes, this is for one species, and nothing else. I mean, you will find other ways to compute this: if you want to compute it analytically, there are some approximations, some symmetry conditions, that you may impose. The other thing I want to say, just to conclude: there are people looking for these things, there are experimentalists looking for these things. And the way they try to get rid of the other noise that still dominates their signal is to combine different techniques. They try to combine light yield and recoil spectra, or phonons and photons, and so on and so forth, in order to disentangle conventional stuff, like radioactive backgrounds, from true dark matter. But this is a very challenging business, because sometimes you think that, for example, a nuclide you have been using to build your detector is stable, and you end up discovering that it is a very long-lived radioactive species. And so what you are measuring, with a very expensive experiment, is the lifetime of a bismuth isotope that you thought was stable. So don't be surprised that there are false alarms in direct detection, because these people are working at the edge of what is known in terms of low-background techniques, and they have to face real problems. So let me wrap up. I just leave you the summary of what we have learned today, hopefully.