Thank you. Can you hear me? Yes. So first of all, I would like to congratulate the organizers, especially for the choice of the sequence of topics. Just this morning we heard about things that we know about and can see: large scale structure, the CMB, and so on. Then we moved to gravitational waves, which are things that at least we understand: although we haven't detected them directly yet, we are pretty sure of what we are talking about. And now my task is to talk about astrophysical signals of dark matter. And I must admit, I'm not completely sure that we know what we are talking about, nor that there should be any such signal. So hopefully you'll be patient and understand the structure of the three lectures that I eventually decided on. Basically, the plan is the following. Today, in lecture one, I will try to focus on some basic notions. These are notions of cosmology, of kinetic theory and thermodynamics that probably most of you are familiar with, but I will need some of them in what follows. So maybe this will be a bit boring, but hopefully short. Then I will insist a bit on the evidence that we have for dark matter, astrophysical and cosmological evidence. So ideally, in the first lecture today I'm talking about things we are pretty certain about, and then about some basic properties that we can infer from astrophysics and cosmology. More or less this will conclude today's lecture. Then I will move on to lecture two. The real issue, as you will soon be aware, is that in order to search for signals of dark matter, we should agree more or less on what we should be looking for. This is one main difference with respect to the gravitational wave case. So it's unavoidable that if you want to go beyond these basic properties, you need some framework. I will introduce the most common framework for dark matter, the thermal relic framework, or sometimes the WIMP concept, and I'll discuss some implications of that.
I will also discuss some generalizations, something beyond this framework: exceptions, let's call them, or alternatives, very shortly, just to mention that they exist. And I will also introduce some more advanced notions on the Boltzmann equation, I mean the full momentum-dependent Boltzmann equation, which for thermal WIMPs is mostly useful for computing cross sections, decay rates and so on, but which for some alternatives is crucial in order to infer the momentum distribution of your particle and related quantities. Then in lecture two I will also spend some time on direct detection; I still consider that an astrophysical signal of dark matter. Again, nothing very advanced. What I tried to do was to have a quick look at your backgrounds and research interests, and I think only a minority of you are really particle physics oriented, in model building for instance. So I decided to give you some notions on that, so that next time you see a plot, an exclusion plot or a search plot for direct detection, you know at least the physics behind it. Then lecture three I will really devote to indirect dark matter searches, here I should say mostly WIMP searches, although I will describe some exceptions. So I will talk about high energy probes of dark matter, for example gamma rays and neutrinos; I will describe cosmic rays like positrons, electrons or antiprotons; some notions on CMB searches; and so on. So that's the plan, and let's start with some notions. It will be hybrid, so I will try to make some points on the blackboard and use slides, especially since it's unclear how much you can read and retain. In any case I will make the slides available, so you don't have to worry if you miss some line.
There will be references to textbooks, to reviews, to research articles, and in the slides there will also be some exercises that I propose, so you can check whether the notions I've introduced are clear enough for you to have an operational, hands-on understanding of what I've been doing. So, about these notions of cosmology that we need: I apologize to those of you, which should be the majority, who are familiar with them, but I will need some notions on smooth cosmology in the early universe. Basically what you should be aware of is the so-called hot big bang picture, in the sense that if we track back what we know today, in the early universe we expect a situation where the inhomogeneities were much less important and the universe was hotter. So hot that in reality the early universe was a plasma, and could even be a very hot plasma. The basis for that is the classical cosmological evidence: the Hubble law, the CMB, primordial nucleosynthesis. If you are really unfamiliar with those topics even at a qualitative level, please just interrupt me, but I won't spend time on them; I will just need this concept. If you have no problem whatsoever with the fact that the universe early on was hot, and that basically everything was well described by a smooth fluid, a plasma or a combination of plasmas, then it's okay. The other idea I need from classical cosmology is the cosmological principle. Nowadays it is stated as a statement of isotropy and homogeneity, in a statistical sense, on very large scales. In the early universe this is in fact supposed to be much closer to the truth, both quantitatively and in terms of the scales involved. Then another notion that I will need is the fact that we can describe many properties of these plasmas, of this combination of particles in the early universe, in terms of a distribution function, just like the ones you have probably seen in kinetic theory or statistical physics.
This is in fact a very delicate concept, because we are dealing with a system, the universe, which is expanding. So by itself this is not a static system, and in fact it's not even a stationary system, and it's unclear to what extent you can use thermodynamics, or even classical kinetic theory, to describe it. In principle, in order to describe many physical phenomena in this system you need non-equilibrium tools, in statistical physics or even in quantum field theory. But for most applications, and basically all of the ones I'll treat today, it's enough to invoke local thermodynamic equilibrium. So let me just sketch what I mean, and in parallel you have the slides here. What I mean is that the microscopic processes, those changing momentum, changing energy, changing the number of particles, can have much faster rates than the expansion rate of the universe. So although you have different patches of the universe which are not in causal contact, if you start from close enough conditions you end up with the same kind of effective temperature shared in causally disconnected regions. Locally a sort of equilibrium is established, and you know that the distribution function, the occupation number in phase space if you wish, is described in general in quantum statistics by a distribution which, as I said, should in general depend on time, position and momentum: f = 1/(e^((E-mu)/T) ± 1), with the plus sign for Fermi-Dirac and the minus sign for Bose-Einstein.
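As a minimal numerical illustration of these two distributions (my own sketch, not something from the lecture slides; natural units, zero chemical potential assumed), one can tabulate the mean occupation number per mode and see both the ordering of the statistics and the fact that Fermi-Dirac, Bose-Einstein and the Maxwell-Boltzmann limit all converge once E is well above T:

```python
import math

def occupation(E_over_T, statistics):
    """Mean occupation number per mode at zero chemical potential.
    statistics: +1 for Fermi-Dirac, -1 for Bose-Einstein, 0 for the
    Maxwell-Boltzmann limit e^(-E/T)."""
    x = E_over_T
    if statistics == 0:
        return math.exp(-x)
    return 1.0 / (math.exp(x) + statistics)

for x in (0.1, 1.0, 10.0):
    fd = occupation(x, +1)
    mb = occupation(x, 0)
    be = occupation(x, -1)
    # at fixed E/T one always has FD < MB < BE; at E/T >> 1 they coincide
    print(f"E/T={x:5.1f}  FD={fd:.4e}  MB={mb:.4e}  BE={be:.4e}")
```

At E/T = 10 the three numbers agree to better than a part in ten thousand, which is why the Maxwell-Boltzmann limit mentioned later in the lecture is so useful.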
And although in this broader sense you can think of these two parameters, the chemical potential mu and the temperature T, as effective parameters which just enforce the maximum entropy configuration consistent with the energy and the number of particles present in your system, in fact, because of the cosmological principle, the properties of our system cannot depend on position, because of homogeneity, and cannot depend on direction, because of isotropy. So f is in fact only a function of time and energy, or, if you wish, of the modulus of the momentum. This is the first concept I will need in the following. There is a more rigorous justification of this, for example in Mukhanov's textbook. Once you accept that, you can deal with this distribution function. For those of you who are not familiar with it at all, what I mean is that if you integrate this function over a given volume and a given range in momentum, you get the number of particles of that species having momentum in that range and occupying that volume.
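To make this integration concrete, here is a small sketch (my own illustration, assuming natural units and zero chemical potential) that performs the momentum integral for a massless species, n = g/(2 pi^2) T^3 times the dimensionless integral of x^2/(e^x ± 1), and recovers the standard result quoted later in the lecture: the fermionic number density is 3/4 of the bosonic one, and the bosonic integral is 2 zeta(3):

```python
import math

def n_over_T3(statistics, g=1, xmax=60.0, steps=200_000):
    """Number density over T^3 for a massless species at mu = 0:
    n/T^3 = g/(2 pi^2) * Integral_0^inf x^2 / (e^x + s) dx,
    with s = +1 for Fermi-Dirac and s = -1 for Bose-Einstein."""
    h = xmax / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h          # midpoint rule, avoids the x = 0 endpoint
        total += x * x / (math.exp(x) + statistics) * h
    return g * total / (2 * math.pi ** 2)

n_boson = n_over_T3(-1)    # e.g. one photon polarization state
n_fermion = n_over_T3(+1)
print(n_boson, n_fermion, n_fermion / n_boson)
# the dimensionless integrals are 2*zeta(3) and (3/2)*zeta(3); the ratio is 3/4
```

Multiplying the bosonic result by g = 2 for the two photon polarizations and by the cube of today's CMB temperature gives the familiar relic photon number density.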
So once you are familiar with that, you can use basic kinetic equilibrium notions. For example, particles that can freely exchange energy with each other will share this parameter, the temperature of your system. Particles which are mutually related by particle-number-changing interactions obey some conservation rule for the chemical potential: for example, if you have a reaction A + B -> C + D which is at equilibrium, which in our context means fast enough with respect to the expansion of the universe, this implies that these parameters satisfy a relation mu_A + mu_B = mu_C + mu_D. Similarly, particles which are their own antiparticles, like photons, are characterized by vanishing mu, and particles and antiparticles have opposite chemical potentials. Take for example e+ and e-: you know that at some point they are in equilibrium with photons through pair production and annihilation, and since the photon has vanishing chemical potential, it means that they must have opposite chemical potentials, and so on. These notions are probably familiar from basic kinetic theory; they can be applied at a local level in the early universe, and because of the hypothesis of the cosmological principle you can extend them globally, although there was not necessarily causal contact among these patches.
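The phrase "fast enough with respect to the expansion of the universe" can be turned into a rough numerical criterion, Gamma of order H, which the lecture makes quantitative shortly. As a hedged order-of-magnitude sketch (the numbers are my own choices: the standard Fermi constant, the Planck mass, and an assumed g_* of 10.75 for that epoch), one can estimate the temperature at which weak interactions decouple:

```python
# Order-of-magnitude estimate of weak-interaction decoupling:
#   Gamma ~ G_F^2 T^5       (weak rate for relativistic species)
#   H     ~ 1.66 sqrt(g_*) T^2 / M_Pl   (radiation domination)
# Setting Gamma = H gives T_dec ~ (1.66 sqrt(g_*) / (G_F^2 M_Pl))^(1/3).
import math

G_F = 1.166e-5     # Fermi constant, GeV^-2
M_Pl = 1.22e19     # Planck mass, GeV
g_star = 10.75     # photons + e+e- + 3 neutrino species (assumed)

T_dec = (1.66 * math.sqrt(g_star) / (G_F ** 2 * M_Pl)) ** (1.0 / 3.0)
print(f"T_dec ~ {T_dec * 1e3:.1f} MeV")   # prints T_dec ~ 1.5 MeV
```

The exercise proposed in the lecture for recombination and deuterium formation follows exactly the same pattern, just with a different rate on the left-hand side.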
Another important notion is that for some calculations in cosmology you don't need to know the whole distribution function; it's sufficient to have some partial information. Again, this is something you are probably familiar with from gas dynamics or fluid dynamics: it's sufficient to deal with some moments of your complete distribution function, and you can define them in a relativistic framework. For instance, you can define the generalization of the particle number density and current, N^mu, as follows: in general you have g, the number of internal degrees of freedom of your particle, times the integral over phase space of your distribution function times the first power of the momentum, N^mu = g ∫ d³p/(2π)³ (p^mu/p⁰) f, where p^mu is the four-momentum. Because of the cosmological principle, in the frame of the CMB only the zero component of this four-vector is non-vanishing: the i components give N^i = 0, and the zero component just gives N⁰ = g ∫ d³p/(2π)³ f (p⁰/p⁰) = g ∫ d³p/(2π)³ f, which is the conventional number density n of your particles. Then there are identities you are probably familiar with from GR, for example the Bianchi identity, which loosely speaking, for those of you who are not familiar with it, is a sort of conservation of energy for the stress-energy tensor; well, sorry, that concerns the second moment and we are still at the first, so I shouldn't be speaking about it yet. There is a similar conservation law for N^mu, and later on I will describe the Bianchi identity. This conservation law translates, in a Friedmann-Robertson-Walker universe, into d(n a³)/dt = 0, which means that n scales like a^(-3), where a is the scale factor. Does any of this surprise you or look exotic? The same can be done for the second moment of the distribution, so I can erase here. You are familiar, I hope, with Einstein's equations, or at least you have seen them, and you know that conventionally on the right-hand side you have the stress-energy tensor. So how is it related to this strange beast I've introduced? Well, it's nothing but its second moment: T^mu nu = g ∫ d³p/(2π)³ f p^mu p^nu/p⁰, with the degrees of freedom assumed to be equally populated for simplicity. So for Einstein's equations it's enough to have just the second moment of this distribution in order to describe the gravitational dynamics. For instance, the 00 component will give you p⁰ p⁰/p⁰, so T⁰⁰ = g ∫ d³p/(2π)³ f p⁰, and since p⁰ is the energy, this is nothing else but the energy density rho of your system. The 0i components vanish because of the cosmological principle, and then you have the ii components, which cannot depend on i being 1, 2 or 3, so all three are equal; this can be written as a scalar quantity times the Kronecker symbol, T^ij = P delta^ij, where P is called the pressure, and it is given by P = g ∫ d³p/(2π)³ (|p|²/3) f/p⁰. Maybe, since you are not necessarily seeing what I'm showing, I rewrite them here. Here it is; is it clear? And then just a trick, because maybe this looks too abstract. You may wonder: I am dealing with some cosmological situation where there are some processes that interest me, could be for example weak physics, or electromagnetic processes, or what else, and am I
close to or far away from local thermodynamic equilibrium, where I can apply this machinery as it is? Well, the simplest trick to answer this question is to estimate the relevant interaction rate and compare it with the expansion rate of the universe. In general, the evolving dynamical scale for our universe is the Hubble rate H at that epoch, which depends on time, or redshift, or temperature; these are equivalent ways to describe this dynamical quantity. And you can compare it with the particle physics rate which is of interest to you. The rate of a process can be estimated as Gamma = n sigma v: the number density of the target particles, times the cross section for the phenomenon, times the relative velocity of the two. Say, the rate of recombination of an electron with a proton to give hydrogen can be estimated as the number density of the target protons times the recombination cross section times the relative velocity. Most of the physics of interest in the early universe, at least from the astroparticle point of view, is related to the condition Gamma = H. The typical situation is that the dependence on time, or temperature, or redshift of the left-hand side and the right-hand side is different. Most frequently, in the early universe Gamma is very high and the Hubble rate is low, so basically you are in a quasi-static situation and you can deal with it with standard statistical physics and thermodynamical tools. However, one evolves faster than the other, so at some point Gamma decreases below the level of H: you start with Gamma much larger than H, you end up with Gamma much smaller than H, and in between there is a point where Gamma is of the order of H. The final situation is basically a decoupled situation: your particles do not interact anymore, so you don't really care about the microphysics; they evolve independently. The matching point is called freeze-out. So whatever kind of reaction was going on, it could have been annihilation of particles, or some equilibrium between different species, it is not valid anymore, and this describes some departure from equilibrium, which in general is something interesting. Just to make this discussion a little bit more specific, let me mention a couple of examples. One example you are certainly familiar with is the formation of the CMB at recombination: you can roughly estimate the time, or the temperature, at which this happens by equating the rate for the process e + p -> H + gamma with the Hubble rate, and you end up with a typical temperature of the order of the electron volt. Or, in the earlier universe, you may estimate the temperature or the time at which the reaction of proton-neutron fusion into deuterium, p + n -> D + gamma, decouples from equilibrium, that is, the point at which this equality is roughly satisfied, and you end up with an estimated temperature of 0.1 MeV. The same can be done for other things, for example the proton-neutron interconversion via neutrinos due to the weak interaction. By the way, I leave both these cases as an exercise; it is a very useful exercise, so try to do it in terms of orders of magnitude first. There is something you realize if you go beyond orders of magnitude and carefully plug in the cross sections and all the numbers: the temperature of relevance is in both cases well below the binding energy of your species. The binding energy of deuterium is roughly 2.2 MeV, and the relevant temperature is roughly 0.1 MeV; for hydrogen the binding energy, the ionization potential, is one order of magnitude or so above the roughly electron-volt temperature you find. You will immediately realize that this is quite common, and it's related in fact to the high entropy of the universe. This I leave as an exercise; if you have problems you can come to me and we can go through it together. So, sorry for this rather technical introduction; I think I can skip a bit of the technicalities on the entropy. A couple of things I wanted to say: the analogue of this conservation condition for the second moment is nothing but the Bianchi identity, and this is in fact the second Friedmann equation, which is, if you wish, the conservation of energy. Once you plug in a specific expression for the energy density and the pressure in terms of the temperature, this gives you a time-temperature relation; this is why before I was using the time, the redshift and the temperature as equivalent variables for my system. Is this familiar to you? Have you seen the Friedmann equations before? I guess so. Another thing that you can easily work out is the explicit expression, for example, for the number density or the energy density of particles that is reported here, by plugging in the Bose-Einstein or the Fermi-Dirac distribution, or even its Maxwell-Boltzmann limiting behavior. You immediately realize that at equilibrium I can use that expression, and both in the relativistic regime and in the non-relativistic regime I will have simple analytical expressions for these basic thermodynamic quantities. For example, in the relativistic regime the number density is nothing but the temperature cubed
times a numerical factor. You immediately see that; let me just show it explicitly. From the expression here, this can be written as n = g ∫ dp 4π p²/(2π)³ × 1/(e^(E/T) ± 1), where I assume zero chemical potential just for simplicity. For a relativistic species E is basically p, so you have a factor p³; I wrote it that way because I know the distribution is isotropic. At the end, substituting x = p/T, I will have something which I can write as T³ times a numerical integral, and the numbers are just the result of this numerical integral. I am using natural units throughout; if you have problems we can reinstate all the c's and h-bars, but it would probably take the rest of the lecture. This numerical factor is g zeta(3)/π², with weight one for a Bose-Einstein distribution, like the one of photons, and with an extra factor of three-quarters for fermionic degrees of freedom. The same you can do for the energy density; of course in the case of the energy density you have an extra factor of E, so in order to make the integral dimensionless you have to multiply and divide by another factor of T, and that's what gives you the T⁴, times another number that you can compute. For those of you unfamiliar with it, zeta is the Riemann zeta function, and the specific value zeta(3) is roughly 1.2; it's not important, it's just a number. Now, what do notions like the conservation of the number of particles mean in terms of these thermodynamic quantities, the energy density or the number density expressed in terms of temperature? Again, for consistency you know what you should expect: in a system which does nothing but evolve with the expansion of the universe the number of particles is conserved, so the number density should scale like the inverse of the volume, and you do find that. In fact, preserving this property is equivalent to saying that the temperature cubed times the scale factor cubed, that is, times the comoving volume, must be a constant, because in a comoving volume of the universe, if nothing happens, the number of particles must be preserved. This is equivalent to saying that the temperature of a relativistic, decoupled species should scale as 1/a, where a, the scale factor of the universe, is directly related to the redshift. In fact it means that, for example, the CMB temperature can be used interchangeably with the scale factor of the universe, or, if you wish, can be used as a time variable, a clock variable, for the particle physics processes at different epochs. These are quite trivial notions, but if you have never seen them before it might be quite unusual to deal with temperature as a time; yet in terms of particle physics it's more practical to deal with temperatures, which describe the typical energies available in your system, than with time. And that's it. The same thing that I wrote for the relativistic case you can write for a non-relativistic species. In the non-relativistic limit, what happens is that in practice you can forget about the ±1, because the exponential is the more relevant part, and you recover the Boltzmann distribution. As a result of the integration, the number density of your particle will assume a slightly more complicated form, but you can still write it down analytically. The energy density of a species is nothing but the mass of that species times the number density, which makes sense: if you are in the non-relativistic limit, most of the energy of your particle is in its rest mass, not in its kinetic energy. And of course the pressure of a non-relativistic species is very small; remember the perfect gas law: the pressure is nothing but n times the temperature, and the kinetic energy of your system is completely negligible with respect to the rest mass. So far so good. You can generalize to
other notions that you have seen in basic physics, for example the entropy. You can write down an entropy density and an associated current. I won't prove this, but if you have seen Boltzmann's expression for the entropy you will recognize this factor, and the only thing you have to keep in mind is that this entropy density scales as s = (rho + p)/T. This will be used in practice; you can check the relation by plugging the distribution function that I just wrote down into the expression for the entropy and working out the integral. But the only notion that we will probably use in the rest of this lecture is that, since the entropy density goes as (rho + p)/T, and since the entropy is dominated by the relativistic species present, it's not surprising that the entropy density goes as T³. Why? Because rho and p in the relativistic limit both go as T⁴; p is one-third of rho for the relativistic gas equation of state, so (rho + p)/T is nothing but (4/3) rho/T, and rho goes as T⁴, so this is T³. And there is just a numerical prefactor in front, the effective number of degrees of freedom entering the entropy expression, which for a superposition of gases, in principle each one at a different temperature (for example you may think of neutrinos and photons), is nothing but a sum of terms cubic in the ratio of the temperature of each gas to a reference temperature, which we take to be the CMB one, with a statistics factor of 7/8 for fermions. That's not important; the important things are that the entropy density is something cubic in temperature, and that at equilibrium this prefactor can be computed in terms of the species populating your plasma. Nothing more: these are just factors that you can check to be correct, nothing very fancy. Until now I never mentioned dark matter, but these notions will be useful, a bit today and in particular tomorrow, to do some real calculations. The final thing that I want to mention is that this function, g-effective, can be generalized also to the energy density of the universe, the total energy density. It's not surprising that the effective number of degrees of freedom entering the energy density is now weighted through the fourth power of the temperature, just because the energy density of a relativistic species goes as the fourth power of its temperature. Just to give you an idea of the numerical value: well, it depends on the temperature of the universe, because it depends on the number of degrees of freedom of your plasma. At very low temperatures it is just around 11 or so, but if you go back in the history of the universe you have more and more degrees of freedom; you may even go through all the mess of the hadronic phase, with lots of mesons and hadrons and so on, so it can reach much higher values. Another thing is that the ratio of the effective degrees of freedom in the entropy and in the energy density is almost constant. It's not exactly constant because, again, when particles go out of equilibrium, for example when all these mesons and hadrons annihilate once the temperature becomes too low, they are weighted slightly differently in the entropy density and in the energy density, and so the ratio departs a little bit from unity. Now, apart from the case where you work in this field and need this notion to do detailed calculations, you will never need this kind of precision; but be aware that if you want to perform percent-level accuracy calculations you need to take these phenomena into account. By the way, this is not even the correct formulation, because all these approximations assume that only
relativistic stuff matters; in reality you have some weight of non-relativistic components in the plasma, and for some high-precision calculations, like the primordial nucleosynthesis one, you have to take that into account. But this was just so that everybody is at the same level in terms of basic notions. If there are questions, otherwise I may move on. Yes, please. Yes, it is, and the reason is that the number of species in the universe decreases with temperature. In a certain sense, from a particle physics point of view, the early universe was a much more interesting place than the current universe: for example, you had electrons and positrons in equilibrium with photons, with neutrinos and all the rest in the early universe, but now there are no more electrons and positrons. The reason is that once the temperature drops sufficiently below the mass scale of the electron, what happens (I can erase this one, perhaps) is that reactions like e+ e- -> gamma gamma are way more favored than reactions gamma gamma -> e+ e-. Why? Because the typical energy of these photons is well below the mass of the electron, so they don't have enough energy to produce the pair. And so, as a result, this stuff goes into photons, so to speak. So the number of species changes; of course there are some conservation laws that you can account for, and the whole trick is just this bookkeeping of species that have disappeared from the plasma. But the number of species in the plasma does decrease; of course this depends on the properties of the world we live in. Any other questions? Okay, sure. I got confused the first time I saw these things, so I hope you have seen them before; don't be afraid to ask. Maybe I can do an example of what this g-effective is. Assume I want to write the entropy density right now. There are some prefactors, the powers of pi and so on, that I can write down, but the real bulk of the information is g-effective times the CMB temperature cubed. What is g-effective? I told you that only the relativistic species matter, and the only relativistic species right now are the photons and, maybe, the neutrinos, or part of them; let's assume that neutrinos are massless for a second. So g-effective right now would be written as: 2, the degrees of freedom of the photon, times (T_photon/T_photon)³, which is one; plus the number of degrees of freedom in a neutrino, which is two, neutrino and anti-neutrino, with no additional component (no right-handed neutrino thermalized, at least), times the number of neutrino species, three, times the statistical factor 7/8, times the cube of the ratio of the neutrino temperature to the photon temperature. That ratio cubed turns out to be 4/11, for reasons that you can calculate with the tools I've provided, it's just entropy conservation, and the total is roughly 3.91. So immediately, by this simple bookkeeping, you can estimate the entropy density in the current universe. It's not exact, because we didn't take every correction into account, but in reality the entropy of the universe is dominated by the relativistic species. So, I think it's time for dark matter to enter the scene. Again, g-effective is just a way to write down very simply thermodynamic quantities, like the overall energy density in the species of the universe at a given epoch, nothing more than that; if you are willing to write down explicitly the integrals corresponding to those definitions, there is no need to introduce it. It's just a shorthand notation; you shouldn't think of it as anything fundamental, it's just to simplify the calculations. So, I promised that I would describe some evidence for dark matter coming from astrophysics
And I cannot avoid starting with the traditional evidence, coming from one of my heroes, Zwicky. Zwicky was a very productive physicist and astrophysicist, and he came up with lots of very smart ideas. Among other things, he discovered dark matter, and he basically discovered the supernova business; he made the association of supernovae with the sources of cosmic rays, in fact. And he was not very well appreciated by the community. For the rest of the talk, what I really care about are two of his big discoveries. One is that astronomers are "spherical bastards", which means that no matter how you look at them, they are just bastards. The slightly more interesting one for the rest of the lecture is that he inferred a mismatch in the mass of the Coma cluster, a cluster of galaxies, with respect to the mass he could infer from purely photometric arguments. He just counted the number of galaxies he saw in this cluster; he knew how many stars are in a galaxy (I am simplifying a bit, but that is the logic), so from the luminosity of these objects he could estimate how many baryons, as we would now say, should be there. And then he did something very smart, quite pioneering for the epoch: he used a theorem of mechanics, the virial theorem, applying it to a cosmological, or rather astrophysical, object like a cluster of galaxies, to get an independent estimator for the mass of this object. And he found a mismatch.

So this is the argument. I probably do not need to repeat the virial theorem for you, but it goes as follows. If you have a system of bound objects, a so-called virialized system, subject to conservative forces, and the conservative force we are talking about is the gravitational one, you have a relation that says that twice the total kinetic energy of your system plus the potential energy is equal to zero: 2K + U = 0. (In general there is a prefactor that depends on the nature of the conservative force involved, but for us this form is fine.) So you can immediately estimate the kinetic energy of your system. Let us assume it is made of N objects of the same mass m; of course you can generalize. Twice the kinetic energy is nothing but 2K = N m <v^2>, with <v^2> the average velocity squared. And the potential energy: roughly (not exactly, but roughly, in the limit where N is large) the number of pairs is N^2 / 2, so |U| is about (N^2 / 2) G_N m^2 / <r>, with <r> the average distance. Setting 2K + U = 0, the factors of m simplify, and at the end you get that the mass of the cluster, which is roughly M = N m, is

M = N m = 2 <r> <v^2> / G_N,

where <r> is the typical intergalactic distance. Now, what Zwicky could estimate was the typical spread of the velocities, <v^2>, from spectroscopic arguments; and from angular considerations plus some distance estimator he could estimate <r>. So basically he could get an estimate of the total mass of this virialized object. And he found, maybe with some mistakes, something like a factor-400 difference between this method and the photometric method of just counting the number of luminous objects and summing up. So this was the first hint that there was something wrong, or something missing, in the accounting of the matter budget. Of course he had made some mistakes in the estimate, some he could have avoided, some others just inherent to the knowledge of his time, but nowadays you still study clusters to get evidence for dark matter, and in particular you have much more advanced tools: you can use X-rays, you can use lensing, for instance.
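Plugging modern Coma-like round numbers into the virial estimate gives the right ballpark. The dispersion and separation below are illustrative assumptions, not fitted values:

```python
# Virial mass estimate, M ~ 2 <r> <v^2> / G.
G = 6.674e-11            # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
MPC = 3.086e22           # m

sigma_los = 1.0e6        # line-of-sight dispersion ~1000 km/s [m/s]
v2_mean = 3 * sigma_los**2   # isotropy: <v^2> = 3 sigma_los^2
r_mean = 1.0 * MPC       # typical intergalactic separation ~1 Mpc

M_virial = 2 * r_mean * v2_mean / G
print(M_virial / M_SUN)  # ~1e15 solar masses, the right ballpark
```

Around 10^15 solar masses: far more than the luminous component adds up to, which is the mismatch in question.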
Why can you use X-rays? Galaxy clusters are very deep potential wells; being virialized objects, the kinetic energies associated with these systems can be quite high, typically high enough for the electrons to radiate by bremsstrahlung, in the X-ray band. So you have X-ray images of these objects, and if you assume, for instance, hydrostatic equilibrium (what I am writing there is just Newton's law for a spherical, continuum system), you can use the brightness, the number of X-ray photons coming to you, as a proxy for the density, and the temperature profile of your cluster as a proxy for the pressure. So you can basically solve this equation for the mass as a function of distance. And again, nowadays you get roughly a factor of, say, seven more mass than in the luminous gas whose mass you infer. In a certain sense, what these probes are telling you is that the potential well is too deep compared to the visible stuff that you see: another instance of this missing-mass problem. Note that up to now there is no new physics, nothing exotic involved.

Of course there are even more spectacular proofs through lensing. From these nice images you can try to reconstruct the distortion field, and you can try to fit it, for example, to a model where you just have concentrations of mass corresponding to the galaxies, these peaks here, and you see a mismatch: you really need some continuum stuff, this kind of Gaussian that you see, in order to account for the pattern of deflection. So this is another way to probe dark matter.

Perhaps you have even more spectacular signals, which are what I call segregation (no racial meaning here). You have clusters of galaxies that just go through each other: there is a collision, and most of the baryonic mass in these clusters is in fact in the gas, not in the stars or the galaxies. Since you have very fast-moving gas fronts, what happens is that this gas gets shocked and slows down, while the rest of the cluster goes through, because its cross-section is very low. Now, you can map through lensing where most of the mass, the distorting, gravitating mass, is, and you can map through X-rays where most of the hot, shocked gas is. If gravity in these objects were due to the stuff we know about, the gas, the lensing map should fall right on top of the gas map; if the gravity is due to something that does not care at all about the shocked gas, then the pattern should be different. The blue spots here are the ones reconstructed through lensing (I think this is the train-wreck cluster and this is the bullet cluster), and the pinkish ones are the shocked material. So you see this segregation of the two components: the gas, the baryons, remains in the shocked part at the center, while the bulk of what gravitates goes through.

[Question from the audience.] Yes, actually this depends on the modeling of the system; it is a very interesting question, because you should have an idea of several things, like the velocity of these systems and also the geometry. But in general these are rare objects: I do not know how many of them have been discovered, a handful. If you try to estimate how many such collisions you should have seen in the history of the universe, you probably get a number like 10 or so. So this is not the most common beast in the universe; it is quite rare. In fact, I think at the beginning there were people who said these objects were proof that Lambda-CDM was in trouble. So I would not say these are a typical beast in a cold-dark-matter-plus-Lambda universe, but pictorially they are quite impressive.

[Another question.] Say it again? Well, the hypothesis is that it started as two separate systems, because we see isolated clusters elsewhere and we see that the two centroids agree. But you just see the distribution of, for example, the X-ray photons, and they simply do not come from the region where the lensing potential comes from. I think locally it is in equilibrium; if the question is whether it has the same temperature all across, the answer is no, I do not think it is fully thermalized. In fact you can isolate different subsystems which have different properties, and the potential is not the same here and here, but for the qualitative point of the argument you do not care.

Finally (well, not finally, I will have a more interesting argument in favor of dark matter in a moment), you have flat rotation curves. The basic property is well known: write down Newton's force for a circular orbit, equating the gravitational force to the centripetal acceleration times the mass. What you observe is that the rotational velocity is roughly constant with radius. But if you have a mass distribution strongly peaked toward the center, the inner part of a galaxy, you should expect the velocity squared to go like 1/r, just as in a Keplerian system. Instead you get this constant behavior; this is one of the plots of rotational velocity versus distance. And if you ask what kind of mass distribution gives you this law, you get something that goes like 1/r^2 at sufficiently large distances. Now, this was historically very important, thanks to the work of people like Vera Rubin and others: after these observations, people started becoming convinced that dark matter is for real, for reasons that I still have a hard time understanding. Honestly, I still think this is one of the weakest arguments for convincing yourself that what you are seeing is dark matter of non-baryonic form and whatnot. But in fact it is very important for phenomenology.
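The Keplerian-versus-flat behavior is easy to see in a toy model. The masses and radii below are illustrative choices, not a fit to any real galaxy:

```python
import math

# Rotation speed v(r) = sqrt(G M(<r) / r) for two toy mass models:
# a central point mass (Keplerian, v ~ r^-1/2) and an isothermal
# halo with rho ~ 1/r^2, for which M(<r) ~ r and v is flat.
G = 6.674e-11
M_SUN = 1.989e30
KPC = 3.086e19           # m

M_point = 1e11 * M_SUN   # central mass
# normalize the halo so that M(<10 kpc) equals M_point:
# rho = rho0 (r0/r)^2  =>  M(<r) = 4 pi rho0 r0^2 r
rho0_r0sq = M_point / (4 * math.pi * 10 * KPC)

def v_kepler(r):
    return math.sqrt(G * M_point / r)

def v_isothermal(r):
    M_enclosed = 4 * math.pi * rho0_r0sq * r   # grows linearly with r
    return math.sqrt(G * M_enclosed / r)       # hence independent of r

for r_kpc in (10, 20, 40):
    r = r_kpc * KPC
    print(r_kpc, v_kepler(r) / 1e3, v_isothermal(r) / 1e3)  # km/s
```

The Keplerian speed falls off with distance while the isothermal-halo speed stays flat, which is exactly the observed behavior at large radii.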
For example, trying to determine this extra component in our own galaxy, the local amount of this stuff and so on, is very important for both direct-detection and indirect-detection strategies. But at the end this is the result of a complicated process where you have to fit for the contributions of the gas, the stars, the bulge and so on, so this number is still affected by quite some uncertainty.

I think much more convincing is a point that was raised even this morning, namely that the growth of structures would not be possible, and even the pattern that you see in the CMB would not be possible, without dark matter. You will get more advanced notions on structure formation in other lectures, so I will just summarize the key physical argument. If you look at this picture (and this is a misleading picture, because in reality what you are seeing are anisotropies at the level of 10^-5 or so on top of a very uniform sky, a uniform-brightness sky), the level of these fluctuations at recombination was really at the level of 10^-5. Now you can do a calculation, trying to estimate what happens by today via gravitational instability if you just had baryons. Baryons, we know, interact with electrons and with photons, so at the time of recombination they should have shared the same level of fluctuations. If you just let them evolve, is there enough time to form the structures we see around us? The answer is no, unless, of course, general relativity is deeply flawed. So the easiest explanation, and I think quite a beautiful one, is that there is something decoupled from photons that has somehow grown deeper potential wells, and as soon as the baryons get freed from their coupling to photons, they can fall into these deeper potential wells and evolve from there.

In slightly more technical terms, but still only pictorially, you can study, for example, the evolution in time of a mode. You write the density contrast, delta = rho / rho_bar - 1, the density divided by the average density minus one, as a Fourier transform (or a Fourier series, in the case of discrete modes). At the linear level these modes evolve independently, each mode on its own. What I am showing here, in arbitrary units, is the evolution versus time, or, if you wish, versus 1/(1+z). Forget what happens very early on. Once the mode enters into causal contact, what happens (I am showing the baryons in green, the photons in red, and the cold dark matter in blue) is that cold dark matter starts growing first: logarithmically, in fact, in the radiation era (this is sometimes known as the Meszaros effect), and then linearly. The baryons, instead, are coupled with the photons in this mode, and the oscillation is nothing but the manifestation that you have sound in your plasma. But as soon as the photons and the baryons decouple, at the recombination epoch, the density contrast of dark matter is way larger than the contrast the baryons would have had if there were no dark matter around, and so the baryons immediately start tracking dark matter. Without this, the fact that we have structures at the level we have today does not seem compatible with a picture where only baryons are present.

In fact, you can compute what the pattern of the power spectrum, which is the variance of these modes, should have been, and this is the result, this red curve; the black points are what has been observed by surveys. You see that there is a mismatch, both in terms of amplitude and, clearly, in terms of shape. So to me this is perhaps the most convincing argument that there is dark matter around. You might say: okay, but maybe gravity is wrong, maybe I should modify gravity dramatically. Correct, but you still have to deal with these bumps, and you do not see bumps like these in the data. Some people have been arguing that maybe there is some issue with the observations and so on; the truth is that you do see bumps. These are the baryon acoustic oscillations: we are capable of seeing the subleading bumps due to the baryonic fraction, and I doubt we would have missed bumps a factor of several bigger just because we did something wrong in the sampling strategies, or the window function, and whatnot. So in reality, even if you modify gravity (and there have been proposals to modify gravity, like TeVeS and so on), it is much easier, so to speak, to boost the amplitude than to change the shape. This is not surprising physically: it is relatively easy to have additional effects on top of the gravity we know, but to undo the electromagnetism of the baryons is very hard; you would need something that cancels the electromagnetic interaction of baryons with photons. So to me this is very, very convincing.

I also propose, and I leave it in the slides, a little exercise, if you have not done it before, to play with basically Newtonian physics: the Euler equation, the continuity equation and the Poisson equation, and to convince yourself that what I have just told you qualitatively can be done more quantitatively. Consider a smooth background solution of these equations, linearize the equations for small perturbations about this smooth solution, do this in Fourier space, and then generalize to multiple fluids. Ask yourself which kinds of perturbations grow, and when; what happens in the radiation-dominated epoch and in the matter-dominated epoch. You will find a certain critical quantity. Most likely you have seen this before, and you will see it again, but if you have not done it yet, please go through it: it is a very instructive exercise, and you will find it done in any cosmology textbook or set of notes. It convinces you that what I have told you can be done in a much more quantitative way.
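For reference, a sketch of where the exercise leads, in the notation delta = rho / rho_bar - 1, with a the scale factor and c_s the sound speed (check the signs and factors against your favorite textbook):

```latex
% Linearized Newtonian perturbations about a smooth expanding background,
% for a single fluid with sound speed c_s (the exercise sketched above).
% Combining continuity, Euler and Poisson in Fourier space gives
\begin{equation}
  \ddot{\delta}_k + 2H\,\dot{\delta}_k
  = \left(4\pi G\,\bar{\rho} - \frac{c_s^2 k^2}{a^2}\right)\delta_k .
\end{equation}
% The right-hand side changes sign at the Jeans wavenumber
\begin{equation}
  k_J = \frac{a}{c_s}\sqrt{4\pi G\,\bar{\rho}} \, .
\end{equation}
% Modes with k < k_J grow (gravity wins); modes with k > k_J oscillate
% as sound waves (pressure wins). This is exactly the qualitative
% behavior of the baryon-photon fluid versus cold dark matter above.
```

The "critical quantity" mentioned above is this Jeans scale, and the multi-fluid generalization is what produces the curves shown in the figure.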
Then let me mention that the same problem we have with large-scale structures today, you also have in the CMB anisotropy pattern itself. If you do not plug in cold dark matter, you are in trouble fitting the shape of the peaks. This is relatively new, in the sense that a decade ago, given the quality of the data, you could still try to fit with alternative models; this is one example I take from a paper by Skordis et al. But in reality, already a few years ago this game was completely hopeless. Just to give you an example: these are data coming, I think, from WMAP 7; the black line would be the updated version of these MONDian, modified-gravity fits, and the dotted line is Lambda-CDM with cold dark matter. Clearly you are in trouble, especially in fitting this third peak. On top of that, you are not really free to alter the baryon abundance too much in alternative models of CMB pattern formation, because you have another, independent measurement of the amount of baryons around, which comes from primordial nucleosynthesis, and they match very well. So in reality you have a consistent picture in Lambda-CDM cosmology, and you do not, as far as I know, in any other model.

Why is this cosmological evidence important for particle physics? It is based on linear solutions, or even exact solutions for the smooth component, so you do not have to deal with many astrophysical uncertainties. It suggests that you need some additional species that gravitates normally, rather than known species gravitating abnormally. And then, because of arguments like this match between BBN and the CMB, it suggests that you need dark matter in some form which is unknown: it does not couple to photons the way baryons do. So it is hard to conceive that it can be hidden in, say, collapsed objects like Jupiters, or whatever you can think of, golf balls or other debris in space and whatnot. Why? Because at that epoch there were no virialized objects, no collapsed objects, and we have probes of this linear regime that tell us the mismatch is already there.

There is one option left in this direction, which is black holes. A population of black holes of primordial origin could in principle explain dark matter; actually, current bounds put this idea under strong tension, so you really have to arrange for it in a peculiar way, and in general you have to play a lot with your inflationary model to get a distribution of black holes that evades current bounds. And if you have to play with multiple fields in inflation and so on, at the end you need new physics anyway; you are just trading one piece of evidence for new physics for another. So there is no obvious way to get primordial black holes of the right properties, and by the way, this is a testable statement: probably ten years from now we will know whether it is black holes or not.

So the only possible Standard Model candidate in terms of basic properties (stable on cosmological timescales, not electromagnetically charged, and so on) is the neutrino. As David Weinberg told us this morning, in the '80s massive Standard Model neutrinos were a serious candidate for dark matter. But they do not work. Before saying why, let me just pause one second to say that the fact that nothing we know about works is, in a sense, what gets many particle physicists excited: as far as we know, when we study data from the Tevatron, the LHC, LEP or precision experiments, everything works very well. This is one of the few cases where it seems that we need some ingredient that we have no clue about. That is why it is one of the signals of physics beyond the Standard Model of particle physics plus gravity, and you should not be surprised that many people work on it.

So, one word on why neutrinos do not work as dark matter. Of course, I told you that dark matter must be massive, otherwise it would not behave, in a sense, like baryons without charge. And in fact we know that neutrinos are massive: we have measured transitions of neutrinos from one flavor into another, neutrino oscillations, which require neutrinos to have masses; at least two states must be massive. So this item on the scorecard is there. However, quantitatively, the level of mass they can have is not enough to match the amount of extra mass we need to account for dark matter (this is a calculation we will do in lecture 2; let me just say that the mismatch is a factor of several, way too big to be accounted for by statistical errors). There is also a slightly deeper reason why neutrinos do not work: they are not the right kind of dark matter to explain the pattern of structures that we have. This was mentioned this morning; perhaps you will see more on that in the lectures on perturbation theory for large-scale structures, and if not, I will show you a movie at the end to convince you that the two things really do not look the same.

If I had to condense into one important number what we have learned from studying these probes, it is the same number that David Weinberg showed you this morning in his list of fundamental questions to address: to explain why you have this amount of cold dark matter. This is nothing but the energy density of this additional species in units of the critical energy density of the universe, Omega_cdm h^2: the ratio of the energy density of dark matter to the critical energy density, times the reduced Hubble constant squared. You would like to explain this number. From a theorist's point of view, it is better to rewrite it in terms of something we can compute from first principles. The energy density of a non-relativistic particle is nothing but the mass of the particle times its number density. And the number density I can rewrite in terms of the entropy density: they scale the same way (remember, both go like T^3), so the ratio Y = n/s is just a convenient dimensionless variable. I could have used n itself; it is just historical that people tend to work with the number density of particles over the entropy density. Once you plug in numbers (s, remember, is just a numerical factor times this g-factor that I can compute, times the CMB temperature cubed), the only unknowns in my Omega_x are the mass of the particle and this dimensionless abundance Y normalized to the total entropy density: Omega_x h^2 = m_x Y s_0 / (rho_crit / h^2). So I must find a combination of mass and abundance such that this number equals what is observed. This is what we will do in lecture 2 for a few cases. It is not the only goal of dark matter theories, but it is one goal: dark matter should at least account for the dark matter observed, which looks fair.

Now, a few properties that you can deduce from observations, and by observations here I really mean cosmological and astrophysical observations, not involving a priori anything but gravity. A few years ago I had to teach some basic notions of dark matter physics, and I realized that the audience was unaware of these points. I tried to track back the issue, because to me this was like lecture one in dark matter physics, and the reason is that they are so trivial now that nobody talks about them anymore. But I think it is useful that you see them at least once; I will go through them quickly.

The first property is that dark matter is dark, which looks unsurprising perhaps, and that it is dissipationless, which means that it cannot cool, for example, by emission of photons or anything analogous. Why am I insisting on this? Because in the literature you do find dark photons, you do find collisional dark matter, and so on; so be careful: any such model should always fulfill some basic requirements.
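To make the Omega_x h^2 bookkeeping from a moment ago concrete, here is a minimal numerical sketch; s_0 and rho_crit / h^2 are standard values, and the 100 GeV mass is just an illustrative choice:

```python
# Omega_x h^2 = m_x * Y * s0 / (rho_crit / h^2), with Y = n/s.
S0 = 2891.0              # entropy density today [cm^-3]
RHO_CRIT_H2 = 1.054e-5   # critical density / h^2 [GeV cm^-3]

def omega_h2(mass_GeV, Y):
    """Omega h^2 of a species with mass m and yield Y = n/s."""
    return mass_GeV * Y * S0 / RHO_CRIT_H2

# which yield matches the observed Omega_cdm h^2 ~ 0.12 for 100 GeV?
Y_needed = 0.12 * RHO_CRIT_H2 / (100.0 * S0)
print(Y_needed)                    # ~4e-12
print(omega_h2(100.0, Y_needed))   # 0.12 by construction
```

The point of the exercise: any candidate is characterized by the pair (m_x, Y), and matching the observed abundance fixes one combination of the two.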
Dark matter should be dark, and it should not cool, at least not too fast, otherwise you are in trouble with observations. For example, dark matter forms something consistent with triaxial halos; it does not form discs. Baryons have the possibility to cool, and they form discs; if dark matter had dark photons of the same type as the baryonic counterpart we know about, the same would happen for dark matter, but this is inconsistent with the shapes of the gravitational potentials we know about. So even if dark photons exist, they must be quite different from the baryonic counterpart: dark matter is not just a copy of baryons.

It is also collisionless, at least collisionless compared with its baryonic counterpart. There are a number of arguments entering into this, not all with the same degree of robustness, but for instance, from objects like the bullet cluster you can set a limit on the collisionality of dark matter: if it were very collisional, the two clusters would not pass through each other; they would just stop, like the gas does. Roughly speaking, these bounds (some depend on the assumed velocity dependence of the cross-section, but just to give you a ballpark number) are of the order of a barn per GeV, which by particle-physics standards is quite high. However, if you compare with typical cross-sections in atomic or molecular physics, which are three or four orders of magnitude bigger than that, it is quite small. Again: dark matter is not a copy of the baryonic world; it should be quite far away from it. There is a whole industry now deriving bounds on dark matter collisionality and its cross-section, and there are sometimes claims that maybe some effect looks like collisional dark matter, and so on. Indeed, each one of the properties I am listing here (darkness, collisionless nature, etc.) might at some point become a detection of a property.
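As an aside, the "barn per GeV" ballpark translates into the unit you usually see in the literature on self-interacting dark matter, cm^2/g, by a pure unit conversion:

```python
# The 'barn per GeV' collisionality ballpark in cm^2 per gram.
BARN_CM2 = 1e-24     # 1 barn in cm^2
GEV_G = 1.783e-24    # 1 GeV/c^2 in grams

sigma_over_m = BARN_CM2 / GEV_G
print(sigma_over_m)  # ~0.56 cm^2/g
```

So a barn per GeV is roughly half a cm^2 per gram, which is indeed the scale at which current astrophysical bounds sit.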
However, the message to keep in mind is that even if dark matter turns out to be somewhat collisional, its properties are far enough from those of the baryonic counterpart that it requires some qualitatively different explanation.

Once I realized this, I felt much more relaxed. Because, I do not know about you, but when I was a child I loved dinosaurs (everybody loves dinosaurs), and I could not bear the thought that dark matter had anything to do with killing the dinosaurs. So do not worry: dark matter did not kill the dinosaurs. When you see such papers appearing, read the fine print, please. Maybe there is a fraction of the dark matter made of something that looks like baryons, and when I say a fraction, maybe that means 1% of it, something like that; and maybe that fraction forms discs, even thinner than baryonic discs, and maybe that triggers catastrophic events over geological timescales. If you are happy to call something that is one per mille, 10^-4, or even 10^-2 of the dark matter "dark matter", fine. But in a certain sense we have already discovered dark matter of that sort: neutrinos. Neutrinos do contribute to dark matter, and they contribute at that level. So saying that dark matter is responsible for killing the dinosaurs is more or less like saying that neutrinos are responsible for large-scale structures.

Another property is that dark matter is smoothly distributed. What I mean is that it is not made of discrete big chunks of stuff, and you can put limits on the size, or the mass, of such objects from a number of astrophysical observables. Again, these limits do not look very impressive from a particle-physics point of view (the mass of such chunks should not exceed, say, tens of solar masses, which is quite a big "particle"), but they are non-trivial on astrophysical scales. In fact, there have been searches for microlensing effects: enhancements, transients, of the luminosity of an observed star due to the fact that between you and the star there is some lens. As the star passes through your field of view (or, equivalently, the lens does), you may get a magnification as a function of time, and this magnification has a peculiar shape, the so-called Paczynski shape. So you can look for these events; it has been done, towards the Magellanic Clouds, and events have in fact been found. From the frequency of these events you can try to estimate the fraction of dark matter in massive compact objects of, say, stellar or planetary size. And basically the result is that you exclude everything: for example, the EROS survey has excluded everything above this curve, which is a curve of the fraction of dark matter that can be in objects of a given mass. Just to set the scale, the axis is in solar masses: this is roughly a solar mass, this is a few solar masses, and the bound extends down to about 10^26 grams, which I think is like 10^-6 or 10^-7 solar masses. So objects in this range cannot constitute the bulk of dark matter: anything between roughly Earth mass and ten solar masses is excluded as being the dark matter.

It is also true that dark matter is classical on galactic scales. What do I mean? It is not a quantum-delocalized phenomenon, at least on scales up to the kiloparsec, which through basic quantum mechanics also imposes a bound. [There is a question.] Sure: how do we know this? Because we make the hypothesis that we do not want to modify Einstein gravity, and then you have the equivalence principle; of course, you could have models where you both have dark matter and modify gravity. So, just from quantum mechanics and the fact that we know dark matter is localized on kiloparsec scales, you get a bound.
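A minimal order-of-magnitude version of that argument: the de Broglie wavelength hbar / (m v) must fit within about a kiloparsec for galactic velocities of about 100 km/s. Both numbers are assumed ballparks, so only the order of magnitude matters:

```python
# Classicality bound: require lambda_deBroglie = hbar/(m v) < ~1 kpc.
HBAR = 1.0546e-34    # J s
KPC = 3.086e19       # m
EV = 1.602e-19       # J
C = 2.998e8          # m/s

v = 1.0e5            # ~100 km/s
lam_max = 1.0 * KPC

m_min_kg = HBAR / (lam_max * v)
m_min_eV = m_min_kg * C**2 / EV
print(m_min_eV)      # ~2e-23 eV, i.e. the 1e-22 eV ballpark up to O(1) factors
```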
And it is a remarkable bound: the particles constituting dark matter must be heavier than about 10^-22 eV. So dark matter must be lighter than, say, 10^-7 solar masses and heavier than 10^-22 eV; that is just how clever theorists are at narrowing down the search. For fermions, however, the bound is much stronger, and the reason is the Pauli exclusion principle. Look at the Fermi-Dirac distribution: you see that there is a +1 in the denominator, and basically you can prove that this is equivalent to saying that you cannot pack as many fermions as you want into the same state; you cannot have the equivalent of Bose-Einstein condensation, which means that in phase space you have a bound. You can also prove that once you coarse-grain your phase-space volume, the distribution function in the coarse-grained sense, averaged over bigger cells, still has to fulfill this bound. So even if you do not know how the phase-space density of dark matter evolves in time after its production, you know that it cannot exceed the primordial value set by the Fermi-Dirac equilibrium distribution, which you can compute safely. This is known as the Tremaine-Gunn bound, and its modern version gives a lower limit of around 0.4 keV for the mass of fermionic dark matter.

And, as I told you, dark matter is not hot (even this morning there was some historical remark about that). It means that it cannot have a relativistic velocity distribution: it must be sufficiently cold. In fact, this is the most profound reason why neutrinos are not good dark matter candidates: they decoupled when they were hot, so they have a kinetic energy big enough to make them fly through the potential wells of, say, the baryons. They do not sink into sufficiently small virialized objects, at least not early enough.
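To put a rough number on "hot": relic neutrinos keep a thermal momentum of about 3.15 T_nu(z), so once non-relativistic their typical speed is about 3.15 T_nu (1+z) / m_nu. This is a back-of-the-envelope sketch; the dwarf-galaxy escape speed quoted in the comments is an assumed ballpark:

```python
# Typical relic-neutrino speed versus redshift (non-relativistic regime).
C_KM_S = 2.998e5
T_NU_0 = (4 / 11) ** (1 / 3) * 2.348e-4    # neutrino temperature today [eV]

def v_thermal_km_s(m_nu_eV, z=0.0):
    """Typical neutrino speed at redshift z, for m_nu >> T_nu(z)."""
    return 3.15 * T_NU_0 * (1 + z) / m_nu_eV * C_KM_S

print(v_thermal_km_s(1.0))          # ~160 km/s even today for m ~ 1 eV
print(v_thermal_km_s(1.0, z=100))   # ~1.6e4 km/s while structures assemble
```

Compare with the escape speed of a dwarf-galaxy potential well, a few tens of km/s: such particles simply fly through. (At high redshift the formula formally exceeds the regime of validity, which only strengthens the point that the particles were then relativistic.)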
make them very very bad in in generating the pattern of structures that we see so maybe I can show you one movie this movie shows the evolution of structures in a cold dark matter universe and in a universe which is hot dark matter and you see clearly that this is much richer in small scale structure than this one this is like a smoothed version of that and we do see small scale structure so neutrinos cannot make dark matter a good dark matter candidate because of this property but we know more any particle that has a velocity distribution which is semi relativistic is not a good dark matter candidate because it would produce something very very similar okay so it's not the right if you wish it's not the right power spectrum for instance and these are quite massive neutrino by the way at the edge of being excluded by direct by direct searches so I will just summarize there is a number of observation that all conspired to tell us that there is some the need for some additional degree of freedom that seems to gravitate normally but maybe there are exotics even in the gravity sector but must have suppressed coupling suppressed coupling with respect to electromagnetic ones to strong interactions and it seems that this requires some new physics beyond the standard model that we know about and this explains why people are so excited about dark matter in the particle physics community unfortunately gravity is universal so gravity tells us some of its properties but cannot tell us the whole story in sense if you want to identify the particle physics behind it you need to go through some interaction that discriminates among different classes of particles okay so why do we call neutrinos neutrinos we call them neutrinos as opposed to the charge leptons because they do not undergo electric charge interaction so we need something similar for dark matter however what I told you today is more or less as far as you can go without building a real theory of it if you want to search 
for something you don't know about you need a framework within which to search to begin to search okay so all the rest somehow depends on a theoretical bias and in lecture two I will define some theoretical context in which this search can be performed I will start explaining you the kind of search that you can do direct searches and later on indirect searches however please decouple the evidence for dark matter the need for dark matter for the specific validation of a theoretical framework if Suze is not found at LHC it doesn't tell us much about dark matter it doesn't invalidate the need for dark matter if you do not find extra dimension the same we still have to solve this problem we hope we will have to solve it with some input from colliders but maybe we only have we will only have access to partial information okay so I think I can stop here and maybe there are questions I can answer so my question was your you assert that dark matter is smooth but when we are looking at the Milky Way halo we see this dwarf galaxies so maybe they are they have the host of sub halo sub structures and so it seems that the dark matter is clumpy where are you oh sorry I did the mistake no it's wait it's a matter of scale way I need the pointer right it's a matter of scale I just used an order of magnitude argument which is kiloparsec so roughly you cannot have a delocalization on scales much larger than the kiloparsec otherwise for example you wouldn't have and you couldn't attribute really a dark matter content with single dwarf because it would be you know a delocalized objects over say 10 kiloparsec size this is what I'm imposing here of course you might be even stronger than that it might be that if you have evidence for a profile in a dwarf maybe this kiloparsec maybe point one kiloparsec but it won't push much your mass so I think the argument I illustrated is perfectly consistent with what you what we observe in works actually it's even bit loser if you understand what 
You are saying that perhaps we need to push even one order of magnitude below that, in the sense that we need structure below the dwarf size?

I mean that we have these dark matter clumps, but they are not compact; that's why we don't see this microlensing effect.

Wait, wait, I didn't say that you only have a smooth distribution of dark matter. I just said that it cannot be, in a certain quantum mechanical sense, delocalized over scales of that size. It can still clump on scales of 10 parsec; there is no problem with that. I'm just saying this gives you a bound. I don't know if that's clear. Other questions?

A very naive question: we have an infinite range of frequencies to observe, and we have only covered some of them, so how can you be so sure that we are not just missing some particular band of frequencies where the photons coming from dark matter would be?

That's why we try to open new windows in astrophysics, right? I mean, the answer is that we are looking at the best of our capabilities. It's not only about photons: people have been building neutrino telescopes, they are trying to build gravitational wave observatories, and the reason is just what you mentioned. The fact that we think we are familiar with some sky, or some wavelengths, or some messengers, doesn't mean that we understand the universe in all wavelengths or through all messengers. So the answer is: we don't know, and the only remedy is to go there and build something.

I have a question: could you say a little bit more about why primordial black holes have been excluded as a candidate?

It's a bit more complicated, so I'll try to draw a graph; I should have included it, maybe. My claim is that they have not really been excluded, but they are strongly disfavored unless you engineer a model for which they are not. The usual plot is something like this: here you have the mass of your black hole, and here you have some fraction of the possible contribution
of this mass to the total dark matter content. When this fraction is one, a Dirac delta of this sort would mean that black holes of that mass make 100% of the dark matter. Now, there is a number of observations. At very large masses, I think the killing argument is spectral distortions of the CMB; at very small masses, the killing argument is the extragalactic gamma-ray background. The reason is that if the black hole is too small, it will evaporate because of Hawking radiation over a lifetime shorter than the lifetime of the universe, so you should see a background of gamma rays (you can compute the typical energy), and that constrains this fraction here. In between there are, I don't remember, maybe 15 or 20 decades, and I think the level here should be maybe 100 solar masses, or maybe I'm wrong, a thousand or so. Anyway, there is a number of observations in there. One piece here is the microlensing surveys, and then you have other constraints. This region is a bit difficult, but let's say it's probed indirectly through astrophysics, generically through catalysis: if a black hole sinks into a white dwarf or a neutron star, it will trigger a fast evolution of that object into a black hole. Not all of these bounds are based on that, but most of them are based on this kind of idea, and so the fact that you have observed, in environments with a given density of dark matter, a certain number of objects, say neutron stars and so on, puts an upper limit on the amount of dark matter that can reside in this kind of black hole. Now, at the end of the day, what happens in a typical mechanism for primordial production of black holes is that you do not produce a spike; you produce them over a range of scales.
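The evaporation argument can be made quantitative with a short estimate (a sketch using the standard leading-order lifetime formula, ignoring greybody factors and the number of emitted species, so the numbers are indicative only):

```python
import math

# Hawking evaporation lifetime of a black hole of mass M, at leading order:
#   t ~ 5120 * pi * G^2 * M^3 / (hbar * c^4)

G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
c    = 2.998e8     # speed of light, m/s

def evap_time_s(M_kg):
    """Evaporation lifetime in seconds for mass M in kg."""
    return 5120 * math.pi * G**2 * M_kg**3 / (hbar * c**4)

# Mass whose lifetime equals roughly one age of the universe (~4.4e17 s):
t_universe = 4.4e17
M_crit_kg = (t_universe * hbar * c**4 / (5120 * math.pi * G**2)) ** (1 / 3)
print(f"critical mass ~ {M_crit_kg * 1e3:.1e} g")
# of order 1e14 g; refined treatments counting the emitted
# particle species quote the familiar ~5e14 g
```

Black holes lighter than this have evaporated by now, and those evaporating today radiate into the extragalactic gamma-ray background; that is the origin of the small-mass constraint in the plot.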
You produce them over some range of scales, and even if you did not, the fact that they accrete, for example, will broaden this distribution. So in practice, what happens is that you can live with something which is maybe at the level of 0.1, but spread over a very large range, meaning maybe five or six orders of magnitude in mass; or you may need something very spiky, which you must make sure does not touch either this bound or that one through evolutionary phenomena. I think there are a few models out there, one appeared very recently by Juan García-Bellido for example, but what I mean is that in general, whether you write down something like a spike or something very spread out, you tend to hit some of these bounds, so you need to fine-tune your theory to make something that works. The good news is that the most promising window, as far as I know (I'm not an expert at all on this topic), is this one, let's say between the FIRAS spectral distortion bound and the microlensing searches, say in the hundreds to hundred-thousand solar mass range. And the good news is that if you build a future experiment which can measure spectral distortions of the CMB one to two orders of magnitude better, which should be within reach at least for PIXIE in the United States, you should be able to test this scenario. This is what I mean when I say they are almost ruled out: pre-existing models have been ruled out, but a posteriori models, built after all the searches, can still be constructed if you play a bit. There is one question there.

I just want to ask if there is observational proof that, since dark matter is cold, it doesn't form discs.

It's not because of hotness; it's because of the fact that it cannot really cool down. What we think we understand about the shape of these potentials is that they are not pressure supported; they are supported by the
dispersion, the velocity dispersion, of your dark matter system. Now, if you had any way to cool down, like photon emission, what would happen is that you would change the shape of your potential, because the dark matter distribution would be geometrically different. One obvious way to see it: imagine that you had a galactic disc like this, and then you had a huge disc-like potential of dark matter like that. You would have strong tidal effects there and there; in fact, you would not even form the galaxy the way you see it. So yes, you do have observational evidence that the dark matter distribution over galaxies cannot be too far from triaxial, or almost spherical. If dark matter could dissipate through emission of photons, it would basically form discs, because that is the configuration where you cannot get rid of your angular momentum but can still sink into the most stable orbit compatible with it. So this is a generic phenomenon: if you have a cooling process, yes, you do form discs, and that is not consistent with observations.

Actually, there are some claims that there is more dark matter in dwarf galaxies...

More dark matter in dwarf galaxies, that's what you're saying? No, no, wait, let's agree on terms: the ratio of mass to luminosity is higher in dwarf galaxies than in ordinary galaxies, yes.

But doesn't this also go in that direction, since dwarf galaxies aren't so much disc-like?

I'm not sure. The way I understand it, this has more to do with the history of these objects. In fact, it is a common trend that the mass-to-luminosity ratio is not the same for objects of different type, of different mass, but this is not related to discs; it is associated more with the way these objects formed and the way they evolved, rather than
fundamental properties of dark matter, as far as I know. For dwarf galaxies, for example, we do not think that this ratio is due to some strange property of dark matter. As far as I know, the standard explanation is that the potential well is so shallow that, as soon as you have baryonic effects like stellar explosions and so on, you get rid of most of the gas. So in the end, even if these objects are born with the same ratio of baryons to dark matter, you soon end up in a situation where you are dominated by dark matter. In fact, we think that most of the halos are really dark: they do not host any visible matter, sorry, baryons, stars or gas and so on. But this has more to do with bias than with fundamental dark matter properties, as far as I know.

Okay, so I think we can thank Pasquale.