Good. So, we come towards the end. I apologize, yesterday I was perhaps a little too quick on some subjects, and the reason is perhaps that those are the furthest away from the direct interest of most people in the audience. But I hope you got the main point, namely that there are different strategies to look for dark matter, particularly for a wide class of dark matter candidates, what I called WIMPs, Weakly Interacting Massive Particles. You have at least three big classes of search strategies: you can try to produce them at colliders in association with something else; you can try to look for the recoils of nuclei hit by these particles in underground detectors; or, the topic I'm going to treat today, you can search for them indirectly. And regarding the few words I wanted to say about direct detection, the challenge of these experiments, I tried to convince you, is not so much in the theory involved in the calculation of the signals once you have a putative candidate. The problem is to try to infer the size, for instance, of the ball that is hitting your billiard ball just by looking at the recoil of that ball, without knowing the velocity or the incoming direction of the other one, fighting against a radioactive background, and fighting against maybe poorly known foregrounds that you thought should not be there but are there, sometimes discovering radioactive species in your supposedly stable target. So each technique has its own advantages and difficulties, and today I will focus on indirect dark matter detection, in particular for WIMPs, although I will enlarge the definition a little in some cases to highlight differences that may arise in some signals if, for example, your particle is not annihilating but rather decaying. Here is the outline. Basically, I cannot cover all potential indirect ways to probe dark matter; I will focus on a few characteristic ones: gamma rays, neutrinos, charged cosmic rays.
I will spend a few words also on constraints coming from the CMB, and I will try to wrap up on what the actual prospects are of discovering something with this strategy. I will leave out a few interesting indirect probes, like radio waves, or the X-rays that I briefly mentioned yesterday, as some of you asked in the question time, for searching for alternative dark matter candidates, such as sterile neutrinos, which are decaying. There might be other constraints coming from energy transfers, for example, in stars, and there are many techniques now being developed to search for peculiar anisotropy patterns, for instance in gamma rays, but not only gamma rays, that might be associated with dark matter. This, I think, is more specialized, frontline research, but I'd be happy to talk about it in the questions. What does it mean to search for dark matter indirectly? It means that you don't detect your dark matter particle directly, but rather something this dark matter particle does: in general, final states in the standard model, produced through annihilations, through decays, or maybe through interactions of this particle in some remote objects. Typically, this indirect dark matter strategy is the closest one to astrophysics and cosmology, and in a sense it's challenging because you don't have your particle directly produced in your collider or directly interacting in your detector. However, it's also the most natural way you can think of discovering dark matter through other probes, for the simple reason that all probes that have given us hints of dark matter are indirect, in a sense: they are gravitationally based. In reality, the importance of these probes is that they can give us access to other types of characteristics that cannot be probed directly in the lab, even in principle, so it's a sort of necessary program that you have to push through.
Now, as I tried to impress on you in the previous two lectures, there is no guarantee that there will be any indirect signature associated with dark matter. Most dark matter models predict some sort of associated signal, either in colliders or in direct detection or in indirect detection, but there are also models that predict none of them, so be ready to take the risk. And again, I'll focus mostly on WIMPs and this electroweak scale paradigm, but you may have plenty of viable candidates not only at the GeV-TeV scale but really down to very low energies, and I won't insist further on this caveat. So in the simplified diagram that I drew already yesterday, what does indirect detection mean? It means that we are trying to look in this direction for WIMPs, namely residual annihilations of particles that are gravitating in potential wells, which could be the inner part of our galaxy, or the whole halo of our galaxy, or external galaxies, or dwarf satellites of our galaxy, into final states which are standard model particles. Of course, one appealing feature of this paradigm for searching for WIMPs is that, in principle, as I described, this is the same kind of process that happened in the early universe, the one that determines the setting of the abundance that dark matter has; you remember this computation of the Y parameter that has to match the observed value of omega dark matter. So in general, one of the hopes is that once you measure some signal, you will be able to infer something on the value of this sigma v and, of course, of the mass, in such a way that you can try to check whether what you are seeing is consistent with an object that formed in the early universe by the mechanism we were discussing yesterday.
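The link just mentioned, between the relic abundance and the annihilation cross section, can be sketched numerically. This is only the standard back-of-the-envelope relation for an s-wave thermal relic; the numerical prefactor and the observed abundance below are the usual approximate values, not numbers quoted in this lecture, and a real calculation would solve the Boltzmann equation for the yield Y:

```python
# Back-of-the-envelope "WIMP miracle": for an s-wave thermal relic,
# the relic abundance scales inversely with the annihilation cross
# section, Omega h^2 ~ 3e-27 cm^3/s / <sigma v>.

OMEGA_DM_H2_OBSERVED = 0.12  # approximate measured dark matter abundance

def relic_abundance(sigma_v_cm3_s):
    """Approximate Omega_DM h^2 for a thermal relic with s-wave <sigma v>."""
    return 3.0e-27 / sigma_v_cm3_s

# The canonical "thermal" cross section lands close to the observed value:
sigma_v_thermal = 3.0e-26  # cm^3 / s
print(relic_abundance(sigma_v_thermal))  # ~0.1
```

This is why, as discussed later for the Fermi bounds, cross sections around 3e-26 cm^3/s are the interesting benchmark: a larger sigma v would underproduce dark matter, a smaller one would overproduce it.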
There is a caveat, because this link is only present if sigma v is very weakly dependent on the energy of your particle; more technically, for instance, if the annihilation goes through the S wave, you are far away from resonances, co-annihilations play no big role, and so on and so forth. But if this is not the case and you are very confident in a specific model, you can still predict the link between the early universe and present-day annihilation. And, of course, the signatures do depend on the different channels, so it's not equivalent to search for a given candidate in gamma rays or neutrinos or charged particles. To some extent you expect that the normalization should not be widely different between different channels but, again, do not underestimate the creativity of model building. And then there is an important point: the rate of these processes depends on the astrophysical distribution of dark matter, because this process can only happen where the concentration of these particles is high enough, which means that you need to know where to look, in places where you do expect large dark matter densities, and this is particularly the case, as I will show, for photons, for gamma rays. What are the peculiarities of gamma rays? Gamma rays, as you know, are neutral, so they retain directionality: they come from the direction of the source, which is a good thing because we can hope to use directional information. They are relatively easy to detect; do not say so to a gamma ray experimentalist who worked ten years or more to build Fermi and launch it, but compared to other probes they are relatively easy. And there is a lot of background, fortunately for astrophysicists of course: the flux of objects in the galaxy or in external galaxies that do emit gamma rays. So it's not a clean search. Now, you might have seen the following formula already; if you didn't yet, I'm reporting it here: the flux of gamma rays.
The flux of gamma rays, the differential flux, that is the flux per unit energy, surface, time and solid angle, is usually written in the form that I am reporting there, but let me try to describe where the different pieces come from. You have a particle physics component, which is just the differential spectrum of, for example, photons produced in one annihilation event, and then you have to count how many events happen per unit time and per unit volume, and integrate along the line of sight. Again, it's the usual formula: you remember the collisional term that we had, something like n1 n2 sigma v. This describes the rate of annihilations per unit volume per unit time, and here it is the same. You have something that is rewritten as the dark matter density over the mass of the dark matter, squared, which is this n squared factor; you have sigma v; you have a factor 4 pi that comes from the solid angle, because this is normalized per unit solid angle; and then, since this is per unit volume, you integrate along the line of sight: you factor out a reference value of the density, and the integral runs over (rho over the reference value) squared. And then there is a factor that is typically 2 for self-conjugate dark matter, like Majorana particles for instance, and might be 4 if that is not the case. Why? Because if you have, for example, a Dirac particle, so that the dark matter is not coincident with its own antiparticle, then only one half of your particles are good targets to annihilate with: for each dark matter particle, only the particles of the opposite charge are good targets. So in reality the density that should enter would be (rho/2) squared, which gives the factor 4 in that case, and the signal is a factor 2 larger if it's a Majorana particle.
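The pieces just described can be put together in a short numerical sketch. Everything here is an illustrative assumption of mine, not a number from the lecture: the NFW profile parameters, the Sun-to-centre distance, and the crude trapezoidal line-of-sight integration are all placeholders, and the spectrum dN/dE is left as an input:

```python
import math

# Sketch of the annihilation flux: (particle physics factor) x (astrophysical
# line-of-sight integral of rho^2), with the 2-vs-4 symmetry factor.

RHO_S, R_S = 0.35, 20.0   # assumed NFW normalization [GeV/cm^3] and scale radius [kpc]
R_SUN = 8.1               # assumed Sun-galactic centre distance [kpc]
KPC_IN_CM = 3.086e21

def rho_nfw(r):
    """NFW density profile: rho_s / ((r/r_s) * (1 + r/r_s)^2)."""
    x = r / R_S
    return RHO_S / (x * (1.0 + x) ** 2)

def j_factor(psi, l_max=100.0, steps=20000):
    """Line-of-sight integral of rho^2 at angle psi from the galactic centre,
    in GeV^2 cm^-5 (simple trapezoidal integration)."""
    dl = l_max / steps
    total = 0.0
    for i in range(steps + 1):
        l = i * dl
        r = math.sqrt(R_SUN**2 + l**2 - 2.0 * R_SUN * l * math.cos(psi))
        r = max(r, 1e-3)                     # regularize the singular centre
        w = 0.5 if i in (0, steps) else 1.0  # trapezoid end-point weights
        total += w * rho_nfw(r) ** 2 * dl
    return total * KPC_IN_CM

def dphi_dE(j, sigma_v, mass, dN_dE, majorana=True):
    """Differential flux per solid angle: symmetry factor 2 for self-conjugate
    (Majorana) dark matter, 4 for Dirac, times 4*pi for the solid angle."""
    k = 2.0 if majorana else 4.0
    return sigma_v / (k * 4.0 * math.pi * mass**2) * dN_dE * j

# The astrophysical factor falls steeply away from the galactic centre:
print(j_factor(math.radians(1.0)) > j_factor(math.radians(30.0)))  # True
```

Note how the uncertainty discussed next enters only through rho_nfw: swapping in a different inner profile changes j_factor near the centre by orders of magnitude while leaving the particle physics factor untouched.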
Now you can check this expression; typically people describe it as the product of a particle physics factor and an astrophysical factor. Why is this astrophysical? Because it only depends on how dark matter is distributed and on the direction you are looking at; Omega here is the solid angle, and the factor is in general dependent on where you look. Now there are some caveats. This is only true if sigma v does not depend on velocity; otherwise, by the same trick we saw yesterday for direct detection, you should integrate over the velocity distribution. And it holds only if prompt emission dominates: if your gamma rays do not come directly from the annihilation event but from, say, the energy losses of the charged particles in the final state, those charged particles might propagate a bit before they emit a gamma ray, and then of course the previous equation does not work; we will see how to deal with that later on. Now let me just make some remarks on the astrophysical factor. Unfortunately, take the case of our galaxy: the highest dark matter signal is expected to come from the inner part of the galaxy, but the inner part of the galaxy is actually the region where not only the astrophysical backgrounds are higher, but also the baryonic to dark matter ratio is higher. It means that in the inner part of the galaxy, although the density of dark matter is expected to be higher, the baryons dominate the total potential, which means that observationally you cannot determine very well which part of your gravitational potential is due to dark matter; it is almost consistent with zero, with large error bars. That means that you have to rely on simulations or theoretical assumptions to know the expected profile of dark matter, this function rho, in the inner part of the galaxy. This is just an example with some sketch models that have been proposed: some fit simulations, like NFW, which is based on dark-matter-only simulations; then you have others that have been argued to more closely follow simulations when you include baryons; and sometimes there are just fitting functions for rotation curves. And you see that in the inner galaxy, if you really go to parsec scales, you have uncertainties of several orders of magnitude. This is part of the problem: where your signal is maximal, your uncertainty on the signal is also maximal. This is a different state of affairs for decaying dark matter particles. Why so? Because for decaying dark matter, the counterpart of the formula that I'm reporting here would just have the decay rate, which is one over the lifetime of your particle, replacing the sigma v, and here you would have just the density of your dark matter halo integrated along the line of sight. Now this is much more constrained. First, why? Because you can immediately realize that the integral over the whole volume of rho must give you the total mass of the dark matter in your halo, which is related to the quantity that you are seeing here; it's a sort of projected density, but you have a constraint which is much closer to it. And second, since the density does not enter with a square power, it does not enhance this huge uncertainty in the inner galaxy. So depending on the type of dark matter candidate that you are looking at, this signal might be more or less uncertain; this is another example of the interplay between the particle physics part and the astrophysical part. So, where to look? Again, I told you, you have to look at simulations to have an idea. This is just one example, not even one of the most updated ones, but it's a sky map in galactic coordinates of a typical dark matter signal in the sky, and you recognize some features. The intensity is made up, so you should think of the normalization of this plot as almost arbitrary, but you see
clearly something at the galactic center, which is prominent; you see an important glow in a big region in the inner halo; then you see some spots here and there. The spots are either due to substructures in our galaxy, like satellites of our galaxy, dwarf galaxies and so on, or to external objects, like clusters of galaxies far away. And then, you don't see it here, but the minimum of this map is not at zero: there is a sort of diffuse glow that comes from the outskirts of the halo in which we are immersed and from the extragalactic sky. Each one of these is a potential target for the search of dark matter through gamma rays, and each one has specific features, so let me soon describe a few of these types of searches and their specificities. Before that, let me spend a few words on the other piece entering the previous equation, namely the particle physics factor, this spectrum here. Now, how reliably can you predict this spectrum? It turns out that, by astrophysical standards, once you have a model you can predict this spectrum quite well. Why so? Because if you have dark matter particles annihilating at scales which are, say, hundreds of GeV, you more or less know the physics once you specify the dark matter model, in the sense that once I tell you dark matter is annihilating into gauge bosons, I know, to a good approximation, the fragmentation and decay processes associated with that. This is standard model physics, and I can predict how many gamma rays will result as the final outcome of this process. Now, you have two qualitatively different signals. One is this sort of showering, cascading, fragmentation and decay of all the standard model particles which are unstable: they are not photons, but they will produce photons eventually, and this is a prompt continuum spectrum, which you predict with tools typical of the particle physics community, like Pythia. What I am showing here is some examples of these spectra for different final states: you may have heavy quarks, you may have gauge bosons, and you see that they are relatively similar. There are some differences if you go, for example, to leptonic final states but, again, the uncertainty is relatively modest once you specify your model, once you tell me how likely your dark matter is to annihilate to taus rather than quarks, for instance. There are other photons, the secondary photons, associated with the fact that you do produce, as endpoints, some electrons and positrons. These electrons and positrons can lose energy, and the energy lost by electrons and positrons sometimes turns into gamma rays, so this produces additional emission; in order to deal with that we will have to describe the propagation of electrons and positrons, and we will briefly attack this problem later on, so it is much more complicated. But there is a qualitatively different type of photon that you might expect, and these are lines. You may have a dark matter diagram of this sort; again, I am only using these diagrams as toy pictures, but in principle you may think of having something like this: these are your dark matter particles and these are photons. Or you may have something different: from these dark matter particles you may get, for instance, one photon and something else, which might be a Z boson, or might also be a Higgs boson, et cetera. These are expected to arise, but you should immediately tell me: come on, it cannot be, dark matter is neutral, it does not couple to visible light, so I cannot really attach a photon to dark matter, otherwise it would behave like an electron. In reality, in quantum field theory you have higher order processes, so inside this unspecified blob I can have some virtual particles, and those virtual particles may be charged, and there I can attach photon lines, and this is exactly what the sketch there tells you. But there is a price to pay: I have to attach more lines, which means that here I
have extra vertices, extra couplings of these particles, the ones circulating in my unspecified loop. It means that these kinds of final states are suppressed by powers of the coupling: in particular, you should expect a suppression which is of the order of alpha squared, parametrically, with respect to the main annihilation diagrams of dark matter, which means that you have to fight a suppression factor of the order of 10,000. So you should see 10,000 more photons coming from the continuum processes that I showed you before than from this one. Now, the good news about this process is that, since its energy is equal to, or related to, the mass of my particle by simple algebra, these are very sharp features, very robust features. If I were sure that I had seen a line, a photon line of, say, 190 GeV, there is basically nothing in astrophysics that can mimic it. Unfortunately, before I see 10 photons of this sort, I should have seen something like 100,000 photons of the other type, and most likely I would already have convinced myself that that was a detection of dark matter. Of course there are some clever theoretical ideas to try to overcome that, but this is again another proof of the no-free-lunch theorem in physics. So here I am talking about annihilation, but yesterday I touched exactly this point: you need some protective symmetry, because otherwise any new particle that you add to whatever scenario beyond the standard model is unstable, unless there are reasons that keep it stable. So most models just rely on the fact that there is some protective symmetry, a Z2 sort of parity symmetry, as I was telling you before. One of the appeals of, say, supersymmetry was that you might relate this to R-parity, and so to a residual discrete symmetry, which might have to do with other empirical facts, like the fact that you don't easily see these particles singly produced at colliders, because they should have been produced in pairs, the fact that you don't see proton decay, et cetera, et cetera. In principle it need not be a discrete symmetry: it might be a different type of symmetry, it might be a gauge symmetry. Just think of the standard model: you have different particles that are stable for different reasons. The lightest neutrino is stable just because of Lorentz symmetry: it's the lightest fermion, and basically how can it decay into anything else? You may have a gauge symmetry preventing the decay from happening. It might just be accidental: if you have particles which are only coupled very, very weakly, say gravitationally, they might not be absolutely stable, but decay on very long time scales. Here I'm talking about annihilation, so if there is a symmetry of the parity type, this symmetry is preserved: even if each of these particles has parity minus 1, the initial state of two of them has parity plus 1, and the final state is plus 1, so you are safe, you preserve this symmetry. But this is an important question: addressing the stability of your dark matter candidate is one of the strongest theoretical constraints in model building. Now, how do you detect these gamma rays? There are two main classes of telescopes, I should say three, but let's say two main classes. Telescopes in space: you launch a tracker and a calorimeter that tries to see, for example, the pair production of a photon in a medium, and you see the pairs; this is the case of Fermi, and then you reconstruct the incoming direction, but there you really see the photon inducing your pair. Or you try to see it from the ground, with arrays like MAGIC, or H.E.S.S. and VERITAS: when one such energetic photon triggers a shower in the atmosphere, it starts interacting in the atmosphere and produces particles; these particles are superluminal in the medium, so they emit Cherenkov light, they multiply, they redistribute their energy, and then the challenge is that you have to separate the photon-induced
Cherenkov light from the hadron-induced Cherenkov light. This technique was developed, I would say, in the 80s basically, and now it works very well. These are very different instruments. The space telescope can have a very wide field of view, but it's limited in collecting area to whatever you can launch in space, and usually you cannot launch something much bigger than that, unless you are very rich; and basically it stops at hundreds of GeV. From the ground it's the other way around: the higher in energy, the easier it is to trigger a detectable, recognizable shower, but you have a limited field of view, although a huge collecting area. So, for example, for large scale studies the satellite is much better; for point-like or small objects and high energies, the ground-based telescope might be better. And then there is a third technique, more traditionally linked to cosmic ray detection, based on water pools; the new generation detector of this sort is HAWC. This is just to say that you cannot detect everything with a single instrument: each one has some limitation. What this kind of experiment sees is something like this. This is the sky seen by Fermi, a very old map, but enough to recognize that it's quite crowded: you see some diffuse emission, you see several types of dots on this map with different colors, different sources, and so on, and this doesn't look like the nice simulation map that I was showing you before, with a big spot here and a few spots here and there. So it means that what you are looking at is mostly astrophysics, and the challenge is to disentangle it. The same is true for the Cherenkov telescopes: they can narrow down their search to a specific area, take the galactic center, and even there you see different things: you see some diffuse emission, you may see point-like emissions. Look at this one: it looks almost like the dark matter signal, but then you look at the spectrum and it's almost a power law, featureless; it doesn't match what you should expect, so most likely this is astrophysics. And then, even if you try to remove these objects, you may see some diffuse radiation, but clearly this diffuse radiation has nothing to do with dark matter: look at the shape, it follows the galactic plane. So clearly you have to fight all this astrophysics, and that's most of the challenge. Just to give you some idea of the constraints and the reach of these searches... please, yes? The question is whether the signal has to come from close to us, whether it comes from the annihilation process. Yes... and no, and the reason is exactly the same diagram I'm showing here: the annihilation products will interact, if it's WIMP-like dark matter, but with such a suppressed rate that you can forget about it. Right, but this is exactly the same kind of asymmetry that arises if I ask you: what's the probability for an electron in intergalactic space to interact with a CMB photon? The answer is not very low; in fact it has a high probability, over its propagation lifetime, of hitting a CMB photon. But if I ask what's the probability that a given CMB photon will interact with an electron, this is a very small number. OK, so the point is that annihilation is a very rare process; when it happens, you have gamma rays, but it doesn't mean that, since this happens from time to time, every photon meeting dark matter will interact. OK, just to give you some sensitivities and some searches that have been performed. One very powerful search uses those little dots that I was showing in the map, the satellites of our galaxy, the dwarf satellites. This is a pictorial view of a few of them in the surroundings of our Milky Way, and what typically Fermi has been doing, but also the Cherenkov telescopes, is to search in their direction for excess photons with respect to the diffuse astrophysical expectations. In order to enhance their sensitivity, they stack them: they sum all these patches and see if there is a collective excess. The result is negative, and so they can put bounds on the rate of annihilation of dark matter. This is one recent bound from
Fermi. So everything that is above this blue curve is disfavored, if not excluded; these are preliminary results. The shaded regions are regions preferred by some hints of excesses elsewhere, in the galactic center, and this is for a specific channel, heavy quarks, in particular b b-bar. OK, the one thing that I want to point out is that this dashed line is something we should be familiar with by now: it is the rough order of magnitude of this WIMP miracle I was talking about. Namely, if you have a particle of the order of, say, 30 or 40 GeV in mass, and all the baseline story I told you about, the ideal toy model of a non-relativistic relic candidate that fulfills all the conditions about matching omega dark matter, etc., well, the annihilation cross section that you would expect is at this level. So it means that these experiments are probing exactly the range of electroweak scale masses, these hundreds of GeV, exactly the same scale as the Higgs, with the kind of cross sections, the kind of couplings, that are suggested by this WIMP miracle argument. So although there is no firm detection, the good news is that we are starting to test the interesting parameter space. Of course the technique can be repeated with Cherenkov telescopes; there the sensitivity is much lower in terms of cross section, however they can extend to higher masses. So maybe our picture of how dark matter formed is completely wrong, and the mass of dark matter is 5 TeV or 10 TeV with a higher cross section; we are testing this kind of, say, failure of our theoretical intuition, and till now there is no detection. Of course this is not the only bright place: in the simulation map I was showing you before, another bright place, if you remember, was this sort of extended halo around the galactic center. Now, there have been studies, for example with Fermi, of the radiation coming from bands which are sufficiently far away from the galactic plane, because here you see that there is a mess, it's full of astrophysical objects.
So somehow you know that the dark matter signal should peak here, but you say: OK, I don't know how to describe my background well here, this looks like a better place to look at; it's a compromise choice. They did it, and in fact they derived constraints in the same cross section versus mass plane, which are comparable, maybe a little bit better depending on the channel you look at. But why is this different? Because now it depends on something completely different from what I showed you before. What I showed you before was the stacked collection of these satellites of the Milky Way, where you have to infer the mass of dark matter in each one of them from the rotation curves, for example, of the few stars that you may see there. Here the signal comes from the bulk of the Milky Way halo, including the relatively local part of the sky. Why? Because, by simple geometric effects, if I look 10 degrees above the galactic plane, I also cross relatively nearby regions. So again, this confirms the picture that we have sensitivity to this kind of dark matter. And if you want to look at the galactic center, what did I tell you about small patches of the sky? Maybe, in fact, the Cherenkov telescopes, these ground-based telescopes, are way more sensitive; they have a larger collecting area. In fact the H.E.S.S. telescope, for instance, did that, and they got this kind of exclusion bounds, which are the strongest exclusion bounds in that energy range. But you see what they had to do: they had to extract their dark matter signal from this green area, they had to remove this yellow area, and they had to compare this green region with a similar-shaped region elsewhere. Why did they do all that? Because it's a mess, there are signals everywhere, so they have to preselect a region where the signal is expected to be sufficiently strong, remove objects that are known to exist in the middle there, and remove this region which is full of astrophysical photons. And then, in order not to fool themselves, they compare the photons
that are expected to come from here with the photons coming from this funny-shaped region. Why? Because there the dark matter signal is maybe a factor of a few smaller, but the astrophysical signal, the unaccounted-for astrophysical signal, is expected to be comparable, and so they can extract bounds. Now, these are quite strong, though not strong enough to probe the paradigm I was talking about yesterday, and they rely on some strong assumptions about the extrapolation of the profile in this narrow area, but that's the best we can do in this kind of situation. So in order to make further progress, I mean order-of-magnitude progress, you probably need the next generation of instruments. Here, of course, you can look at something completely different: the residual glow that happened to be everywhere, this blue sky background that was in the map I was showing you. How do you compute it? Of course, look at this formula: if you were to compute now everything that has been emitted from the extragalactic sky, all external galaxies and halos, et cetera, et cetera, this integral has to change. Why?
It has to change because now there is a cosmological effect: my line-of-sight integral becomes an integral over redshift, and I have to account for that. There is another effect: over cosmological distances there is some finite opacity for gamma rays, so I have to account for some exponential suppression, which might be important. And then, of course, the dark matter density entering the formula, my reference value, is no longer the galactic one, either from measurements or from simulations, but the cosmological one. So the formula that you get looks like this. It has exactly the same structure as the formula I showed you before, but now you have the dark matter density squared, the cosmological value; you have a spectrum which has been emitted at some redshift z, so you have to integrate over it; you have the absorption factor; there are the factors coming from the scaling of the density, which, once squared, give (1+z) to the sixth; and then the line element brings in the Hubble function evaluated at z. So it is just a generalization of what I wrote before. But the interesting thing is this: just as here you had an enhancement due to the quadratic dependence of your signal on the dark matter density, here you depend on this flux multiplier, which is nothing but the variance of the density contrast of dark matter, which is in turn given by the power spectrum of the dark matter, and this is the way you can compute it. Unfortunately, this power spectrum is not needed on the scales where it is linear and nicely described by our cosmological model: you see, what enters is basically the power spectrum in the limit of r going to 0, that is, very short distances, which is deeply nonlinear. You have to integrate your power spectrum over wave number up to the end of the spectrum, and you don't know where the end is, you don't know how it looks in the deeply nonlinear regime, so you have to devise some tools, or guesswork, or whatever, to try to extrapolate
what you know of the power spectrum in the linear regime into the deeply nonlinear regime. So the bad point about this is that you have a band of uncertainty for the integrand of this piece which is quite large, and depends a bit on how you extrapolate, and also on the intrinsic properties of the dark matter, because something you probably never see is that this spectrum for dark matter is supposed to end at some point: it does not grow forever. And where does it end? Exactly like for baryons you have some sort of Silk damping, which depends on the fact that you dissipate fluctuations on the scale where collisional processes happen: if dark matter interacts a little bit, even just a little bit, at some point it will dissipate these fluctuations, and so at some point this spectrum will be cut off, and this depends on the physics of the dark matter. This is a curse, but also an interesting point, because in principle this signal tells you something about the collisional properties of dark matter. Unfortunately this is a very weak dependence, a sort of logarithmic dependence, but it just shows you the complementarity of different tools to different aspects. Again, the situation here is that the constraints coming from Fermi are comparable to the constraints that come from dwarfs: a bit worse, a factor of a few worse, and a bit more uncertain. On the other hand, they extend to very high masses. Why is this so? The reason is exactly due to the fact that you integrate over redshift up to very far away. Now there are two effects. One is that, as I told you, Fermi can only reach up to say 100 GeV or 200 GeV in energy; however, a photon emitted at one TeV at redshift 10 will arrive with an energy of 100 GeV, so Fermi becomes sensitive to much heavier stuff. The second point is that there is this suppression, which indicates that there is some opacity for these photons in the extragalactic sky; but this opacity is also linked to a possible reprocessing of this energy. What does it mean?
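Before unpacking that, a quick numerical aside on the redshift bookkeeping just described. This is a toy sketch with assumed round numbers, not the actual Fermi analysis: it checks that a 1 TeV photon emitted at redshift 10 indeed arrives near 100 GeV, and it writes down the schematic weight of each redshift shell in the flux integral, with the (1+z)^6 density-squared factor, the line element, and an opacity suppression.

```python
import math

# Toy sketch of the extragalactic bookkeeping (illustrative numbers only).
# A photon emitted with energy E at redshift z is observed at E / (1 + z).

def observed_energy_gev(e_emit_gev, z):
    """Observed gamma-ray energy after cosmological redshift."""
    return e_emit_gev / (1.0 + z)

# A 1 TeV photon emitted at z = 10 arrives at ~91 GeV, inside Fermi's reach.
print(f"1 TeV at z=10 -> {observed_energy_gev(1000.0, 10.0):.0f} GeV observed")

# Schematic weight of a redshift shell in the annihilation flux integral:
# rho_dm^2 scales as (1+z)^6, the line element gives c dz / ((1+z) H(z)),
# and absorption gives exp(-tau(E, z)).  Assumed flat-LCDM parameters:
H0, OM, OL = 70.0, 0.3, 0.7   # km/s/Mpc, Omega_matter, Omega_Lambda (assumed)

def hubble(z):
    """Hubble rate H(z) for a toy flat LCDM cosmology."""
    return H0 * math.sqrt(OM * (1.0 + z) ** 3 + OL)

def shell_weight(z, tau=0.0):
    """Relative contribution of redshift z (toy: no structure growth here)."""
    return (1.0 + z) ** 6 / ((1.0 + z) * hubble(z)) * math.exp(-tau)

print(f"shell weight at z=0: {shell_weight(0.0):.2e}")
print(f"shell weight at z=5: {shell_weight(5.0):.2e}")
```

The real calculation multiplies this weight by the flux multiplier at each redshift, the integrated nonlinear power spectrum, and that is exactly the piece with the large extrapolation band discussed above.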
In the extragalactic sky you have a very high energy photon which from time to time may find, for example, a photon of the extragalactic background light, the light emitted by all the stars and the dust of all the galaxies in the universe, and then it can make a pair, if this process is above threshold. But these electrons and positrons in turn can interact with a photon, for example the CMB photons, and upscatter it: this photon may now have GeV energies, for instance, and so Fermi is sensitive also to stuff that has been absorbed, and this is why Fermi becomes competitive in sensitivity even with Cherenkov telescopes through this kind of analysis. So, I'm sorry, here I need to be a bit more qualitative, because you see there is a huge variety of phenomenology that you can span, and not all probes are sensitive to the same things: some of them have difficulties, but these difficulties can turn into advantages to probe things that you cannot probe with other techniques, and this is also what makes this kind of approach quite interesting. Now let me take a sort of variation of this technique, which concerns neutrinos. At face value, neutrinos look like the poorer version of the gamma-ray probe. Why? They are neutral, for sure they are neutral, so in principle you could use them exactly like you use photons. Unfortunately, neutrinos are so weakly interacting that it's very hard to detect them, so you may see this as looking at the same sky I was talking about, but with a significantly suppressed sensitivity. So, forget about them? No: there are two important advantages. One is that we know of many fewer sources of potential astrophysical backgrounds in neutrinos than we know for gamma rays, so in a certain sense, if we had access to the neutrino sky, it would look, we hope, cleaner. And the second thing is that neutrinos do not suffer significant absorption, because they are weakly interacting, at least up to some energies. Now, how do I exploit this? We will see in a minute.
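First, a back-of-the-envelope estimate of just how weakly these neutrinos interact. All numbers here are rough assumptions, for example a neutrino-nucleon cross section of order a picobarn at TeV energies: the quantity computed is the probability that a TeV neutrino interacts while crossing a full kilometer of ice.

```python
# Back-of-the-envelope estimate (assumed round numbers): probability that
# a TeV neutrino interacts while crossing 1 km of ice.
# For P << 1:  P ~ n_nucleons * sigma * L

N_A = 6.022e23       # Avogadro's number: ~nucleons per gram of material
RHO_ICE = 0.92       # density of ice, g/cm^3
SIGMA = 1e-36        # cm^2, i.e. ~1 picobarn at TeV energies (rough)
L = 1e5              # 1 km expressed in cm

n_nucleons = RHO_ICE * N_A          # nucleons per cm^3
p_interact = n_nucleons * SIGMA * L
print(f"P(interaction over 1 km of ice) ~ {p_interact:.1e}")  # ~5.5e-08
```

With only about one neutrino in ten million interacting per kilometer, the event rate is flux times this probability times the target size, which is why instrumenting a natural cubic-kilometer volume, rather than scaling up a man-made tank, is the only realistic option.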
The first thing we face in dealing with neutrinos is exactly that it looks like a hopeless version of the photon probe we just described. There is of course a limitation due to the fact that the probability for this stuff to interact is very low. Just to give you an idea, the cross section at TeV energies for these neutrinos is a picobarn, so these are the kind of particle-physics cross sections of rare LHC events, and even at a PeV it's below the nanobarn. So what kind of solution do we have to envisage to detect this? Can we build a huge man-made detector which is so big as to have sufficient statistics? In reality, what people have been thinking of is to go to natural volumes, huge volumes, and when I tell you huge volumes, I mean really huge. This is just to scale: the largest neutrino telescope in the world, IceCube, compared with the Eiffel Tower. These are instrumented with Cherenkov detectors, but very sparse; you can afford to be very sparse if you are aiming at very high energies, because the Cherenkov signal might extend over a sufficiently large volume that you still detect it, and then you have to use natural media. So, just for a comparison, this is the largest man-made neutrino detector, Super-Kamiokande, and the comparison with current neutrino detectors for astrophysical purposes is clear: you cannot think of scaling this size up to this size with a man-made detector, you have to go to natural media. If you are wondering why the gigaton scale, the cubic-kilometer size, has been chosen, it is not because Francis Halzen is aiming at the biggest thing he can do; it is based on estimates of the signal that you should expect: if the astrophysical objects emitting TeV gamma radiation emit comparable fluxes of neutrinos, this is the size that gives you roughly of the order of, say, 10 events per year in neutrinos. So if you want to start to do neutrino astrophysics at TeV and PeV energies, if the sources of neutrinos
are comparable in luminosity to the sources of the gamma rays that we see, you should build something like that, a cubic kilometer, for an astrophysical neutrino flux at roughly this level; so theorists are sometimes clever enough to predict what you should expect. And just to give you an idea of the sensitivity to exactly the same kind of search I mentioned before for gamma rays: you look at this sky map, you ask what's the largest possible signal consistent with the distribution of the data that you have, and this is the kind of plot that you get, so everything above these curves is excluded. Again, this looks very weak: the green curves here are the Fermi constraints, so you see that at scales of hundreds of GeV to TeV, Fermi is hundreds or thousands of times better than neutrino telescopes. Are you surprised? I told you that neutrinos interact way less than photons, so somehow you expect photons to win. But there are cases where neutrinos are interesting, which is the high-mass end, and there is another important case where neutrinos are interesting, which exactly exploits their weakness: they do not interact much, I told you. So what can we do with neutrinos? We can try to see signals in neutrinos which, in other channels, would fall in the absorption regime: even if we are hopeless to see photons coming from deeply opaque regions, neutrinos we can use for that purpose, and this has been done. And in fact one thing that I think is a wonderful idea was developed, I think, 30 years ago, among the first ones if not the first one, in this nice article by Press and Spergel, whom you might know for completely different types of physics. The idea is the following. I told you that you might have WIMP interactions in underground detectors, transferring some small amount of energy, or a big amount of energy, to the target nucleus, but I didn't tell you what happens to the remaining WIMP. Now, if the remaining WIMP
is left with a kinetic energy, and so a velocity, which is less than the escape velocity from the gravitational body it is in, in that case the Earth, it will start orbiting the Earth; it will become attached to the Earth, like a small satellite of the Earth. This WIMP entered the Earth, by the way, because it interacted underground, on some elliptical orbit, and it stays there because there is no other mechanism that really pumps up its energy. So at some point it will interact a second time, and it continues losing energy, and so on and so on, and it will sink to the center of the Earth, and the same happens in the center of the Sun. Now, what happens there? You start accumulating WIMPs. It's like a sort of Swiss bank for WIMPs, and it's exactly at the center of the Earth; I think Switzerland is the only country for which, by law, if you own some piece of land you own it down to the center of the Earth, so these things might be related to the accumulation of WIMPs. And the point is that this is sensitive to the same kind of physics I described yesterday, namely the elastic interaction of WIMPs with a target. It's not exactly the same: what is important here is not what happens to the recoiling nucleus, it's what happens to the WIMP. The more energy the WIMP loses, the easier it is for it to get captured, so you are a bit more sensitive to the low-velocity end of the f(v) velocity distribution, rather than the high-velocity end, which was crucial for the signal to be detectable in underground detectors. So, just if you want to see a formula, a simplified formula (there has been a huge systematization of this theory by Gould in the 90s): this is an integral of the same quantity, the density of the dark matter particles that you may find, normalized, in this case integrated over mass shells of the Sun, because depending on the shell, at that distance from the center of the Sun where the WIMP interacts, the gravitational potential is
different, so the probability for it to be bound to the Sun or not is different, so you have to sum over them; it integrates over some function of the velocity distribution, and you get your final result for the capture. But basically it goes like sigma times rho, which is the same prefactor that we found in direct detection of dark matter. So it's counterintuitive, but you can use these huge telescopes to probe somehow the same kind of physics that the small underground detectors can probe, not by looking at recoil events, but by looking at a neutrino flux coming from the center of the Sun or the center of the Earth, and this is just a cartoon illustrating the same thing. Now you may ask: up to what level do WIMPs keep accumulating and annihilating? It depends on the loss and depletion mechanisms. In principle the number density of your WIMPs in the core of the Sun, assuming it's homogeneous, if there were only capture, would just be given by the capture rate, which scales as the product of the density of the WIMPs times sigma, the cross section: it's proportional to this quantity. However, if you accumulate too many of them, the probability for them to annihilate starts rising. How does the probability for annihilating scale? It scales with something proportional to the square of the number of particles present there. So at some point, if these numbers are not ridiculously small compared to the lifetime of the universe, of the solar system, and so on, you will get a stationary situation where N-dot is equal to zero, and so C_A over C, sorry, it's the other way around, the square root of C over C_A, gives you the steady-state equilibrium value of your population. So what does it mean? It means that you can predict the annihilation rate directly from this formula: it turns out that the flux of neutrinos, as you can see, is basically just given by the capture rate. So at equilibrium the flux of neutrinos measures the scattering of WIMPs, not their annihilation, so you can put these
constraints on the same level as the constraints from underground detectors, and these are the latest results from the IceCube collaboration, compared with other detectors like ANTARES, Baksan, etc. This is for the spin-dependent case; I told you that there are two main ways these particles are expected to interact, either with the spin of the nuclei or coherently with the nucleus as a whole; and these are the constraints for the spin-independent case. Now again, forget about the details; the only important thing is that here the direct detection experiments, stuff that is put in caves underground, have a sensitivity to anything above these curves, these gray curves here. So it looks like neutrinos are very bad compared to this, which should confirm our intuition that this is a loser's channel. However, in the spin-dependent case it's the other way around, and why it's the other way around is easy to understand: the Sun is mostly made of protons, and protons do have an intrinsic spin, so somehow the Sun is a big spin-dependent detector. So the Sun wins: the fact that there are lots of protons in the Sun wins against the fact that in a small detector underground you have only the odd nucleon outside the closed shell that can compete. So again, keep in mind that different techniques, different probes, make us sensitive to different aspects of the kind of physics we are dealing with, which means that you should also know a lot of physics in different branches in order to have a more complete picture of what's going on. For now it's relatively easy, because we only have constraints, but the day you start having an excess here and none there, maybe this is telling you something about the specific nature of the dark matter. Now, one thing that is a bit closer to cosmology is the cosmic microwave background. I told you that one of the strongest pieces of evidence that you need dark matter, rather than a modification of gravity, is the fact that the growth of structure consistent with
the pattern that we see in the CMB, and what we see nowadays, requires these additional species. But you can use the acoustic peaks of the CMB, and in general the angular power spectrum of the CMB, in a completely different way. Imagine that you have particles that annihilate after the CMB, after the recombination time. Among the byproducts of the annihilation you may have things like electrons and positrons, either directly or through some other mediator, for example W's or Z's or heavy quarks or the Higgs, and so on and so forth; at the end, since these are all unstable, they will produce gamma rays, they will produce neutrinos, and they will also produce electrons and positrons. Now, this stuff does something: these are energetic particles in a medium that has recently recombined, so a neutral gas. One thing they can do is ionize it. But if the gas is ionized, then the optical depth for the photons is not the same; the probability for them to interact with an ionized gas is much higher. So you might see a signal of dark matter annihilation through the optical depth, the tau parameter, which was introduced already on the first day by David Weinberg. Now, again, I'm simplifying a bit; you should really take into account all the correlations, because you don't measure exactly tau, you measure some angular power spectrum that depends on n different parameters. But the bulk of the argument is that, if you have a knowledge of the tau the photons have experienced, you know basically the integrated amount of ionized matter these photons have crossed, and so you have, indirectly, an upper limit on the amount of positrons and electrons that could have been injected after the recombination time. And, slightly more quantitatively, how do you compute that? Again, the formulas are all variations of the same stuff: here you care about the energy deposited as a function of time, you have the same kind of dynamics, you have rho squared, and then there is this
parameter, which is what CMB physicists prefer to constrain: sigma v divided by 8 pi m squared, again the same stuff that we saw again and again, times 4 pi, because now this is integrated over the full solid angle, times twice the mass of the dark matter. Why? Because this is the energy released by an annihilation event involving two particles of dark matter. Times some function, some dimensionless number, that tells you how much of this energy is useful to ionize stuff, and the whole physics is in this function. Now, how large is this function? You would expect that it cannot exceed one; in reality, since it's a function of time, it can exceed one. For example, Tracy Slatyer has done detailed calculations of this function according to the final state your dark matter annihilates into. Not surprisingly, you see here that if dark matter annihilates into electrons and positrons, this fraction is almost comparable to one. Why? Because it's all available for this stuff to ionize the medium; not all of it, because some of it goes into heating of the medium, for example, and is not useful to ionize, and you have to take into account that other types of particles are less efficient, but again you have roughly between 0.1 and 1. This function is relatively well known; roughly at the 10% level you can compute it. And practically, what the dark matter does to the electron fraction in the universe as a function of redshift is to change the standard curve, the black curve that you see here, where there is no ionization due to stars, into these different levels of plateau, and the tau function is essentially proportional to the integral of this. In a surprising way, but by now you should be used to it, the CMB constraints on dark matter that were announced by Planck in December, constraints on the electromagnetic interactions of dark matter, not the gravitational or gauge interactions, well, they are at the level of the
thermal relic, once again, for particles of the order of tens of GeV, of proton-scale masses, and I won't describe the other points on this curve. So once again you may have very different ways to probe this kind of paradigm. This one is quite indirect, so it's very hard, if you see some excess, to interpret it in terms of dark matter; but for putting constraints it's very robust: here all structures are linear, there is no uncertainty due to the clumpiness, and the particle physics is relatively simple. Why? Because the physics involved is the energy losses of electrons and positrons at energies down to the keV; it's stuff you can study in the lab. So this is very robust. The point is that here it's a sort of one-sided thing: all that dark matter can do is raise the fraction of ionized stuff. Now, what they do in practice is marginalize over the parameters that are considered nuisance parameters for this kind of analysis; but again, even if you do not trust them to better than a few tens of percent, these limits are safe. There is another point: in reality you are not sensitive to the injection of energy at very low redshift by dark matter; it's what happens at redshift, say, 500 to 700 that matters. So in the picture I was showing you, it's really this region that matters; there, there are no stars, there is nothing, it's all linear. But still, technically what they do is marginalize over the other parameters, and I'm simplifying, because I'm just flashing the main parameter that depends on this physics; in reality you have to compute the C_l's and so on with an enlarged model, and then you just marginalize over all the rest. Now we come to the most difficult part, because I have no idea what your knowledge of charged-particle propagation is, so I will do a 10-minute crash course on the diffusion-loss equation. But before I lose all of you, let me tell the cartoon version of the story. The cartoon version is that if you want to look for dark matter through charged particles, which means the
electrons, the positrons, the antiprotons and so on that are emitted, you have to deal with a different type of difficulty, which is that the galaxy, even in a very simplified theorist's picture, looks something like that: a magnetized halo, with very low densities but with significant magnetic fields, which may be regular or turbulent, populated by winds triggered by star-formation activity. There are magnetic inhomogeneities which allow charged particles to scatter on them, so they propagate in a diffusive way; there is convection; there might be reacceleration; there are plenty of plasma-physics effects. So the one-formula version of this mess is that the flux, now, of say antiparticles is always the product of some particle-physics factor times something which is not only a function of the astrophysics, but a functional of my astrophysical distribution of dark matter, of all the astrophysics going in there. How do you deal with that? The theory for this kind of process was established a long time ago, in the 60s: basically we knew that we should write some kind of equation which is a diffusion-loss equation, and this one is just meant to scare you. There is some source term, which is unknown; there is some diffusion term, just spatial diffusion; there is some energy-loss term, since electrons and positrons can lose energy while propagating; there is some reacceleration, or if you wish some diffusion in momentum space that changes the energy of my particles due to scattering on the inhomogeneities of my medium, so if you wish, these are small-scale electric fields my particles can find; there might be a convective velocity, and in general there are adiabatic flow terms, large-scale movements of the gas; and then there are particle-physics or nuclear-physics processes, like the fragmentation and decay of my particles: a nucleus may interact with gas, may lose
one nucleon, and the resulting nucleus might become unstable, so I have to account for all that. And just to give you an idea of the link between the flux variable these equations are usually written in and the phase-space distribution: it's just p squared times the integral over angles of it. By the way, all these equations only describe the isotropic part of my problem; in principle you have to go further if you want to describe, for example, the anisotropic part. And one qualitative feature you might expect is that in diffusive propagation the flux, whatever the injection is, tends to be isotropized. You might have experienced this: every time you open, for example, a bottle of perfume in a room, after a while you smell the perfume everywhere in the room, and you don't know where it came from. This is a typical diffusive process: you cannot easily trace back where it came from, you lose directionality. How do we deal with that? The short answer: you run some numerical code. The problem is that if you do it blindly, sometimes you start violating some hidden assumptions in these numerical codes, and then you end up with strange results and you don't know what to make of them. So, in order for you to grasp some of the physics, and why this is a very difficult search (very powerful in principle, but very difficult because it's very indirect), I will make a toy model, a simplification we can solve, and then I will show you how some charged-particle fluxes depend on some astrophysical parameters, how the dark matter ones depend on some simplified parameters, why this is a very challenging problem to tackle, and what kind of parameters you might be sensitive to. So, zeroth-order approximation: the important thing in this messy equation that I wrote is that in reality, if you move sufficiently high in energy, most of these effects go away, become very irrelevant. The main effects that matter are the spatial
diffusion, and how it evolves depending on the energy of your particle, and the source, how many particles of that energy you inject into the medium. So under this simplification, which on the basis of scaling arguments you can expect to be good at high energies, you can just neglect everything but the source term and the diffusion term, and all this complicated diffusion equation reduces to this simple one, with a diffusion term and a source term. Now, we are not happy yet, so we just parameterize this differential operator with some effective confinement time, and once we do that, we lose completely the information about the space dependence; but you may keep in mind this idea that the source in the medium looks a little bit like a box where, from time to time, there is a finite probability that my particle touches the border of the box and I lose it. So the main physical parameter is: what's the confinement time of my particle within the box? Of course, once I do this step, I'm assuming that everything is completely homogeneous within this sphere. Now, this looks like it comes out of the blue; I will show you that in fact this is the behavior you find, trust me for a second. At the end of the day you can estimate the resulting flux, say of protons or carbon nuclei or whatever else, at the Earth, if you knew the injected one, the source one, which by the way we don't know; we have guesses about that, but we don't know. How do we estimate these parameters? One way, which is very, very popular in cosmic-ray physics, is to use what is called secondary-to-primary diagnostics. What does it mean? It's a complicated way to say that if you look empirically at what you measure in this high-energy flux, the relative abundances of stuff like, say, carbon, iron, et cetera are very similar to the relative abundances of carbon and iron et cetera that you find, say, in the solar system. So somehow these cosmic rays seem to originate from a medium which is more or less similar in
chemical composition to the solar system. But there are some exceptions. These exceptions are species like lithium or boron, or elements with funnier names, which are underabundant in the solar system: you have very little lithium in the solar system, but at the end of the day you have like 10%, or a few percent, of lithium compared to carbon in the cosmic radiation. So the standard interpretation is: since the pattern of chemical elements in the overall distribution of cosmic rays is very similar to what we measure in the solar system, we think that the medium from which they have been accelerated is very similar to the solar system, so there is no lithium, there is no boron at the source. However, whatever this acceleration mechanism is, it has produced nonthermal radiation of, say, carbon, of oxygen, of iron, and this nonthermal radiation has sufficient energy that, once in a while, when interacting with gas, it can spallate: you have phenomena like carbon plus hydrogen of the interstellar medium creating, say, boron plus X; the same can happen to oxygen, to iron, etc., and you can create lithium, et cetera, et cetera. So somehow it's the fact that this is nonthermal radiation that overpopulates these rare elements in the cosmic rays. If this hypothesis is correct, what does it mean? It means that these elements are only created during the propagation, so their Q, their source term of the primary type, is zero, and the only source term comes from the steady-state flux of the parent elements. This is a hypothesis; it looks reasonable, by the way, but let's test it. If it is true, what happens is that for primary species, let's say carbon just to be specific, you have some unknown carbon source term times some unknown effective time parameter, which depends in general on the energy, and this is the steady-state flux that you see at the Earth. Now, you also see some boron flux. What is the boron flux, just to be specific? The boron flux is not proportional to some Q of
boron: its source term is zero. But there is some boron coming from the fragmentation of this carbon in the interstellar medium, so it will be proportional, through a known coefficient, to the flux of carbon, times the same effective time, because we are assuming that they diffuse the same way: these are charged particles, and once they have the same rigidity they diffuse the same way, it's just an electromagnetic phenomenon. But the flux of carbon goes as Q of carbon times the effective time, so you get a factor of the effective time squared, and of course there is here some cross section for the probability for carbon to produce boron in a medium of a given density, the density of the gas of the interstellar medium. Once you take the ratio of boron over carbon, for instance, what do you get? You get rid of the unknown Q source term, and you get something which is proportional, through known prefactors, to the effective confinement time, and that's the way, by comparing the model to the data, you try to determine this poorly known astrophysical, or plasma-astrophysics, parameter, which is the effective diffusion time of species in the galaxy. This is the spirit: this is how you try to get out of this messy situation and extract some physically meaningful parameters. Now I should convince you that the approximation I made is at least consistent with the kind of equations that I wrote, and this is the way to do so; it's the sort of diffusion model for a theorist, a lazy theorist. In the galaxy, the typical length, the radius of the galaxy, is much larger than the thickness of the galaxy, so to a first approximation I consider the galaxy as a one-dimensional system where I don't care what happens at the radial border; I just care about the z variable, above and below the plane. And again, I am in this high-energy approximation where I don't care about all those phenomena that only affect low energies, so I only have to write the diffusion term, and the only important variable is the one perpendicular to the plane. So my
equation, my complicated equation, is reduced to something like that; and if the diffusion coefficient does not depend on z, you get: the second derivative of my unknown flux, or distribution function, with respect to z is equal to a source term, which I can approximate with a delta function because the gas disk is much thinner, so I can write it as delta of z times the thickness of the gas disk; that's the sketch I am reporting there. And then, of course, you may have losses, simply because these nuclei can interact with gas and can die, and these losses are parameterized by this second term: you have the usual, by now you should be used to it, sigma v n, then you have the f, because this is a loss term, then this delta of z, and then you have the 2h, so that it is correctly normalized. This is the equation I can write to simplify my complicated three-dimensional propagation equation in the high-energy regime, once I focus on the only relevant variable. Now, when I am at z different from zero, this equation is very simple: it's just the second derivative of f equal to zero. So I know the z dependence of f: it's just of the form A plus B z, and since it must be symmetric, I can also write it like that, above and below the plane. Now, because I have free-escape boundary conditions, meaning once my particle reaches the boundary I assume that its density goes to zero, I can impose that f vanishes at plus H and minus H, and this is how I determine the relative weight of the coefficients A and B, and that's what I get: some unknown function of the momentum times a known function of the spatial variables. Fine. Then I take the derivative of this expression with respect to z, I evaluate it at zero, or if you wish at zero-plus (the same would be true at zero-minus), and I get another condition. Now I have only one unknown function of p, f0 of p, which is the momentum distribution of the particles in the thin disk, but I know how this rescales as a function of z. Now, this has to obey this condition; I
can compute the derivative of f with respect to z, I impose this condition, and I get my result, my equation for f0 of p, and surprise surprise, my result is written exactly the way I told you it should be: a source term times an effective time. So we've recovered this sort of leaky-box idea. Here it's a little bit more subtle, because the leaky box somehow depends on where I am with respect to the galactic plane: it would look like a different time if I am at zero height in the galactic plane, or if I am 1 or 2 kpc above or below the plane. So the toy model I was describing before is correct in this approximation: it is at least coherent with a diffusive approximation where the diffusion coefficient does not depend on the location, it is homogeneous, though it might depend on the energy. And now I can even estimate the effective time; this is the other big advantage with respect to the leaky-box model I was describing before, where I had to determine it by some other clever idea. The inverse of the effective confinement time is determined by two timescales. One is the collisional timescale, what I call tau-sigma; it is just the typical interaction time of my particle, 1 over sigma v n, and if I plug in some reasonable parameters, to give you an idea, this is of the order of 10 million years. The other is the diffusion timescale, which I normalize to the same timescale to show you the point at which these two phenomena are comparable, which turns out to be at the level of GeV energies. Now, if you do the same trick that I did before for secondaries, you get exactly the results we got before, because I proved that the solution is nothing but the source term times the effective time: you take the ratio, and the ratio of secondaries to primaries is just the cross section, times the density of the target material, which is known stuff, times the velocity; these
are relativistic particles so it must be c which are unknown over d but what I can do is to take my unknown which depends on the energy or the momentum and this one I can fit to the data of say boron over carbon so I have determined these combinational parameters now if dark matter signal through antiprotons or positrons or what else only depended on the same combinational parameters I wouldn't care more I don't care really what's the value of this effective parameter h I'm done there is no uncertainty if you wish unfortunately if you do solve this little exercise you will discover that even in this simplified model once you compute now what's the flux that you expect in the thin plane if I inject my source term not only in the thin plane so I do not put my delta here but I inject it everywhere the dependence is different why this is important for dark matter because that's what dark matter does there is a huge halo and dark matter injects particles a little bit everywhere not only where the matter is and the results of this simple exercise is that you will discover that your flux scales like the source term which you might know because of your theory times the effective time that you have determined empirically but unfortunately it still depends on things that you don't know because here I only know this combination I don't know the other combination right so unless I find some other tools there are bigger uncertainties in the signal calculations than the ones that I can fix by just observing other astrophysical observables and this is the kind of difficulty that you see just to give you an idea look at the kind of uncertainties that you have once you try to determine the limits from antiprotons these are orders of magnitude and the reason why they are orders of magnitude is that we do not know all the details of the astrophysics of this diffusive medium and if you may wonder why we don't have after hundreds of years of progress in astronomy the reason is that 
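These two timescales, and the GeV-level crossover between them, can be checked with a short back-of-the-envelope script. All numerical values below (cross-section, gas density, diffusion coefficient, halo height) are illustrative assumptions chosen for the sketch, not numbers taken from the lecture:

```python
# Back-of-the-envelope check of the two timescales that set the
# effective confinement time in the homogeneous-diffusion picture.
# Every numerical value here is an illustrative assumption.

YEAR = 3.156e7            # seconds per year
KPC = 3.086e21            # centimetres per kiloparsec

# Collisional timescale: tau_sigma = 1 / (sigma * n * v)
sigma = 3e-26             # inelastic cross-section ~30 mb (assumed)
n = 1.0                   # interstellar gas density in cm^-3 (assumed)
v = 3e10                  # relativistic particles, v ~ c, in cm/s
tau_sigma = 1.0 / (sigma * n * v)            # seconds

# Diffusive escape timescale: tau_D(E) = L^2 / D(E),
# with a power-law diffusion coefficient D(E) = D0 * (E/GeV)^delta
D0 = 1e28                 # cm^2/s at 1 GeV (typical fitted value, assumed)
delta = 0.5               # diffusion spectral index (assumed)
L = 1.0 * KPC             # halo half-height of ~1 kpc (assumed)

def tau_D(E_GeV):
    """Diffusive escape time, in seconds, at energy E_GeV."""
    return L**2 / (D0 * E_GeV**delta)

def tau_eff(E_GeV):
    """Rates add: 1/tau_eff = 1/tau_sigma + 1/tau_D."""
    return 1.0 / (1.0 / tau_sigma + 1.0 / tau_D(E_GeV))

# Energy at which collisions and diffusive escape are equally fast
E_cross = (L**2 / (D0 * tau_sigma)) ** (1.0 / delta)

print(f"tau_sigma     ~ {tau_sigma / YEAR:.1e} yr")   # of order 10 Myr
print(f"tau_D(1 GeV)  ~ {tau_D(1.0) / YEAR:.1e} yr")
print(f"crossover at  ~ {E_cross:.1f} GeV")
```

With these assumed parameters the collisional and diffusive rates cross near a GeV; above that energy the escape term dominates the effective confinement time, which is the scaling claimed above.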
This is one of the rare cases where you want to do astronomy without directionality, and if you want to know where something comes from, without that something keeping its direction, you are screwed, you are completely blind. So the bottom line is that directionality has been the basis of quantitative astronomy since ancient times. That is the only thing I wanted to say here.

Qualitatively, if you now go to positrons and electrons, as many people have been doing, it is much worse than that. Why? Because electrons and positrons also lose energy, through inverse Compton scattering onto interstellar light and through synchrotron emission onto magnetic fields, and unfortunately the timescales for this to happen are shorter than those 10 million years at energies above a few GeV. So it means that basically you are only sensitive to local stuff, and for local stuff all this funny model, where you have an infinite plane and a slab for the galaxy, is nonsense, because the galaxy doesn't look like that: if you average over 100 or 200 parsecs, it looks highly inhomogeneous. So unfortunately, if you want to search for dark matter with electrons and positrons, you have to model in a very detailed way where the sources are and what the medium looks like. What is worse is that some of these sources may be transient, and they might be gone: you don't see them anymore in gamma rays, but cosmic rays take maybe 10 million years to come here, so maybe the sources that you are looking for are no longer visible. The only good thing is that there are sufficiently numerous sources in the galaxy; we know about pulsars, pulsar wind nebulae, supernova remnants, et cetera. If you are brave enough and you try to run a simulation of what the electron flux and the positron flux should look like, well, of course you have huge uncertainties: there is a sort of galactic variance that you cannot control, because it is intrinsically a statistical exercise. OK, but you get that the data and the predictions are roughly comparable to
one another in a reasonable way. So we believe that, at least in an average way, or in a statistical way, the high-energy sources that we see, for example, in gamma rays, in radio waves, in X-rays, are capable of accounting for the fluxes of electrons and positrons. You can still use this to put bounds on dark matter in a more clever way, in the sense that maybe dark matter produces a sharp edge in energy, because dark matter only injects electrons and positrons up to its mass; so if you look for a sort of triangular shape on top of a smooth flux, maybe you can put bounds, and within this approximation you do get some good bounds. Unfortunately, this is only true for some models, only true under some assumptions, so it is not very generic.

OK, so, just to conclude, the question is now: can indirect methods really detect dark matter? Of course, as I have tried to convince you, it is very unlikely that you can trust at 100% an excess arising in only one of the channels I have illustrated. However, with different pieces of information, coming from different directions, that make a coherent picture, you may hope to convince yourself that it is very hard to mimic what you are seeing with an astrophysical signal. This is the best hope we have. Another hope is that at some point direct detection experiments or collider searches find something that looks compatible with dark matter; if this is the case, we more or less have a prior on where to look, and then we can hope to narrow down the uncertainty, for a specific channel, for a specific energy, et cetera, to such a level that you can see some signal. Anyway, even if you are not very happy about these difficulties and these astrophysical uncertainties, you need some indirect detection tool. Why? Say you find in a collider something that looks like missing energy: it looks like it might be some massive particle that does not interact. It does not interact, but on which timescales? Very short timescales: it escapes your
detector, it's lost. How do you know it is stable on cosmological times? How do you know that this is dark matter? Next time you hear that the LHC is looking for dark matter: it cannot look for dark matter. It can look for something that is consistent with being dark matter, if whatever it finds matches other indirect or direct signals. So you still need to go back to this kind of blackboard and derive some signals in direct or indirect detection; otherwise you will never know that you have discovered dark matter in the lab.

And that's my summary, basically. I think that there are reasons to be optimistic, in the sense that we have come a long way since the 80s, when we didn't even know whether dark matter was baryonic or non-baryonic, and maybe ordinary neutrinos could have been the whole of the dark matter. Now we know that we should look for something a bit more exotic. I have tried to illustrate that there are plenty of possibilities in this exotic zoo, and whatever possibility you choose, you first always have to fulfill some empirical criteria: it should not be too collisional, it should not be... and so on. Then, according to the context in which you develop your scenario for searching for dark matter, you have some possible signatures: it could be direct detection, it could be colliders, it could be indirect detection, and then you hope to find converging proofs of something. Now, the good news is that you have many strategies, and in terms of sensitivity they are getting to the level where you should have some chance of discovering dark matter, if the most popular scenarios are correct. The bad news is that the parameter space is so huge that there is no guarantee that the quest will be successful. So this is good or bad news in the sense that you may work on this for centuries, or not.

Now, if you are very pessimistic, remember that this kind of problem has arisen in the past, and you already discovered dark matter once: there was some anomaly in gravity, in the movement of Uranus, and people, just playing with pen and paper, discovered that maybe, if you had some unseen new body, you could account for it. Then these theorists go to their experimentalist friends and, to put it in modern terms, say: look over there and find some planet. And this planet, Neptune, was discovered; here is the proof, here is a picture. Now, just to show that there has always been some politics associated with scientific discoveries: this is a French cartoon mocking the British, because you had Adams and Le Verrier working on the same problem, and Adams could not find the planet, so in the cartoon he finally finds it in the notebooks of Le Verrier. Or, being more open-minded, you can remember that sometimes you find surprises: you don't find what you expect. The same problem arose a few decades later: there were anomalies in the movement of Mercury that nobody could account for. So, OK, the trick worked once: let us assume that there is some dark planet, call it Vulcan. And it was detected, like, several times, for decades, until the detections could not be confirmed, and you had to change your paradigm completely in order to understand what was going on. So don't be too depressed: be patient. Sometimes discoveries take a very, very long time, and sometimes you discover something that you don't expect at all. And on that note, I think I can stop.