I wanted to do something a little bit different from the blackboard talks that we've done so far. Today, I want to talk about indirect detection, so broadly speaking, astrophysical and cosmological signatures of non-gravitational interactions between dark matter and the standard model. Where I want to be by the end of the lecture is that you have some understanding of how we actually compute these signals, what people are looking for, what kinds of objects we can look for these signals in, and in what kinds of channels. I also want to go through and summarize where we currently stand. And time permitting at the end, I'll talk a little bit about a few existing anomalies in indirect detection, by which I just mean signals that we don't fully understand in terms of backgrounds yet, and which could potentially have a dark matter explanation. For the summary of the constraints, it's easier to look at the actual plots on slides than to watch me sketching cartoons of various constraint plots for the next hour and a half. That said, I know that sometimes on slides, calculations can go by pretty fast. So if there's something that I put on a slide and you're like, wait, I don't understand that, I would like more expansion on that, please feel free to raise your hand and say, hey, can you go through that in a little bit more detail? OK. So in general, what do I mean by indirect detection? We've talked before about other detection mechanisms. You've heard in other lectures about ways to look for axion and axion-like particle dark matter. We talked yesterday about the classic direct detection strategy of looking for the recoils of standard model particles due to scattering with dark matter particles, and we talked briefly about collider searches, where you try to produce the dark matter and then look for other particles produced in conjunction with it. So the indirect detection strategy is to say, let's look for the standard model particles produced from dark matter, either by trying to observe those particles themselves or by looking at their secondary effects on other observables. This lets you do a couple of things that are really hard to do in other searches. One is that you can set constraints on the lifetime of the dark matter. From what I've told you so far, we basically know that the dark matter's lifetime is longer than the age of the universe. But it could be significantly longer than the age of the universe. Now, that's obviously not a lifetime that we're going to be able to probe in a collider experiment. If it doesn't decay within the lifetime of the universe, it's not going to decay within our collider. But if it decays with a lifetime longer than the age of the universe, we could potentially tell that by looking for the decay products. If there's a lot of dark matter out there and it's not perfectly stable, some of it is going to decay on a time scale that we can see. In the thermal relic models that we talked about a couple of days ago, annihilation is the observational probe that's most directly linked to the abundance of dark matter. If you find a signal corresponding to an annihilation cross section that lines up closely with that parametric one over (100 TeV) squared number that we pulled out the other day, that would also give you a pretty strong hint as to how the abundance of dark matter was generated.
More generally, indirect detection often lets you get at regions of especially high mass dark matter parameter space that can be hard to probe with collider experiments, because it's hard to make very high mass particles. And it can probe models which may be difficult to see in direct detection because their scattering signal is spin dependent or otherwise suppressed. So the two processes that people mostly talk about in the context of indirect detection are, first, two-body annihilation, which we talked about previously. This is just the interaction that we talked about in the context of the thermal relic freeze-out process, that mechanism for getting the abundance of dark matter. The idea is that two dark matter particles, or a dark matter and an anti-dark matter particle if that exists, collide with each other through some new physics, which is what we would like to understand, and the output that we see is two or more standard model particles. People often write down a two-to-two diagram like this just because if diagrams like this are present, they tend to dominate. As I add more particles to the final state, I get extra powers of the standard model coupling constants, which are usually small (well, small unless it's the strong interaction), and there can also be phase-space suppression factors. But in principle this final state could be anything, and there are plenty of models in which it's actually three-body final states that dominate. Now, those standard model final states could be anything: they could be quarks, they could be leptons, they could be gauge bosons. Because we're talking about astrophysical and cosmological signatures here, in indirect detection you usually can't directly measure those primary products. Most standard model particles decay on time scales that are much shorter than the time scales relevant in astrophysics and cosmology. We're eight and a half kiloparsecs from the galactic center, which is more than 25,000 light years; if a particle is produced by annihilation at the galactic center and it's an unstable standard model particle, the chances are very good it will decay before it gets to us. So what we're looking for in these searches is generally the decay products of those standard model particles: photons, electrons, positrons, protons, anti-protons, neutrinos, anti-neutrinos. And occasionally maybe also heavier nuclei and anti-nuclei. It's pretty hard to form multiple protons and neutrons and have them coalesce together in an annihilation reaction, but it can, in principle, happen. So there's some signal from that, too. On the other side, you can have three-body and higher annihilation processes as well, but unless the number density of dark matter is very high, which requires its mass to be very low, the number density of dark matter particles in the halo is usually low enough that three-body and higher processes are pretty suppressed compared to two-body. And if you do have that extremely light dark matter with a very high occupation number, then you usually don't expect much of an annihilation signal, just because dark matter that's light can't be too strongly coupled to the standard model; it has to be very weakly coupled to avoid thermalization. But you can sometimes hope to see decay signals from that very light dark matter. So the constraints that I show you today are mostly gonna be assuming two-body annihilation.
To a first approximation, if it's three-body or higher, you can't see any indirect detection signals today because the densities are so low. There are some special circumstances in which you might see them, but it involves very strong velocity dependence at low velocities. Okay, so this is one process that people think about in indirect detection. As we talked about on Wednesday, we've already discussed the thermal relic scenario, and that suggests a particular benchmark value for this annihilation cross-section. This is not a guarantee; as we talked about in the problem set yesterday, dark matter might be asymmetric. It might have had a particular annihilation rate in the early universe when there was plenty of anti-dark matter around, while in the present day there might be very little anti-dark matter around, and the observed annihilation rate might therefore be much smaller. But still, this early universe calculation gives us at least a benchmark to search for, which is, as we said earlier, parametrically one over the Planck mass times the temperature of matter-radiation equality, so parametrically one over (100 TeV) squared. If you actually do the calculation carefully and put in all the order-one factors that we neglected previously, then what you find is that the cross-section you need to get the right abundance corresponds to about two times 10 to the minus 26 centimeters cubed per second. It has a very weak dependence on the mass: for dark matter masses in the range where this mechanism works, from MeV up to about 100 TeV, the cross-section you need varies between about 2 and 2.5 times 10 to the minus 26 centimeters cubed per second. So this is not a guarantee. There are plenty of dark matter models where you won't have a signal like this, but if you do see a signal like this, it would be a pretty big hint that you're looking at a thermal relic. And it gives us a target to search for. So I'm gonna show a lot of plots in terms of where we are relative to this cross-section. Now, if the annihilation relies on the presence of another particle in the early universe, whether it's anti-dark matter or something else, then that partner may not still be around today, and the signal may be suppressed today. You can have models where the annihilation is naturally velocity dependent, either suppressed or enhanced at small velocity. That will again decouple the annihilation signal today from what it looked like in the early universe. So effects like this can change the relationship between annihilation today and in the early universe, but still, this is something to look for. But we can also look for decaying dark matter. As I said earlier, this is basically our only way to probe lifetimes that are significantly longer than the age of the universe. In terms of what the signal looks like, it's extremely similar. The first part of the picture is that the dark matter decays; this is the new-physics side of the process, which is what we'd like to probe. What's produced is standard model particles. Once those standard model particles are produced, they'll decay through processes we understand. And the eventual signal is going to be some spectrum of photons, neutrinos, protons, and anti-protons.
The meaningful difference from annihilation, in the context of what you look for in terms of indirect detection signals, is just that for decay processes the rate scales like one power of the dark matter density, whereas for annihilation it scales like two powers of the dark matter density. So for annihilation, it's more important that you look at high density regions, whereas for decay, you just care about the total amount of dark matter in a system, so large regions with lower density can often be a better target. Okay, so for decay, we don't really have a predictive benchmark the same way that we do with annihilation. So long as the decay lifetime is significantly longer than the age of the universe, the decay is probably not gonna play a big role in setting the abundance of the dark matter. But you can do some back-of-the-envelope estimates just to get a sense of what lifetimes are interesting. Suppose you take an EFT approach and say that the decay goes through some higher dimension operator. What would the dimension of my operator need to be for me to get an interesting lifetime? So this is a very toy example, but suppose I had weak scale dark matter, like a classic WIMP, and suppose that the decay occurred through some process suppressed by some high scale, which for this example I'm going to choose to be the GUT scale. Now you can just ask, okay, how would the lifetime scale? Well, if this went through an unsuppressed operator, with no suppression by a high scale, the lifetime would be very short, comparable to the other TeV scale particles we know about, like the top quark, which decays very quickly. Even for a dimension five operator, you can parametrically estimate the decay lifetime like this, and you get a lifetime, again using these very rough numbers, of about 0.1 seconds. So if you're going through a dimension five operator for decay, either you need the mass scale of suppression to be significantly higher than the GUT scale, or you need a very small coupling, effectively, in order for this to be the dark matter. But once you start thinking about dark matter that could have some dimension six operators that allow it to decay away, that break its perfect stability, then you see that, again for these numbers kind of pulled out of the air, for TeV scale dark matter and a GUT scale suppression factor, you get lifetimes around 10 to the 25 seconds, and it turns out that these kinds of lifetimes we can constrain with indirect detection. The universe is about 10 to the 17 seconds old; this is eight orders of magnitude longer, but we would see the standard model decay products of this if they were there. And then as soon as you go to dimension seven operators, it's totally unobservable, and there's no hope, unless it's suppressed by a much lower scale than the GUT scale. Okay, so these are just some numbers to keep in mind. Okay, yep, question. Yeah, right, so this is a really rough parametric estimate, but the assumption here is just that maybe there's some new physics up at the GUT scale, and if there's new physics between us and the GUT scale, it doesn't mediate this kind of decay. So literally all that we're doing here is saying, okay, how many factors of the high scale does the operator have out the front?
For the dark matter particle to decay away, that operator coefficient acts as a coupling; square that power of the high scale to get the rate, and that gives you the dependence on the high scale. Then assume that the other powers of mass that you need to get the dimensions right come from the mass scale of the dark matter itself. This won't be true for all models; it's just a sketch, just to say that it is not that hard, in a framework where decays are suppressed by some high scale, to get rates that we might potentially be able to see. But it's also possible, of course, that the dark matter is perfectly stable, or that everything that allows it to decay is suppressed more strongly than this, and that you won't see a signal. So none of this is guaranteed. Okay, so you can categorize indirect detection searches in a lot of different ways. You can think about which targets to look at, for example. But I wanna start by categorizing based on what kind of particles we're looking for at the end of the interaction, because that changes pretty dramatically what kind of information you can get out of the search. So first I wanna talk about indirect detection based on neutral products of annihilation and decay, so photons and neutrinos. Why does neutral versus charged matter in this case? Well, we live in a galaxy which has pretty significant magnetic fields, and we live inside a solar system where the sun has a pretty significant magnetic field. The result of this is that charged particles produced in dark matter annihilation do not travel a straight line path to us. They can bounce around the galaxy for quite a while, and by the time they arrive at our detectors on earth or in orbit around earth, the direction that they're coming from has been scrambled pretty comprehensively. Even if they're all coming from one point source in a particular direction, the predicted level of anisotropy is at the less than 1% level if the source is comparable in distance to the closest pulsar, which is among the closest astrophysical point sources of charged particles that we know about. So that means that if you see a signal in charged particles, it's very hard to trace back to where it came from. Neutral particles are nice because you can tell where they came from, and in particular you can tell if they came from regions where we think there's a high dark matter density, which makes it significantly easier to separate signals from background. Okay, so this is what I just said. Photons and neutrinos travel to us in straight lines, although depending on the frequency of the photons, they might get absorbed. As the universe expands, they're going to redshift; that's true for neutrinos as well. So most of the time we don't necessarily know how far away they came from, but we can mostly get two-dimensional information on where they came from, in theta and phi on the sky. And sometimes we can get 3D information. Okay, so that's helpful for separating signal from background. So if I wanna ask how strong my signal from a given source is, let's imagine this: I wanna do a calculation. I tell you that there's a clump of dark matter off in a dwarf galaxy of the Milky Way, I tell you that I've got some dark matter annihilation cross-section of two times 10 to the minus 26 centimeters cubed per second, and I wanna know how many photons I see at the earth.
And you have your favorite dark matter model that tells me how many photons it produces per annihilation. How do I do that calculation? This depends on the dark matter content of the object that you're looking at, which we parameterize by what we call the J factor. So let's just do the math. What we're imagining here is we have our observer here at earth. We have a source of dark matter over here, which might be a point object but could also be some extended object. Then what I wanna do is ask how many photons I will get from some volume element within this object, and then I can integrate over all the volume elements within the object to get the total signal. So this is my dV. And let's say that within this volume element dV, the dark matter density is rho. Okay, so the rate of annihilations per unit volume per unit time is gonna scale like the dark matter number density squared. As we talked about previously in the context of the Boltzmann equation, there's a factor of a half out the front because these are identical particles, so we have a symmetry factor there. So it's the number density squared times the annihilation cross-section, and we can rewrite that in terms of the mass density as one half rho squared over m_DM squared times sigma v. So this is the number of annihilations per unit volume per unit time. Now we wanna know how many photons are produced per annihilation. That's a particle physics question. It comes from your particle physics model, which tells you what the dark matter annihilates to, what its branching ratios into the different standard model final states are, and how those particles decay, which you compute using Pythia or Herwig or your favorite showering program. So let's just characterize the spectrum of photons produced per annihilation as dN/dE. Okay, so then we can multiply these together to get the spectrum produced per unit volume per unit time. So the number of photons per unit energy per unit volume per unit time coming out of this volume element looks like this. So now, how many of those do we see? Well, that's how many are produced. Let's assume a steady state here: it's been radiating for a long time and will continue radiating for a long time. So if the distance between us and this target volume is r, the photons are just gonna spread in all directions. We will ignore absorption for the moment. Suppose we've got a detector here, and this detector has an area of A. We'll assume the distances are long enough that our area is pretty small compared to the sphere of radius r. Then the fraction of the photons that we see is just the area of our detector divided by four pi r squared. It's just a geometric factor; most of the photons don't hit our detector. Okay, so then this gives us the number of photons per unit energy (because we're working with dN/dE) per unit time that is incident on a detector of area A, and to make it not per volume anymore, we've multiplied by the volume of this volume element. So then we just wanna take this object and integrate along this line of sight, so we integrate over all possible choices of dV. That will give us the total signal along this particular line of sight. If we want the signal from the whole object, we can then also integrate over a range of possible lines of sight.
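Just to collect those pieces in one place, here is a minimal sketch, in Python, of the contribution from a single volume element. The function name and the unit choices are mine, not from the slides, and the factor of one half is the identical-particle symmetry factor discussed above.

```python
import math

# Minimal sketch of the flux contribution from one volume element dV at
# distance r from the detector, assembling the pieces described above.
# Units assumed here: rho in GeV/cm^3, m_dm in GeV, sigma_v in cm^3/s,
# dnde in photons/GeV per annihilation, dV in cm^3, r in cm.
def dflux_dE(rho, m_dm, sigma_v, dnde, dV, r):
    # annihilations per cm^3 per second (the 1/2 is the identical-particle factor)
    ann_rate_per_volume = 0.5 * (rho / m_dm) ** 2 * sigma_v
    # photons per GeV per cm^2 per second reaching a distant detector of unit area
    return ann_rate_per_volume * dnde * dV / (4 * math.pi * r ** 2)
```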
Okay, so for this it's gonna be pretty convenient to work in spherical polar coordinates that are centered at us. So r is just the radial coordinate here, plus theta and phi. We can choose our z-axis pretty arbitrarily. If you're working in our galaxy, you often choose the z-axis of your coordinate system to point towards the galactic center, just because we expect the dark matter halo to be pretty symmetric around the galactic center, so then there's no phi dependence in the signal. But if you wanna look at some other object, you could choose your z-axis to point at the center of that object; that's pretty arbitrary. So doing this integral, if we look at the number of observed photons per unit energy, per unit time, per unit area, we get a result that looks like this. Our volume element is r squared d-omega dr, and that r squared factor from the volume element cancels the one over r squared in this expression. So we just end up with a result that depends on the integral of the dark matter density squared, integrated over the line-of-sight distance that I'm looking along and over some solid angle wedge d-omega. So we can split this up, as we did for direct detection when we separated out the astrophysics part, the nuclear physics part, and the particle physics part; we can do the same thing here. The cross-section, the spectrum per annihilation, and the mass of the dark matter give us our particle physics piece. This integral of the dark matter density squared along the line of sight and over the relevant solid angle is governed by the astrophysical uncertainties on the dark matter density, and we call it the J factor. Now, there are various conventions in the literature for whether you put the one over eight pi or one over four pi inside the J factor or in the pre-factor multiplying the J factor. So if you're not sure, just check which convention the paper that you're reading is using, because there's not a fully consistent convention in the literature on this, and we have seen people set constraints that are wrong by a factor of 10 because they used inconsistent conventions. For decaying dark matter we can run exactly the same argument again. The only thing that changes is that the dark matter decay rate is set by rho, not by rho squared; it scales with one power of the number density, not two. And there's no factor of a half out the front, because we don't have the symmetry factor for identical particles. So we end up with a result that looks like this. This is for decay with a very long lifetime, so the decay rate is just one over tau, and you don't need to worry about the density changing at an order one level due to the decay. Yeah? Yeah, that's right. So the factor of a half assumes identical particles in the initial state. That would be true for, say, Majorana fermions that can annihilate against each other, or real scalars. If you've got dark matter and anti-dark matter and they're symmetric, so you have equal quantities of both, then the rate is the amount of dark matter times the amount of anti-dark matter. So in terms of the total dark matter density, if they're symmetric, then it's n over two times n over two, so it's actually n squared over four. Sorry, yeah.
So what that means is that you need twice the cross section for Dirac dark matter compared to Majorana dark matter to get the same signal. But it's also true, for exactly the same reason, that the thermal relic cross section for Dirac dark matter is four to five times 10 to the minus 26 centimeters cubed per second instead of two to 2.5 times 10 to the minus 26 centimeters cubed per second. If you have higher numbers of particles in the initial state, similarly, your symmetry factor may change. If you have asymmetric dark matter and there's essentially no anti-dark matter around, then the rate should still be the amount of dark matter times the amount of anti-dark matter, but that can be much smaller than rho squared divided by m squared. Good question. Okay, so these objects are called J factors; the decay version is sometimes also called a J factor and sometimes called a D factor. You'll see these quoted in papers, because they allow you to characterize basically how good an object is as a target for dark matter annihilation or decay searches without saying much about the particle physics. There's one extra assumption here, which is that the cross section doesn't depend on where you are, that it's independent of spatial position. That's not always true. If the cross section depends on how fast the dark matter is moving, for example, then it can vary over these volume elements; it can vary along the line of sight, and it can be different in the galactic center than in the outer halo. In that case, your sigma v has to go inside this integral, and you can't separate the particle physics and the astrophysics quite so cleanly. But if the cross section is approximately constant at low velocities, which is generic behavior for contact interactions, then you can do the separation. Okay, so how big are these J factors for objects that we might want to look at? So as we said back in lecture... yeah, question. Let me say something about that on the next slide. So there are basically two cases you have to worry about. For the moment, I just assumed the photons are propagating at the speed of light; well, it actually doesn't matter how fast they're going if it's steady state, but this does assume that they propagate losslessly in straight lines. If they get absorbed, you can basically stick an extra absorption factor into this that depends on how far they've gone. So if they have a characteristic absorption length R naught, then you can stick an e to the minus r over R naught factor inside this integral. If you need to take into account redshifting, so that their energy is changing as they travel, I will say something about that in a couple of slides. However, it's worth noting that for a lot of the photons we look at, like for WIMP searches, you often want gamma rays at the sort of GeV to 100 GeV level, comparable to the mass of the dark matter, and gamma rays at those energies pass through our galaxy with essentially zero absorption. So how important those absorption effects are depends strongly on what energy range you're looking at. Okay, so suppose we ask how big these J factors are. Back in lecture one, we talked a bit about the halos of dark matter that surround galaxies.
We talked about the profiles of those halos, how in dark-matter-only simulations we expect them to rise steeply towards the galactic center, following a profile that's something like this Navarro-Frenk-White profile, but how once you turn on baryons, you may flatten out that profile at small distances. It's still not really well understood from simulations what we expect this to do for the Milky Way galaxy, and it's also not super well constrained from observations. We think that if this profile flattens out, it probably doesn't do so until you're within a couple of kiloparsecs of the galactic center, but whether it flattens out at two kiloparsecs or 0.1 kiloparsecs or 0.01 kiloparsecs makes a big difference to the dark matter density, because according to the NFW profile, the density would be scaling like one over r. So let's just assume for the moment that this profile extends all the way in to the center of our galaxy. Then, as we said yesterday, you can estimate the amount of dark matter in the local neighborhood of the Earth to be about 0.3 to 0.4 GeV per cubic centimeter. If we take those values and extrapolate into the center of the Milky Way, we find that the region within about a degree of the galactic center on the Milky Way sky has a J factor of 10 to the 22 GeV squared per centimeter to the fifth. Now, that number in a vacuum probably doesn't mean very much, but we can go ahead and calculate how many photons we would expect to see for a sort of classic thermal dark matter annihilation cross-section and a 100 GeV dark matter particle. The halo of the Milky Way also contains small sub-clumps of dark matter. We said earlier that in a cold dark matter scenario, you form small clumps of dark matter first and then they snowball together to make big halos. The relic of that is that there are some old clumps of dark matter left in the Milky Way halo. They can attract stars to themselves, and they form what are called dwarf satellite galaxies of the Milky Way. Those individual dwarf satellite galaxies have J factors typically between about 10 to the 17 and 10 to the 20 GeV squared per centimeter to the fifth. So what this tells you is that if you have a cuspy profile, the galactic center is generally where you expect to see a signal first. The J factor is higher, so the signal is larger there, but the backgrounds are also much larger there. So to see a signal that you really believed was dark matter, or to set robust constraints, it's often better to look at the dwarf galaxies, just because the backgrounds are smaller. But okay, let's think about the galactic center, not worry about backgrounds for the moment, and just ask how many photons we would see. Yeah, okay, good. So the question is where the best fit measurements of the dark matter profile of the Milky Way come from. At large distances, out beyond the location of the Earth, you can use rotation curve measurements. For the local dark matter density, I think the best measurements come from looking at stars that are moving up and down through the disc of the Milky Way in our relatively local neighborhood, within a few hundred parsecs. That gives you some measure of basically the surface mass density in our region, and then you try to subtract off the baryonic disc, and that's where a significant amount of the uncertainty comes in.
The problem is that as you go closer in to the galactic center, the baryons increasingly dominate over the dark matter, and so you can do a decent measurement of the total mass density, but extracting the dark matter density increasingly becomes a matter of subtracting off a large number from a large number to measure a small number, and the baryonic density is not super well known. So what people do is say, okay, we're going to try different models for the distribution of the baryons and exactly how much mass there is in the baryons. Then, if we put a dark matter model on top of that which has this NFW-like profile, let's fit for the parameters of the profile. People often also try a generalized NFW profile where the inner slope is allowed to have a value that's somewhat different from one. What you find when you do these analyses is that the best fit for the slope, over all radii within r_s, which is about 20 kiloparsecs, is consistent with one, but I think the error bar is something like 0.8 to 1.4 in these analyses, and the density does appear to be rising towards the center. I can point you to the actual papers on this; the numbers I'm quoting are numbers I remember from a study that I read some years ago, so you shouldn't take them super seriously, but I can point you to the references. Most of the constraining power on that number is coming from the slope at something like 10 to 20 kiloparsecs from the galactic center, so the profile could be turning over closer to the center. You can get some measurements in to about five kpc from the center, and then within that, the best measurement that I know of is a study from a few years ago looking, I think, at red giant stars in the bulge of the galaxy, which probed within about two kpc of the galactic center. They came up with four different models of how the baryonic matter was distributed and how the dark matter was distributed, which they said all looked pretty consistent with the data. So what you can do is say, okay, what do those four dark matter models correspond to? Among those four dark matter models, what's the minimum amount of dark matter and the maximum amount of dark matter in the inner galaxy? Now, these four models are not necessarily a bracketing set, so the truth could be larger or smaller than that, but all four seem pretty consistent with the data. The high dark matter extrapolation of those models basically corresponds to extrapolating the NFW profile all the way into the galactic center, and maybe it can even be a little bit steeper than r to the minus one, like r to the minus 1.2 or r to the minus 1.3. The low end of those models corresponds to the dark matter density being completely flat within two kiloparsecs of the center. So the uncertainties on this are still pretty large. We think the density goes up as you move inward from the earth and does not go down, but that's partly a theory bias. In terms of actual measurements, it seems like anything between a core flattening off two kpc from the center, and a profile a little bit steeper than r to the minus one all the way into the center, is pretty consistent with the dynamical measurements. Does that help?
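To make that concrete, here is a rough numerical sketch of the line-of-sight J factor integral for an NFW profile toward the galactic center, written in Python. The parameter values (a 20 kiloparsec scale radius, 0.4 GeV per cubic centimeter locally, 8.5 kiloparsecs to the galactic center, a one degree cone) are just the round numbers quoted in this lecture, not a careful fit, and the crude grid integration is only meant to give the order of magnitude.

```python
import numpy as np

# Rough J-factor sketch for an NFW profile toward the galactic center.
# All parameter values are the round numbers quoted in the lecture.
KPC_IN_CM = 3.086e21
R_SUN = 8.5      # kpc, distance from the Earth to the galactic center
R_S = 20.0       # kpc, NFW scale radius
RHO_LOCAL = 0.4  # GeV/cm^3, local dark matter density

def rho_nfw(r_kpc):
    """NFW density in GeV/cm^3, normalized to RHO_LOCAL at r = R_SUN."""
    shape = lambda r: 1.0 / ((r / R_S) * (1.0 + r / R_S) ** 2)
    return RHO_LOCAL * shape(r_kpc) / shape(R_SUN)

def j_factor(psi_max_deg, n_psi=400, n_s=4000, s_max_kpc=30.0):
    """Integral of rho^2 over the line of sight and over a cone of
    half-angle psi_max_deg around the galactic center, in GeV^2/cm^5."""
    psi = np.linspace(1e-4, np.radians(psi_max_deg), n_psi)  # angle from the GC direction
    s = np.linspace(1e-3, s_max_kpc, n_s)                    # kpc along the line of sight
    ds, dpsi = s[1] - s[0], psi[1] - psi[0]
    # galactocentric radius for each (angle, line-of-sight distance) pair
    r = np.sqrt(R_SUN**2 + s[None, :]**2
                - 2.0 * R_SUN * s[None, :] * np.cos(psi[:, None]))
    los = (rho_nfw(r) ** 2).sum(axis=1) * ds * KPC_IN_CM     # GeV^2/cm^5 per direction
    return (2.0 * np.pi * np.sin(psi) * los).sum() * dpsi    # solid-angle integral

print(f"J(1 deg) ~ {j_factor(1.0):.1e} GeV^2 cm^-5")
# With this plain NFW normalization the answer comes out in the 10^21 to 10^22
# ballpark; steeper or contracted inner profiles push it toward the upper end.
```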
So yeah, anytime you see a constraint from the galactic center, you should ask what people have assumed for the density profile, because especially if you're looking at a very small region around the galactic center, this can make a many-orders-of-magnitude difference in how strong your constraints are. Okay, but let's assume for the moment that we're on the more optimistic end of this, and the density profile is just an ordinary NFW profile rising all the way in towards the center. This is not the maximally optimistic case, because it could be a little bit steeper than this, but it's fairly optimistic. So we wanna ask: what's the rate of photons observed on earth from the galactic center region? We'll take this J factor and multiply it by the cross-section divided by eight pi times the dark matter mass squared. We'll take a cross-section of three times 10 to the minus 26 centimeters cubed per second, which is approximately the thermal relic value and a useful benchmark, and use a 100 GeV WIMP. Now, if you just plug in these numbers, you get a rate of about 10 to the minus nine photons per square centimeter per second at the earth today. So if I have a detector that's 100 square centimeters and I wait for about a year, I expect to see one photon. Obviously, we would like to actually see more than one photon, so you need either a bigger detector or a longer time to be able to constrain thermal relic signals from the galactic center. Now, in terms of our current detectors in space, we have the Fermi-LAT gamma-ray telescope, which is the best instrument for these sort-of-100-GeV gamma rays. It has about a square meter of collecting area, and at this point the data set is about 10 years. So you can raise these numbers by about three orders of magnitude: over the Fermi data set, you would maybe expect of order 1000 photons coming from thermal relic dark matter annihilation within a degree of the galactic center. It's hard to have a space telescope that's dramatically bigger than about a square meter. With Earth-based gamma-ray telescopes, you can make much bigger detectors: we currently have telescopes with effective areas in the ballpark of 10 to the four square meters, and future plans are to go up to 10 to the five and 10 to the six square meters. So then you'll expect to get more photons, but those telescopes generally can't see photons that are lower energy than about 100 GeV. Okay, so what about including redshifting? How do we adapt these J factors when the photons come to us over cosmological distances and the redshift can't be ignored? Basically, there are two effects here. We need to evaluate the source spectrum at an energy that is higher than the energy we observe today, because the emitted energy is not the same as the observed energy. And we can recast the integral in terms of the redshift rather than the radial distance; the easiest thing to do is to work in co-moving coordinates and basically redo this calculation, taking into account that for a given co-moving volume, the corresponding physical volume is smaller at earlier times by a factor of one over one plus z cubed. So this is the expression that you get. I'm not gonna go step by step through the derivation, but the differences relative to the previous case are basically this one over H of z times one plus z cubed factor.
The H of z factor is doing the conversion between an integral over time or distance and an integral over redshift (time and distance are the same for a photon, because photons travel at the speed of light), and the one plus z cubed factor is taking into account the expansion of the universe and the dilution of the photons produced at early times by that expansion. Okay, so where do we look? We've just talked about dwarf galaxies and the galactic center. Dwarfs have a low background, and they're relatively close to us in cosmological terms; they're satellites of our galaxy. The galactic center is also nearby, and it has probably the largest signal of any region we would expect to look at. Unfortunately, it also has a high background, because there's a lot of stuff going on at the galactic center that can produce photons and neutrinos and other particles. Constraints from this region are also very sensitive to what the dark matter density is doing, as we just discussed. This matters less if you see a signal: if you see a signal, you can measure the spatial distribution of the signal and infer from that what the dark matter density profile would have to be. But for setting constraints, it means that you can get nominally very strong constraints that rely on a very strong assumption about the dark matter density. We can look at the galactic halo, so not just at the center of the galaxy, but all across the galaxy. That has a large area and is very nearby; we're in it. Again, though, the backgrounds are reasonably complicated. We talked a bit about this in one of the earlier discussion sessions: high energy cosmic rays interacting with the gas and starlight of our galaxy produce a complicated and structured background at gamma ray energies, and similarly at lower energies. The galaxy is full of gas and light, and that can give you non-trivial photon backgrounds. We can look at other galaxies and clusters. These are further away. Galaxy clusters have a very large dark matter content, and they can also potentially give us redshift information: if we know the signal came from a particular galaxy cluster, we know how far away it is, we know that its energy should be shifted relative to signals produced nearby, and we can use that as a consistency check on apparent dark matter signals from different sources. These searches are really sensitive to how many of those small clumps of dark matter you have within the larger halos, because you're essentially integrating over the whole halo; we refer to that as the amount of substructure. We can also look for those clumps of dark matter in our own galaxy. There are potentially lots of them, and they could probe the small scale structure of dark matter. The problem is that if they're too small to attract stars, we don't actually know where to look, so this kind of search is a blind search for hotspots in photons across the sky, and then the trick is convincing yourself that what you're seeing is dark matter clumps as opposed to point sources of photons coming from, for example, stars. And we can look at just the background radiation of the universe. We see photons streaming to us from all redshifts back to about redshift one thousand. Dark matter annihilation or decay that produces photons will certainly contribute to this. Again, the level of the contribution depends pretty strongly on the number of small scale halos that you have. So these are, broadly speaking, our choices for these neutral particles.
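Coming back to the galactic center rate estimate from a couple of paragraphs ago, here is the arithmetic written out as a quick sketch. The inputs (a J factor of 10 to the 22, a roughly thermal cross-section, a 100 GeV WIMP, about a square meter of effective area and ten years of data for Fermi) are the round numbers quoted above; the photons-per-annihilation factor is left as an order-one placeholder, since that really comes from your particle physics model.

```python
import math

# Back-of-the-envelope photon rate from the inner galaxy, using the round
# numbers quoted in the lecture. N_PER_ANN is a placeholder for the number
# of photons per annihilation landing in your energy window; the real value
# comes from the particle physics model (e.g. via Pythia/Herwig tables).
J_GC      = 1e22    # GeV^2 cm^-5, ~1 degree around the galactic center (NFW-ish)
SIGMA_V   = 3e-26   # cm^3/s, roughly the thermal relic benchmark
M_DM      = 100.0   # GeV
N_PER_ANN = 1.0     # photons per annihilation (placeholder)

flux = SIGMA_V / (8 * math.pi * M_DM**2) * J_GC * N_PER_ANN  # photons / cm^2 / s
print(f"flux ~ {flux:.1e} photons cm^-2 s^-1")               # ~1e-9, as quoted above

# A Fermi-LAT-like exposure: about a square meter of area, about ten years.
area_cm2 = 1e4
t_sec = 10 * 3.15e7
print(f"expected photons ~ {flux * area_cm2 * t_sec:.0f}")   # of order a thousand
```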
So what kind of limits can we set? I'm gonna talk about the gamma ray band first; this is the high energy band. The Fermi gamma ray space telescope looks at photons between about 100 MeV and a TeV. Then at higher energies, we have ground-based telescopes like HESS and VERITAS. These are air-Cherenkov telescopes: the way they work is they look for high-energy photons hitting the atmosphere and producing a shower of Cherenkov light. HAWC is a water-Cherenkov telescope array, so it uses water tanks to look for the Cherenkov showers, and that covers the energy range from a few TeV up to 100 TeV or even a PeV. So in particular, if you're thinking about dark matter models at the TeV mass scale or higher, which are often pretty hard to get to at the LHC, these experiments can often say something interesting about those models. Okay, so as I said previously, dwarf galaxies are low-background systems, so in some ways they're places where we can get relatively robust constraints on the annihilation rate. The way that you actually do this is you look in the region of the dwarf galaxy, you model the background, and you say, okay, I'm going to put in a model which corresponds to a bump in the gamma ray emission at the location of the dwarf galaxy. I'll fit for the normalization of that bump, either energy bin by energy bin or putting in a specified spectrum coming from my dark matter model, and then I will do a likelihood analysis that tells me how large a dark matter signal I'm allowed. So the Fermi collaboration, which is the collaboration of experimentalists who work on the Fermi gamma ray space telescope, which was launched in 2008, has presented limits based on 45 dwarf galaxies and candidates. Their last analysis on this was in 2017. These are, as far as I can tell, the strongest robust bounds on sub-TeV dark matter annihilating to standard model particles which produce a lot of photons when they decay. That category is pretty broad. It includes essentially any quark final states: those get hadronized, so they make pions, they make protons, they make mesons of various kinds, and then those particles decay. It turns out that neutral pions, which are some of the lightest available states and so tend to get made pretty copiously, decay about 99% of the time into gamma rays. So basically any channel that makes quarks, any channel that hadronizes, is going to give you a photon signal, and this will be a good way to test for it. The whole Fermi data set is public; any of you can download it and play with it at any point if you like. They also made the likelihood functions from this analysis publicly available, energy bin by energy bin, which means that you can put in the gamma ray spectrum from your very favorite dark matter model and get the precise constraint out. So I'm going to show you examples for a couple of different sample standard model final states, but the tools are all available to get the constraint on any final state spectrum that you like. The higher energy telescopes like VERITAS and MAGIC and HESS have also looked at the dwarf galaxies and also set constraints on these channels. They can do better than Fermi at pretty high energies, around a TeV and above; at lower energies, it's hard to compete. So this is what the limits look like.
So on these plots, the y-axis is the cross section in centimeters cubed per second. This red line is that thermal relic cross section benchmark; again, this is not compulsory, the true answer could be above or below it, but it's a handy benchmark to look at. And these four plots correspond to annihilation dominantly into four different standard model final states: to b quarks, to W bosons, to tau leptons, and to muons. Now, in the first three cases, the final states decay at least in part hadronically: they make pions, which decay making gammas. In the muon case, the muons decay just making electrons, so there's not that much of a photon signal, and you'll see the limits are weaker there. Where that constraint actually comes from is three-body final states: either you produce mu plus mu minus and also a gamma, or when the muons decay to electrons and positrons, they produce gammas in the final state as well. So it's just final state radiation. So if we look at this example of annihilation to b quarks, this black line is the limit; everything above it is excluded. You see that it crosses this thermal relic benchmark line at a mass of about 100 GeV. So broadly speaking, based on the stacked analysis of 45 dwarfs, if you annihilate with the thermal relic cross-section to one of these hadron-rich channels and your mass is less than somewhere in the ballpark of 100 GeV (depending on the channel, it may be a little bit different), then you're excluded, because you should have seen a signal in the dwarfs already. At high masses, these constraints are coming from the MAGIC telescope, which is a high energy gamma-ray experiment. The dashed line up here shows you what you would get with MAGIC alone, and you see that it starts to become important for the constraints at mass scales above about a TeV. Okay, so that's where we stand at the moment. Yeah? Sorry, how do we get the limit on the total cross-section? Good. So just from these plots, you won't get the limit on the total cross-section unless you have a large branching ratio into one of these channels. If your branching ratio is 95% to b quarks, it's probably a pretty good approximation just to take this limit. If instead you're, say, 10% to b quarks, 10% to W's, 10% to electrons, and so on, what you should do is extract the total photon spectrum that you would expect from your model: on average, per annihilation, how many photons do you make? Then look up this website. It has the likelihoods energy bin by energy bin, so it tells you, in each energy bin, how disfavored it is to have a certain number of photons on top of the background. You can then use that likelihood to get the limit on any photon spectrum that you like and any combination of branching ratios that you like, and they have instructions on how to do that in the paper; the sketch just below lays out that recipe schematically. But just from these plots, you could sort of guess at the answer: you see that for annihilations to taus or to W's or to b quarks, the limits are all roughly similar, and again, that's because they all produce hadrons and those hadrons all produce pions. So the limits are around 100 GeV, and approximately any combination of these channels is gonna have roughly the same limit. But if you really care about whether it's 100 GeV or 150 GeV, then you should do the likelihood calculation. And if you're at two TeV, you're not gonna have a limit.
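To lay out that branching-ratio recipe, here is a sketch of the bookkeeping. Everything marked as a toy below is a made-up placeholder of mine: real per-channel spectra come from Pythia/Herwig-style tables, and the real bin-by-bin likelihood curves come from the files the Fermi collaboration released; this just shows how the pieces combine.

```python
import numpy as np

# Schematic recipe: combine per-channel spectra with branching ratios,
# convert to expected counts per energy bin, and sum bin-by-bin
# delta-log-likelihoods. The spectra and likelihoods here are toys.
e_centers = np.logspace(0, 3, 24)                   # GeV (toy binning)
branching = {"bb": 0.5, "WW": 0.3, "tautau": 0.2}   # your model's branching ratios

def toy_spectrum(channel, e, m_dm):
    """Toy stand-in for the photons-per-annihilation spectrum dN/dE [1/GeV]."""
    x = e / m_dm
    return np.where(x < 1.0, np.exp(-8.0 * x) / (m_dm * np.maximum(x, 1e-3)), 0.0)

def toy_delta_loglike(expected_counts, per_bin_sensitivity):
    """Toy stand-in for the released per-bin likelihood curves."""
    return 0.5 * (expected_counts / per_bin_sensitivity) ** 2

def total_delta_loglike(sigma_v, m_dm, j_factor, exposure_cm2_s, sens):
    # branching-ratio-weighted total photon spectrum for this model
    dnde = sum(br * toy_spectrum(ch, e_centers, m_dm) for ch, br in branching.items())
    # expected dark matter counts in each bin: flux times bin width times exposure
    counts = (sigma_v / (8 * np.pi * m_dm**2) * j_factor
              * dnde * np.gradient(e_centers) * exposure_cm2_s)
    return np.sum(toy_delta_loglike(counts, sens))

# Scan sigma_v upward until 2*delta(lnL) crosses 2.71 (a one-sided 95% CL criterion).
m_dm, j, expo, sens = 100.0, 1e19, 1e4 * 3e8, 25.0  # toy inputs
for sv in np.logspace(-27, -22, 300):
    if 2.0 * total_delta_loglike(sv, m_dm, j, expo, sens) > 2.71:
        print(f"toy 95% CL limit: sigma_v < {sv:.1e} cm^3/s")
        break
```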
Again, if you're at two TeV with a thermal relic cross-section, there's not going to be a limit from this analysis. If you're at one GeV, there's another limit that's relevant, which I'll come to. There were some questions up there. So, no, not really: this is pretty independent of the cosmological parameters, because we're not getting the dark matter density in the dwarf galaxies from the cosmological dark matter density. We're getting it from looking at the orbits of the stars in those galaxies and using that to infer how much mass there is. Okay, yes, sorry. I'm trying to remember what they denote by H naught, but it doesn't mean Hubble. I don't remember what it actually stands for in this paper, but these are just the median and 68% and 95% confidence bands based on statistical fluctuations. Yes, the question is whether we can compare these plots in a model-independent way with collider searches. Unfortunately, the answer is no. Comparisons between this kind of plot and collider searches or direct detection searches are pretty model-dependent, just because the energy scales at which these processes happen are extremely different. The place where you get closest is annihilation to quarks and gluons, where you might say, okay, maybe we can turn this around and think about the inverse process; but the kinematics are really different. Here you have a completely non-relativistic initial state, so in collider searches you're not just turning the diagram around, you're also changing the kinematics, and if there's a non-trivial energy dependence, that can change the answer a lot. You can do this at some level: you can write down a class of effective operators, like four-fermion operators, that describe two dark matter particles coming in and two quarks going out, and then ask how those four-fermion operators are constrained in a situation like this and in a situation relevant for the LHC. Provided that you're in a regime where the effective field theory is valid at the LHC, you can do comparisons at that level. Or you can do a simplified model comparison, where you cook up a specific simplified model and ask how it's constrained by these two kinds of searches. But which is more constraining can vary very strongly depending on which operator you take and which simplified model you take, so there's not really a model-independent way to do the comparison. Okay, so these are basically the same plots again from the Fermi dwarf analysis. You also have, as I mentioned earlier, dwarf constraints from VERITAS at higher masses. Again, it's hard to compete with Fermi, but once you get up to the multi-TeV scale, to the 10 TeV scale, you have some stronger constraints in some channels from VERITAS. You can also, at very high energies, get constraints from HAWC, which is the water-Cherenkov telescope I mentioned. So this is now going up to the 100 TeV scale.
These constrain cross-sections at the level of about 10 to the minus 24 centimeters cubed per second, so this is not reaching the thermal relic benchmark; but if you have models where the annihilation in the present day is much higher than thermal, then these constraints may be important to you. You can do the same kind of thing for dark matter decay. This is a nice analysis from a couple of years ago, and basically the bottom line for decaying dark matter is that through a combination of dwarf galaxies, galaxy clusters, the extragalactic gamma ray background, and the Milky Way halo, we can set lifetime lower limits of about 10 to the 27 to 10 to the 28 seconds for dark matter in the range between about 10 GeV and 10 to the 10 GeV, for these photon-rich final states, so for anything that decays hadronically, essentially, or decays to particles which subsequently decay hadronically. That's about 10 orders of magnitude longer than the age of the universe. In these decay plots, what's ruled out is shorter lifetimes, since those give rise to bigger signals, so everything below this red region is ruled out. Once you go down to light dark matter, none of the gamma ray telescopes that I've told you about are very good at seeing photons below a GeV. Fermi is the best, but it goes down to a few hundred MeV and then loses sensitivity pretty sharply. Much below a GeV, though, your range of standard model particles you can annihilate into is also significantly more constrained. The options you have are electrons, photons, or neutrinos; those are basically your three choices. So to the degree that there's a photon spectrum: if you annihilate directly to photons, you just get a photon line at the dark matter mass, and if you produce your photons because you made an electron or positron which then radiates off a photon, the spectrum is quite hard, quite peaked at high energies. This means that searches for photon lines or other sharp features in the photon spectrum are a good way to look for these light dark matter candidates. So we can set limits here. The strongest limits on decay come from studying gamma rays from the Milky Way halo. This is just a limit on the total flux from a bunch of different experiments that have looked at photons in the tens of keV up to GeV range in the Milky Way halo; these are upper flux bounds, and your signal has to fit under them. This is what the limits look like for decay into photons, decay into electrons, and annihilation into electrons. For the decay plots, everything within these colored regions and below is ruled out; for the annihilation plots, everything above this region is ruled out. Let me talk about the decay case first. In the decay case, at these low masses, depending on whether you're decaying dominantly into photons or dominantly into electrons, your minimum lifetime can again be in that 10 to the 27 to 10 to the 28 second range that we just talked about at high masses. If you decay primarily into electrons, the limits are quite a bit weaker, but still, we can exclude decay into these channels for lifetimes seven or eight orders of magnitude longer than the age of the universe, 10 to the 24 to 10 to the 25 seconds.
There are actually also stronger constraints for decay to electrons and annihilation to electrons, which I'll talk about in a little bit, which come from modifications to the early universe. If you inject that many extra electrons into the early universe, you ionize the gas after the release of the CMB, the CMB photons scatter off the extra free electrons, and that leaves an imprint in the cosmic microwave background. This black line on the plot is the CMB constraint, and these black points are the CMB constraints on decay to electrons; everything below them is ruled out. As you go to even lighter dark matter, we have a little bit of a gap in our frequency coverage between a few tens of keV and a GeV. The upper flux limits I just showed you are the best that we have there: above a GeV, the gamma ray telescopes like Fermi are really good at setting strong flux limits, and below about 10 to 100 keV, X-ray telescopes like NuSTAR and Chandra and XMM-Newton go back to setting really strong limits again. In the X-ray range, at 10 to 100 keV, you might say, well, hang on, this is pretty marginal for thermal dark matter; so people often don't look at annihilation constraints in this range, but if you do, the thermal relic annihilation cross-section is very much ruled out. But you can look for decaying dark matter. Sterile neutrinos are a pretty classic example of dark matter that could naturally be in the few keV to few tens of keV range, and sterile neutrinos can decay to produce a neutrino and a photon. So the X-ray telescopes can search for this sterile neutrino signal, and they have expressed this limit in terms of the mixing angle between the sterile neutrino and the standard model neutrino, which is constrained down to the 10 to the minus 10 to 10 to the minus 12 level. That broadly corresponds to lifetimes again up around the 10 to the 29 to 10 to the 32 second range. So again, at this low mass end, we can constrain lifetimes that are more than 10 orders of magnitude longer than the age of the universe. We can look at neutrinos as well. So far I've talked about photon telescopes, but neutrinos also travel to us in straight lines, and they have, in some ways, simpler backgrounds, so they're a nice kind of signal to look for. The difficulty with neutrinos, of course, is that it's hard to see them. Even if you build a very big detector, your effective collecting area can be comparable to a much smaller photon detector: you don't catch every neutrino that passes through IceCube. So these are what the limits look like from Super-K and IceCube, for a bunch of different final states. You can see that pretty much all of these neutrino limits are constraining cross sections that are still well above the thermal relic benchmark, which doesn't mean they're not useful, but you need a cross section significantly higher than thermal relic for this to be important. But they can, in some cases, set the strongest limits for very heavy dark matter, up around the 100 PeV scale. There's one other line on this plot which is kind of interesting: this line called the HESS galactic center bound. You see this goes down into the thermal relic band, which is this gray band, at masses of about one TeV, and it overlaps this Fermi dwarf bound, which is the line I was showing you earlier. You might say, oh, why didn't you talk about this earlier? Well, this is a galactic center bound, as we were talking about previously.
It assumes that the dark matter density keeps rising as one over r right into the galactic center. If you say instead that the dark matter density flattens off within about a kiloparsec of the galactic center, this purple line will move up by quite a large factor; it can be a couple of orders of magnitude.

Okay. So that's what I want to say about photons and neutrinos, modulo the anomalies, which I'll get to at the end; that's broadly where we stand in terms of constraining photons and neutrinos from dark matter annihilation. Now I want to say a little bit about the charged products of annihilation and decay. The difficulty with charged products is that everything I've told you so far works by looking at dwarf galaxies, the galactic center, regions where we think there's high dark matter density, and trying to see a signal there. You can't do that with charged cosmic rays: the trajectories don't point back towards the sources. You can still look for signals where the expected background is very small, or where there's a large signal that overwhelms the background. The expected background is very small for anti-nuclei heavier than a single antiproton; for example, antideuteron searches have an extremely low expected astrophysical background and are being pursued by AMS-02 and the upcoming GAPS experiment. Because you can't point back to the original sites, your main observable is the energy spectrum of the particles, and we can only really measure that in the neighborhood of the Earth, with the exception of Voyager, which is now out beyond the boundary of the solar system and has given us a measurement of cosmic rays in the interstellar medium. These searches all come with significant systematics from the uncertainty in how charged particles propagate through the galactic magnetic fields.

So how do we do these propagation calculations? This is a sketch. Realistically, the way you do them is you take a program like GALPROP or DRAGON, you put in your injected source spectrum, and you propagate it through that program; but this is the basis on which those programs are built. The basic approximation is to say our galaxy is a sea of cosmic rays, diffusing through the galactic magnetic field and losing energy as they propagate, and if I inject charged particles from dark matter annihilation or decay, that's just another source of cosmic rays. What we care about is the steady-state behavior, or at least the present-day energy spectra, of these cosmic rays at the Earth. We get that by solving a diffusion equation with a form that looks like this, where the quantity being evolved is the cosmic-ray density as a function of energy and position; I'll explain the different contributions in a moment. It says the time evolution of the number density of these cosmic rays has a diffusive piece, describing spatial diffusion throughout the galaxy. That is typically characterized by an energy-dependent diffusion coefficient, usually assumed to be the same everywhere in the galaxy out to some boundary: the simple approach is just to model the galaxy as a uniform, homogeneous cylindrical slab, and at the edges of that slab you let the cosmic rays escape freely.
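To have something concrete on the page, here is a minimal version of that diffusion-loss equation, in my own notation rather than exactly what's on the slide: n is the cosmic-ray number density per unit energy, D(E) the diffusion coefficient, b(E) the energy-loss rate, and Q the source term.

\[
\frac{\partial n(E,\vec{x},t)}{\partial t}
= \nabla \cdot \left[ D(E)\, \nabla n \right]
+ \frac{\partial}{\partial E} \left[ b(E)\, n \right]
+ Q(E,\vec{x}),
\qquad
Q \propto
\begin{cases}
\rho_\chi^{2}\,\langle \sigma v \rangle & \text{(annihilation)}\\[2pt]
\rho_\chi / \tau & \text{(decay)}
\end{cases}
\]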
The latest versions of these codes do a somewhat more careful version of this, allowing for some anisotropy and inhomogeneity in the diffusion parameters, but this is still more or less how the structure works. The second term in the equation describes how the particles lose energy as they propagate. That could be due to scattering off the interstellar radiation field and the gas, or due to losing energy via synchrotron radiation as they move through the galactic magnetic fields. Again, this is typically characterized by an energy-loss rate which is a function of energy and which, depending on the sophistication of your code, may or may not depend on position. And then we have the source term, which in the case of dark matter annihilation or decay would be proportional to the dark matter density squared or to the dark matter density, respectively, and in the case of astrophysical sources would follow the distribution of those sources. You can also add more terms to this equation describing things like convection out of the galaxy, fragmentation of nuclei, or acceleration of these particles in supernova remnants, but this is the basic setup.

Dark matter annihilation or decay you'd expect to be a pretty steady source, not a transient one; we think the dark matter has been there in our galaxy for a long time. So if you want the steady-state spectrum, you can get an approximate solution just by saying that the number density should be time independent. As a first approximation, you can assume the number density ends up with a power-law-like energy dependence; that's not a necessary assumption, it's just to get a simple solution. Then we can approximate the spatial second derivative of the number density as the number density divided by some length scale squared. So if we just want to understand parametrically how these results behave, then because the number density isn't evolving, we have a steady-state equation that balances the injection of particles against the current number density multiplied by the diffusion coefficient over the length scale squared plus the energy-loss rate divided by the energy. This is essentially just dimensional analysis. The solution is that the final number density is approximately the injection rate divided by that bracket, which we can also write as the injection rate times whichever timescale is shorter, the diffusion timescale or the loss timescale; the shorter timescale corresponds to the larger rate, so that process dominates the dynamics. Roughly speaking, then, what the steady-state solution looks like depends on whether you're in the diffusion-dominated regime, where the first term is much larger, or the loss-dominated regime, where the second term is much larger. In the intermediate regime it's more complicated, of course, but we can focus on those two scenarios. It turns out that in our galaxy protons are essentially always in the diffusion-dominated regime, and high-energy electrons and positrons are usually in the loss-dominated regime, so they behave fairly differently. In the diffusion-dominated regime, we expect the diffusion coefficient to have an approximately power-law scaling with energy.
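Collecting that dimensional-analysis argument in one place, in the same notation as above, with L the relevant diffusion length scale:

\[
0 \approx Q(E) - \left[ \frac{D(E)}{L^{2}} + \frac{b(E)}{E} \right] n(E)
\quad\Longrightarrow\quad
n(E) \approx Q(E)\,\min\!\left( \tau_{\rm diff},\, \tau_{\rm loss} \right),
\qquad
\tau_{\rm diff} \equiv \frac{L^{2}}{D(E)}, \quad \tau_{\rm loss} \equiv \frac{E}{b(E)}.
\]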
That power-law index is just empirically determined from looking at the spectra of cosmic rays that we see in our galaxy, and we get a value of delta between about 0.3 and 0.7, where the diffusion coefficient scales like E^delta. So in this case the steady-state spectra just look like the injection spectra multiplied by E^-delta: that comes from the diffusion timescale going like one over the diffusion coefficient, which is where the energy dependence enters. So whatever spectrum you inject, if I inject a power law, diffusion will multiply it by something like E^-0.3, reducing the power at high energy.

In the loss-dominated regime, which is what you expect to hold for high-energy positrons and electrons (and you can check this just by plugging in typical values of these parameters and seeing which term is stronger), the dominant cooling mechanisms are scattering off the magnetic field and off the background radiation field, and both of those have a cooling rate that scales as E squared, where E is the energy of the electron. So for a power-law source spectrum of electrons, the steady-state spectrum just looks the same except multiplied by one over E. This means you can get features in the spectrum even when there's no particular mass scale in the injection: you can get turnovers just because you're diffusion-dominated at some energies and loss-dominated at others, and that change in the slope of the resulting power law can look like a bump. Broadly speaking, both diffusion and energy losses make the spectra softer, with more power at low energies compared to high energies. If you take the diffusion equation I wrote down and, instead of an injected power law, put in an injected delta function, you'll find that these effects tend to smear out the delta function, so instead of just a spike at one energy you have more power extending down to lower energies.

A consequence of these effects is that you also get secondary particles during cosmic-ray propagation. As protons pass through the galaxy, they can scatter on the gas, and those scatterings make extra positrons. The source for those secondary positrons is the already-softened spectrum of the propagated protons, so secondary particles generically have softer spectra than their progenitors did: they pay the diffusion-and-loss softening twice, once for the propagation of the progenitor from its source, and again for their own propagation through the galaxy.

So those are, broadly speaking, the ingredients that go into cosmic-ray propagation calculations. Because the approximations of isotropic and homogeneous diffusion are probably not right (the galaxy is not a homogeneous uniform cylinder), and because we don't have great measurements of some of these propagation parameters even within this approximation, there are significant systematics associated with the propagation. The big leading-edge cosmic-ray experiment at the moment is called AMS-02. It's on the International Space Station and has measured a wide range of different cosmic-ray species.
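Before moving on to the data, here is a small numerical illustration of that spectral softening; this is my own sketch, with placeholder parameter values, for a power-law injection spectrum:

```python
# Sketch of the two limiting steady-state regimes discussed above, for an
# injected power law Q(E) ~ E^-gamma.  All parameter values are placeholders.
import numpy as np

gamma = 2.0                      # injection spectral index (placeholder)
delta = 0.4                      # D(E) ~ E^delta, empirically ~0.3-0.7
E = np.logspace(0, 4, 200)       # energy grid in GeV

Q = E ** (-gamma)                # injected spectrum
n_diff = Q * E ** (-delta)       # diffusion-dominated (protons): softened by E^-delta
n_loss = Q * E ** (-1.0)         # loss-dominated (e+/e-), b(E) ~ E^2: softened by E^-1

# recover the effective power-law index of each steady-state spectrum
for label, n in [("diffusion-dominated", n_diff), ("loss-dominated", n_loss)]:
    slope = np.polyfit(np.log(E), np.log(n), 1)[0]
    print(f"{label}: index {slope:.2f}")
# prints -2.40 and -3.00: both regimes end up softer than the injected E^-2.
```

The point is just the scaling: the same injected spectrum ends up with a different slope depending on which regime you're in, which is how features can appear even without any feature in the injection.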
Coming back to AMS-02: for dark matter searches, the most relevant species at the moment are the positrons and the antiprotons, because we would expect dark matter to produce antimatter in equal quantities to matter, whereas in the universe as a whole matter is a lot more abundant. This is what the measurement of the antiproton spectrum looks like, the black points. The colored lines are a range of theoretical predictions from purely background processes producing the antiprotons, and the width of the band reflects some of the systematic uncertainties in the propagation parameters. I think they now have one or two more points on the high end of this.

This plot shows the fraction of high-energy cosmic rays that are positrons. You might remember from a few minutes ago that I said secondaries should generally have a softer spectrum, so less power at high energies, than primaries. This is the ratio of positrons to electrons plus positrons, and you can see that the fraction of positrons appears to go up at high energies. That suggests the positrons have a somewhat harder spectrum than the electrons as a whole, which is pretty surprising, because in standard astrophysical processes we expect most of these positrons to be produced as secondaries from protons interacting with the gas, whereas the electrons we expect to be accelerated in supernova remnants as primaries. So this suggests there's probably some kind of primary source of cosmic-ray positrons out there, and the question of what that source is has been open in the field for the last 11 years or so. That makes it harder to set limits with the positrons, because there's a primary source of positrons out there that we don't understand. Still, you can look for features in the spectrum, and you can set limits on the electron-positron final state that are actually very strong, the strongest limits we have on that final state. This is from a paper from a couple of years ago: the black line is the limit, the green line is the estimate of the systematic uncertainty on the limit, the horizontal line is the thermal-relic cross section, and you can see they cross at around 100 GeV. So using AMS-02 we can get similar constraints on the dark matter mass at the thermal-relic cross section for lepton-rich channels as we do using the gamma-ray experiments for photon-rich channels, albeit modulo these systematic uncertainties.

This plot on the left shows constraints on various channels from the antiproton measurement; this one is for annihilation into b quarks. The red line is the bound from the dwarf spheroidal galaxies that I told you about earlier. The blue line is the nominal bound from AMS-02, without a systematic uncertainty band, from looking at the antiprotons. You see the blue line is lower than the red line, so the cosmic rays here give a nominally significantly stronger constraint than the dwarf galaxies, and would actually rule out the thermal-relic cross section up to several hundred GeV, not just 100 GeV. So these constraints can potentially be really strong; the question is how much you believe them. There are also measurements of these cosmic rays by other experiments, DAMPE and CALET. DAMPE is a Chinese satellite experiment; CALET is, I believe, on the space station.
This is the total spectrum of electrons plus positrons as a function of energy, going up to several TeV. As far as I know this is still somewhat unresolved: both experiments see the spectrum turn over around a TeV, but there are pretty significant differences between them in the slope of the spectrum below a TeV, and it's still not totally understood which of them is right. There may also be some systematic uncertainties in the measurements themselves.

There are also, as I mentioned before, dark matter models for which the strongest limits come from Voyager, which recently passed out of the solar system; the people who launched it back in the seventies were foresighted enough to put a spectrometer on it capable of measuring low-energy cosmic rays. Because it's now out beyond the influence of the sun, it provides a unique measurement of the cosmic rays beyond our solar system, out in the interstellar medium, and in particular it allows you to measure cosmic rays well below a GeV. In AMS-02 you see the spectrum of low-energy cosmic rays drop off pretty sharply, and part of that is because the sun deflects low-energy cosmic rays. So Voyager actually sets the best limits on 10 MeV to GeV dark matter that decays into electrons and positrons. These are the limits, and this was slightly embarrassing for me, because we had written a paper about constraints from the cosmic microwave background on these decays, and there's a range of parameter space where we are beaten by Voyager. It's not very often that you get beaten by a 1970s experiment, but it happened in this case; it's pretty powerful to be able to measure cosmic rays beyond the solar system.

So when you put all this together (there's one thing on this plot that I haven't told you about yet), what you find generally is that hadronic decays produce a lot of photons, so the photon searches, using the dwarf galaxies in particular, are the most effective and most robust at constraining most of the standard model final states. The exceptions are the ones which don't make a lot of photons because they don't decay hadronically: the channels into electrons and positrons, muons and antimuons, and of course neutrinos. The electron and muon final states are tested by cosmic-ray experiments. The least constrained channel that we have at the moment is annihilation to muons; in that case you can still have a thermal-relic cross section with a mass down to about 20 GeV. For most of the others, your mass scale needs to be 100 GeV or higher if you're at the thermal-relic benchmark.

Okay, so now I want to talk about one other kind of constraint. Everything I've discussed so far is pretty local: cosmic rays in the neighborhood of the Earth, and photons coming mostly from objects within our galaxy, although there are some cluster constraints as well. But if dark matter annihilation or decay is happening today, it has probably been occurring over the whole history of the universe. We can't retroactively put a telescope back when the universe was 100,000 years old, but we can look at the cosmic microwave background, and at Big Bang nucleosynthesis through the light element abundances, and use those to probe what dark matter annihilation and decay could have been doing in the early universe.
The advantage of this is that it doesn't rely on modeling how cosmic rays propagate through the magnetic fields of our galaxy, and it doesn't rely on understanding what the dark matter density profile is at small distances.

So, bounds from Big Bang nucleosynthesis. There's a nice review on this; it's a bit old now, from 2010, but I think it's still a reasonable resource, and there are updates, including a nice paper from 2015 that checks some of the assumptions. This is a limit on annihilation into various final states from Big Bang nucleosynthesis, for masses of roughly 10 GeV to a TeV, and it constrains cross sections in the 10^-25 cm^3/s range. So these are not the strongest constraints out there, but they're actually moderately competitive with the constraints we get from looking at our galaxy. You can also use this to constrain dark matter decaying with a short lifetime: if the lifetime is between about 0.01 seconds and 10^12 seconds, it can potentially perturb BBN.

The limits that I like better, my favorites, are the limits from the cosmic dark ages. This is the epoch after the CMB was emitted. The universe was almost completely neutral during this period, and the CMB, emitted at the start of it, free-streams to us through it. If we change this period at all, in particular if we change the ionization level of the gas, we get extra scattering of the CMB photons, and that changes the CMB anisotropies. Just to do a quick estimate of what it takes to see this, you can ask what fraction of the energy stored in the dark matter mass I need to convert into ionizing energy to meaningfully change the ionization level of the universe. The thing that gives you an advantage here is that the ionization potential of hydrogen is much smaller than its mass. If I converted 100% of the mass in hydrogen into ionizing energy, I would have roughly 10^8 times as much energy as is required to ionize all the hydrogen in the universe; in other words, converting one part in 10^8 of the baryonic mass into ionizing energy would ionize all the hydrogen in the universe. Now, there's about five times as much mass in dark matter as there is in baryons, so converting roughly one billionth of the dark matter mass into ionizing energy would be enough to ionize half the hydrogen in the universe. That would be completely obvious; it would be like a brick wall for the CMB. If half the hydrogen in the universe were ionized, the universe just wouldn't be transparent to CMB photons and we wouldn't see them today. That's pretty extreme, but we can do much better, because the CMB has been measured very well at this point: we can actually detect changes in the ionization history at the level of one in a thousand hydrogen atoms being ionized during this epoch. That in turn tells you that we can probably constrain something in the ballpark of one in 10^11 or 10^12 dark matter particles annihilating during this epoch. Okay, so what fraction of dark matter particles should annihilate during this epoch? Well, at freeze-out, an order-one fraction of the dark matter particles is annihilating per Hubble time, right?
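To put rough numbers on that energy-budget argument, here is a back-of-envelope sketch; the factor of five for the dark-matter-to-baryon ratio and the one-in-a-thousand CMB sensitivity are just the round numbers quoted above:

```python
# Back-of-envelope version of the ionization energy budget described above.
E_ion = 13.6e-9        # hydrogen ionization potential in GeV
m_H   = 0.938          # hydrogen mass in GeV
dm_over_baryons = 5.0  # rough dark matter to baryon mass ratio

# Fraction of the baryonic rest mass (as ionizing energy) needed to ionize all hydrogen:
frac_baryon = E_ion / m_H
print(f"{frac_baryon:.1e}")        # ~1.4e-08, about one part in 10^8

# Fraction of the dark matter rest mass needed to ionize ~half the hydrogen:
frac_dm = 0.5 * frac_baryon / dm_over_baryons
print(f"{frac_dm:.1e}")            # ~1.4e-09, roughly one part in a billion

# CMB measurements are sensitive to ~1e-3 changes in the ionization fraction,
# so the constrainable fraction of dark matter converted to ionizing energy is roughly:
print(f"{frac_dm * 1e-3:.0e}")     # ~1e-12, i.e. one part in 10^11 to 10^12
```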
So then if you ask what fraction of dark matter particles annihilates per Hubble time as a function of redshift, you can work it out, and it scales like one power of the redshift, like one plus z. Basically, the fraction annihilating per unit time goes like one plus z cubed, while the Hubble time scales roughly as one over one plus z squared, and the product of those is a factor of one plus z. So if, when the temperature of the universe is about an eV, we can constrain one in 10^11 dark matter particles annihilating, that's equivalent to constraining scenarios where, at temperatures of 10^11 eV, that is 100 GeV, an order-one fraction of the particles was annihilating. So this gives us some hope that we can test thermal-relic cross sections. To do this right, you have to be a bit more careful: you have to work out how the high-energy particles injected by dark matter annihilation and decay actually cool down and lose their energy. But if you do it carefully, you get a set of constraints that look like this; this is what the Planck collaboration put out last year. Again, this is cross section on the y axis and mass on the x axis, and everything above these lines is excluded. The different lines correspond to different standard model final states: the red and pink lines at the bottom are direct annihilation into electrons and photons, and all the other colored lines, which basically lie on top of each other, are every other standard model final state that's not neutrinos.

So how do these constraints compete? You see that they cross the black thermal-relic line at a mass of about 10 GeV for most of the standard model channels, and at maybe 20 or 30 GeV for direct annihilation into electrons and photons. Then you might say, okay, it's nice that you can do this, but these constraints are weaker than the dwarf galaxy constraints I just showed you; they have different systematics, so they're a nice cross-check, but couldn't I already constrain all these channels with dwarf galaxies? The thing is, at lower masses these constraints just keep going: the limit on the cross section is, to a very good approximation, proportional to one over the mass, and the limits continue down to the keV scale, whereas the gamma-ray telescopes can't see photons much below a GeV. So what this tells us is that, roughly speaking, if you've got a thermal dark matter candidate anywhere from the keV scale upwards, and it still has its full thermal-relic cross section today, it had better be heavier than about 10 GeV, and for some channels more like 20 or 50 or 100 GeV. You can also apply these bounds to decaying dark matter (I showed you the limits from that earlier), or to primordial black holes, and to basically any other way of injecting energy.

Looking ahead, we hope that measurements of the 21-centimeter line emission of neutral hydrogen will let us get a better handle on the temperature and ionization level of the universe toward the end of this cosmic dark ages period, around redshift 10 to 30.
That could potentially be really sensitive, especially to light decaying dark matter, which can change the temperature of the gas by a large factor at this time, something we currently don't have very good observational handles on; it would change the strength of this emission or absorption line, and so we could possibly get a handle on it. There was actually a first claimed observation last year, by Bowman et al. with the EDGES experiment. If that observation is correct, it's super exciting, and we'd have to change how we think about cosmology in some important ways; but the chances are probably quite good that it's not primordial, or not correct, that it's either a foreground or some kind of experimental systematic. You could also look at distortions of the spectrum of the CMB away from a perfect thermal blackbody, but the last time this was well measured was around 1990, and a modern experiment could potentially improve the limits by several orders of magnitude. That said, at the moment the predicted signals are pretty far below the current limits, about four to five orders of magnitude below.

Okay, so that's what I wanted to say about the constraints, everything except the possible-signals section. What I've shown you so far is basically where we stand with indirect detection: what we can exclude in terms of dark matter annihilation signals. Broadly speaking, this picture seems to work pretty well; we don't have any signals at the moment that look like really clear smoking guns of dark matter interactions with the standard model. That said, there are a few things that we don't know how to explain. I'm happy to talk more about this in the discussion session this afternoon, but let me just, over a couple of minutes, give you a brief look at some of the highlights.

One signal there's been a lot of interest in is a line at 3.5 keV in X-ray photons. This was first reported back in 2014 in two papers. What they both did was look at a whole sample of galaxy clusters at different redshifts; when you stack the clusters together in a principled way, taking into account the fact that they're at different redshifts (so a real signal from them should be shifted in energy), you find roughly four-sigma evidence for a line-like structure, photons at one energy and not at surrounding energies, at an energy of about 3.5 keV. The simplest dark matter explanation for this is a 7 keV sterile neutrino decaying into a neutrino and a photon, so you see the photon and miss the neutrino. The current status is that this simplest explanation is in some tension with a bunch of limits, because it's super predictive: it says this is a dark matter decay signal, so it only depends on the total amount of dark matter in a system, which is also what we measure through gravitational effects. So you can look at dwarf galaxies, at different regions of the sky, at the Milky Way halo, at stacked galaxies, and see whether the signal shows up in those other places. And the results are mixed.
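As a rough illustration of how one goes between the sterile-neutrino parameters and a decay lifetime (this is my own sketch, not a number from those papers; the prefactor in the radiative decay rate is approximate, and the mixing angle below is just an illustrative value):

```python
# Lifetime of a sterile neutrino decaying to an active neutrino plus a photon,
# using the standard radiative decay rate
#   Gamma ~ 1.4e-29 s^-1 * (sin^2(2 theta) / 1e-7) * (m_s / keV)^5   (approximate prefactor)
def sterile_nu_lifetime_s(sin2_2theta, m_s_keV):
    rate = 1.4e-29 * (sin2_2theta / 1e-7) * m_s_keV ** 5   # decay rate in 1/s
    return 1.0 / rate

# Example: a 7 keV sterile neutrino with an illustrative mixing of sin^2(2 theta) ~ 5e-11
print(f"{sterile_nu_lifetime_s(5e-11, 7.0):.1e} s")  # ~1e28 s, far longer than the age of the universe
```

The decay photon comes out at half the sterile-neutrino mass, which is why a 7 keV sterile neutrino corresponds to a 3.5 keV line.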
What the mixed results look like is this: you appear to see a signal like this in the center of the Milky Way, and also in some regions away from the center, but there are other observations, in regions where according to these papers you would have expected to see a signal, where you don't really seem to see anything. So then you can ask: does that mean it's not dark matter at all? Does it mean it's not dark matter decay but annihilation, or some other dark matter physics process? Or does it mean there are some unaccounted-for systematics in these observations? There's a pretty good review of this by Abazajian from 2017; it's a couple of years out of date now and missing some of the latest studies, but it's a pretty good summary.

One way you could test this is to note that the main astrophysical backgrounds it could be are some kind of structure of lines coming from atomic transitions: for example, when a highly ionized ion comes in, it can exchange electrons with other atoms in the area, you end up with excited states, and as those cascade down you can get a forest of lines in this energy region. With the experiments we have today, those narrowly spaced forests of lines could look like just one line at 3.5 keV. So a way to test this is to do an experiment with better energy resolution. You can also test specifically for a dark matter signal, because we know how fast dark matter is moving in the galactic halo, so you can look for the Doppler broadening that comes from the velocity of dark matter in the halo; if you see that, it's pretty strong evidence that what you're looking at really is coming from dark matter. The problem is, it's hard to build instruments with the energy resolution you need: the predicted width of the line from Doppler broadening is at the level of one part in a thousand. One possible instrument is Micro-X, which my colleague Tali Figueroa-Feliciano works on. You put a very sensitive spectrometer on a rocket, you shoot the rocket up into space, the spectrometer gets to stare at the galaxy for about five minutes, and then the rocket comes back down. At first glance that seems completely insane: these space-based telescopes are up there for years, and you're talking about five minutes. But it turns out that if you don't have to pay the cost of putting something permanently in space, you can actually fly something with a pretty big exposure and a pretty big field of view, and this can potentially have real sensitivity. They did a test flight earlier this year which confirmed that they could get energy resolution in the right ballpark, although not everything worked on that flight, so they're working on improving it; the plan at the moment is for their dark matter flight to be in 2021. So, stay tuned.

Now moving to a completely different energy scale, away from the 3.5 keV line: there's the galactic center gamma-ray excess.
I talked about this quite a bit in one of the earlier discussion sessions, so I won't say as much about it here; you can read my slides and ask me more questions this afternoon if you like. But broadly speaking, this is the one excess I know about where statistical significance is not an issue. The other things I'm telling you about are all two or three or four sigma; this is formally something like 40 sigma. It's an excess of gamma rays coming from the center of our galaxy, and it's consistent with the thermal-relic cross section for roughly 50 GeV dark matter annihilating hadronically. The difficulty is figuring out whether it's dark matter or some other background from this heavily background-rich region. The most likely background explanation is a population of pulsars, which are spinning neutron stars that emit gamma rays. If that's true, then upcoming radio telescopes should have some sensitivity to that pulsar population and could potentially tell us so. If they don't find anything, then we should keep an eye on this.

And there's a maybe-related excess, which is another controversy. I showed you those AMS-02 antiproton spectra before and the resulting constraints. There's actually a claim that there's a little bump in the AMS-02 antiproton spectrum at an energy of around 10 to 20 GeV, which would correspond to dark matter between about 40 and 130 GeV annihilating, again hadronically, with a thermal-relic cross section. That's broadly consistent with what you would need for the galactic center excess, although the error bars in both cases are pretty large. The controversy here is about statistical significance. The first paper claimed a detection with a significance of 4.5 sigma. A second paper said, okay, there's a Bayes factor, which is like an odds ratio, of about 10 to 50, which is not very significant; you should think of that as something like 90 to 98% confidence, so around the two-sigma mark. There are several recent papers: Cholis et al. in 2019, and another paper from the Cuoco et al. authors, claim that the excess is robust and maybe 5 sigma; Boudaud et al. put out a paper just recently saying no, if you really take into account all the propagation uncertainties we talked about earlier in the modeling of the backgrounds, the significance is low. Part of the issue is that, as well as the propagation uncertainties, how significant you think this is depends strongly on how correlated you think the error bars in different energy bins are in the AMS-02 results. The AMS-02 collaboration has not released any kind of covariance estimate for those error bars, but they're clearly not perfectly uncorrelated, because the points don't fluctuate enough for that.

So there's a very optimistic view of the evidence which says: maybe the galactic center excess and this antiproton excess, plus a roughly two-sigma excess of gamma rays in a couple of dwarf galaxies, are all really dark matter signals, all pointing to a universe in which the dark matter is a 50 GeV thermal relic that annihilates hadronically. If that's true, that's fantastic: we would have discovered our first evidence of non-gravitational interactions between dark matter and the standard model, 50 GeV is a great target range for direct detection experiments, and it's well within the reach of the LHC.
There are possible model structures where we would expect to find confirmation in ground-based experiments within the next few years. That said, all of these signals except the galactic center excess seem kind of marginal on the statistical-significance front, and for the galactic center excess we may find out in two years that it's pulsars, because we find 50 radio pulsars distributed just the same way as the excess. So stay tuned. And this last one is almost certainly not a dark matter excess: this is what I told you about earlier, that there are more high-energy positrons than we would expect, so we think there's some primary source of positrons. That could be dark matter, but we now have reasonable evidence that nearby pulsars in our neighborhood are producing TeV-scale and higher-energy positrons, and that is probably the primary source of the positrons we're seeing here. Happy to talk more about it in discussion.

Okay, so where does indirect detection go in the future? Well, as we've seen, there's a lot of uncertainty just in the dark matter distribution. Knowing how much dark matter there really is in dwarf galaxies, finding more dwarf galaxies, ideally closer ones, and getting a better handle on how the dark matter is distributed towards the galactic center would all be very helpful. For some of these searches we also need a better handle on the backgrounds: the galactic center excess is an enormous signal, and it looks very much like what we would expect from dark matter; the concern is just that there are backgrounds in that region that are not well characterized and that could be mimicking it. There are many future missions that will give us more information. In high-energy gamma rays, the CTA telescope is going to do substantially better than the experiments I told you about today. In the MeV-to-GeV gamma-ray band there's a push in the US for a new experiment, which I think has at various points been called AMEGO and ComPair (I'm not sure what the current name is), and which will try to fill in that gap. There's the GAPS experiment, which is going to look for cosmic-ray antideuterons, a very low-background search; if they find anything, it would be pretty exciting. CMB Stage-4 experiments and 21-centimeter measurements will give us a new window on the early universe. And we can look for hints of dark matter in all of them. Thank you very much.