The third lecture by Tracy Slatyer, about dark matter. OK, welcome back from coffee, everyone, for the last lecture of today. I'll wait for people to come in the doors. People at the back, as you come in, can you hear me? OK, good. Yesterday there were a lot of good questions, and people came up to me afterwards to ask other good questions, so keep it up. Asking questions is good. So in this lecture: we've talked about models of dark matter, and we've talked about what we can learn about dark matter from gravitational probes. I'm now going to move on and talk about how we might begin to probe the interactions of dark matter with the known particles. In particular, today I want to focus on searches that use telescopes: astrophysical and cosmological indirect searches. Just to summarize what we talked about yesterday: we said that from various probes, we know the dark matter is there, that it's fairly cold, and that it's collisionless or almost so. We talked about a couple of different classes of models for its properties. WIMPs are heavy and weakly interacting; their relic abundance is determined by thermal freeze-out and controlled by their annihilation rate to the Standard Model. And we talked about axions, which are instead very light; they form a cold, low-momentum condensate, and their abundance is controlled by the initial condition of the misalignment angle as well as their mass, which also controls their couplings to the Standard Model. So we've seen some of the range of possibilities for dark matter. You'll recall the plot from yesterday where I showed the big cloud diagram with many different theoretical ideas for dark matter. One way we might start to distinguish these scenarios from each other is if we could see directly the effects of dark matter interactions with the Standard Model.
So this time I want to talk about what's called indirect detection, by which we mean searching for the visible byproducts of interactions between dark matter and other dark matter particles or the Standard Model. Next time I'll talk instead about Earth-based searches that aim to observe the interactions between dark matter and Standard Model particles directly, which are called direct detection and collider searches. All right. The two general classes of processes that I'll be talking about for most of this lecture are annihilation, and I showed this plot last time, or decay. Schematically, what we mean by annihilation, just to recap: two dark matter particles collide with each other, and something happens. This is what we'd like to find out about: the coupling between the dark matter and the Standard Model. What we want to learn is what this interaction is, and what's on this side of the equation, the dark sector. That collision produces Standard Model particles, and once we get to that step, we know how those particles behave. In general, those particles will decay down and produce long-lived known particles: neutrinos, photons, electrons, positrons, protons, and antiprotons. So this is potentially a source of the cosmic rays that we heard about in the two earlier lectures today. At this schematic level, decay is very similar, although as we saw last time, it can arise from very different physics. As we saw last time, WIMPs don't decay; axions do. WIMPs annihilate; axions don't. So decay comes from very different fundamental physics than annihilation, but in this schematic picture, it's very similar.
We have some novel physics that converts a dark matter particle into Standard Model particles. When those Standard Model particles decay, we can look for the photons, neutrinos, positrons, or antiprotons that come out of this process. What is different between these two, from this phenomenological perspective, is how the signal changes with dark matter density. Decay doesn't rely on any collisions between dark matter particles, so if such a signal exists, it's just going to be proportional to the total amount of dark matter you have in any given system. Annihilation, on the other hand, is a two-body process, so it's going to scale like the dark matter density squared. Okay. So within this general framework of dark matter particles that decay or annihilate or scatter or interact in some other way and produce Standard Model particles, there is a huge host of possibilities for what you might see. There are many different ways to categorize indirect searches, and I think most combinations of these three categorizations have now been explored in one way or another. You might break down searches by the fundamental process: are they searching for annihilation? For decay? For, as we briefly talked about in the first lecture, transitions between states in some dark sector? For processes that involve more particles than annihilation (two) or decay (one)? You could have processes involving three or more dark matter particles. You could have what's sometimes called semi-annihilation, where dark sector particles are produced in addition to visible ones. So you could categorize by the fundamental physics. You could instead categorize these searches by signature.
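Since this density scaling is the key phenomenological handle, here is a tiny sketch of the distinction; the normalizations (lifetime, cross-section) are arbitrary placeholders, not numbers from the lecture:

```python
# Toy comparison of how decay and annihilation signals scale with
# the dark matter density rho.  All normalizations are illustrative.

def decay_rate_density(rho, lifetime=1.0):
    """Decays per unit volume per unit time: one particle per event, so ~ rho."""
    return rho / lifetime

def annihilation_rate_density(rho, sigma_v=1.0):
    """Annihilations per unit volume per unit time: two particles must
    find each other, so ~ rho squared."""
    return sigma_v * rho**2

# Doubling the density doubles a decay signal but quadruples an annihilation signal:
print(decay_rate_density(2.0) / decay_rate_density(1.0))                # 2.0
print(annihilation_rate_density(2.0) / annihilation_rate_density(1.0))  # 4.0
```

This is why annihilation searches gain so much from targets with high central densities, while decay searches just want the largest total dark matter mass in the field of view.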
And from an experimental perspective, that is often what we do, because it determines what kind of instrument we can use to look for them: are you looking for photons, neutrinos, or charged particles? The charged particles can also include heavier objects like antideuterons. There's another class of effects, which I'll talk more about later: secondary effects. Even if you're not looking to see the photons or neutrinos directly, there are still potentially observable effects of dark matter annihilation, because it could provide a source of heating and ionization throughout the history of our cosmos, just as one example. Dark matter accumulating in stars could affect their structure and evolution. Again, there are many examples in that category. This slide is more or less just an invitation for you to go out and search the arXiv and the literature for whichever of these you find most interesting, because there is a wealth of papers on all of these searches. If you're looking in particular at photons and neutrinos, those particles travel in straight lines, so there we can pick a particular target region to look at. We'll talk about several of these: we can look at galaxy clusters, like the Bullet Cluster we talked about before. We can look towards the center of our galaxy, where, if you believe there's a Navarro-Frenk-White-like cusp at the center, you would expect a very large signal. The Milky Way is surrounded by small dwarf satellite galaxies which appear to be extremely rich in dark matter, so you can look there. Or you can look at the isotropic background radiation and search for signatures of dark matter annihilation or decay occurring throughout all of cosmic time. So again, if you pick one entry from each of these columns, I can almost guarantee you there will be a search of that kind.
Okay, so there are many, many possible searches we could do. How on earth am I going to approach this in an hour? Well, one way is to ask: what do we really care about? There are many perturbations I can make to the particle physics model that don't really matter for how I choose to do these searches. What I care about, from the perspective of someone who has to set up an experiment to look for these things (even though I'm a theorist, I'll imagine myself thinking from that perspective for the moment), is: what particles are produced by these dark matter annihilations? What do their spectra look like, and how would I distinguish those spectra from backgrounds like the cosmic rays we heard about earlier? If I want to know where I should look, then one thing I need to know is how the rate changes with dark matter density. Do I care about looking at places where the dark matter density is especially high, and how much is that going to buy me? Does the rate have any other important dependence that I should know about? Somebody asked a question yesterday about p-wave annihilation, which is velocity-suppressed annihilation, where the annihilation rate scales like v squared rather than being a constant. If that's the case, I'd better be looking at regions where the dark matter is moving pretty quickly. Likewise, if I've got some decay with a lifetime that isn't much longer than the age of the universe, I care about that lifetime as well. If what I'm looking at is collisions with baryons or with some other species, then I need to know where to look for the abundance of those other species. So any time you're trying to design a dark matter search of this form, think about these questions.
Another way I could subdivide things is between the "direct" indirect-detection searches, where I'm looking directly for the particles produced by these interactions, and "indirect" indirect detection, where I'm looking for their effects on the universe and the cosmos. If we're looking directly for the particles, then the major division is: are these particles charged or neutral? As we heard earlier, cosmic rays diffuse in galactic magnetic fields. Dark matter annihilation or decay, in this sense, is just a source of high-energy cosmic rays, and so everything you had in the last couple of lectures is applicable to this signature. Then the key question is: how would we distinguish astrophysical cosmic rays from dark matter annihilation? These are charged particles; they don't travel in straight lines. That means it's very hard to tell whether a particular cosmic-ray proton came from some supernova along the galactic disk or from a dark matter annihilation at the galactic center. The handle you do have here is the local spectrum: how many cosmic rays are there at each energy? But for that, it's very important to understand the backgrounds. On the other hand, if you're dealing with neutral particles, then they do travel in straight lines in our galaxy, so we can recover at least 2D spatial information on sources. We can say: all right, this photon appears to be coming from at least somewhere along our line of sight to the galactic center. In some cases, you may be able to get 3D information. So here we have more handles, and I'll show you some examples of this later on. For the second category, "indirect" indirect detection, what we need to understand is: if you've had these processes, that means there's a ubiquitous source of high-energy particles over the whole age of the universe. So what can that do? Well, you heard about nucleosynthesis this morning.
There's been work showing that if you have dark matter annihilation or decay in the very early universe, you're injecting potentially very high-energy particles into that nucleosynthesis process, and that can change the abundances of the light elements. You can distort the energy spectrum of the cosmic microwave background, that nearly perfect black body we see at 2.7 Kelvin today: dark matter annihilation or decay could produce non-thermal photons, and those non-thermal photons would show up as part of that background. I mentioned earlier that you can have modifications to stellar structure and evolution; there are some fun papers from last year arguing that you can fix some problems with solar physics if you have dark matter interacting in the sun. And I'll talk later about what it can do in terms of ionizing and heating our intergalactic medium. So that's the laundry list of everything you can possibly do with indirect detection. It's a long list, and I'm not going to cover it all. What I am going to try to do in this talk is give you a couple of different case studies, to show you how these searches generally work and the kinds of techniques that need to be used. In particular, I want to focus on places where there may be something interesting to see right at the moment. So my first case study, or set of case studies, is a "direct" indirect search, where we're looking at photons coming from various regions of the sky. I want to talk about a gamma-ray excess in the galactic center that's gotten a lot of attention as a possible signal of dark matter annihilation over the last few years. Although, spoiler: I'll give you the conclusion at the beginning. I no longer think that excess is likely to be dark matter, and I'll explain why that is the case.
But it tells you some pretty interesting things about what backgrounds you have to think about. I also want to talk a little bit about gamma-ray line searches, both at high energies and at the 3.5 keV X-ray line, which has also gotten a lot of attention. Along the way I'll show you where the best current bounds on weak-scale dark matter come from: dwarf galaxies. Then I want to talk a bit about the other category of search, an "indirect" indirect search, which allows you to use Planck and the CMB experiments to put constraints on early dark matter annihilation. And I'll talk a little bit about the positron excess observed by PAMELA and AMS, et cetera. I'm not going to talk about the IceCube neutrinos; we already talked about those a little bit, but they're very interesting. There's a long list of other searches which I am also not going to talk about, just because I don't have the time, and each of them could be a colloquium in itself. But I can hopefully give you a starting point and get you interested in this. Okay, so let me begin by talking about gamma rays. In particular, I'm going to start with the Fermi gamma-ray space telescope, which is my favorite instrument, just in terms of the number of papers written on the data. So Fermi is a gamma-ray telescope. It studies the energy range between about 30 MeV and 100 GeV. It's been up in the sky since about 2008. It scans the full sky every three hours, has an effective area of order a square meter, energy resolution of about five to ten percent, and angular resolution between about 0.1 and one degree. The great virtue of Fermi, from a theorist's perspective, is that they make all their data public, updating on a weekly basis. So it's possible for people outside the collaboration to go dig straight into the data. If you have an idea for a cool dark matter search to do in gamma rays, you can do it.
Now, this is a particularly interesting energy range for the WIMP picture we were talking about yesterday, as I'll show you on the next few slides. If WIMPs annihilate, you generically expect a gamma-ray spectrum at somewhat lower energies than their mass, so this is precisely the range in which you would look for such a signal. If we go to higher energies than Fermi, there are also excellent gamma-ray telescopes covering the energy range above 100 GeV, up as high as about 100 TeV. So for high-scale dark matter, we actually have pretty good options. [In response to an audience question:] That's not a stupid question, that's a great question. So, preempting my next point: yes, for high energies you use ground-based experiments. Fermi can see high-energy gamma rays; if a high-energy gamma ray hits Fermi, it will see it, and Fermi does see several multi-TeV gamma rays. The problem is that Fermi has a collecting area of about a square meter, whereas these ground-based telescopes can have an effective area of 10^5 square meters. So the issue is one of sensitivity. There are not very many 10 TeV gamma rays floating around out there; you want as large a collecting area as possible to catch them, and it's very hard to put a kilometer-squared telescope in space. So there are advantages to space, but you need to keep the telescopes small. Fermi is better at low energies just because the ground-based telescopes can't go down to low energies: the way they work is by seeing the Cherenkov light produced when a cosmic ray or a high-energy gamma ray hits the atmosphere, and once you get much below 100 GeV, you just can't really use that method. So that's why we needed Fermi to cover this sub-100 GeV energy range. But once you get to high enough energies to use the Cherenkov method, you can potentially get much higher statistics by sticking to the ground, because you can build bigger telescopes there. Does that answer the question?
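To make the collecting-area argument concrete, here is a back-of-the-envelope sketch. The flux value is a made-up illustrative number; the areas are the order-of-magnitude figures from the lecture:

```python
# Expected photon counts = flux x effective area x observing time.

def expected_counts(flux_per_m2_per_yr, area_m2, years):
    return flux_per_m2_per_yr * area_m2 * years

flux = 0.1  # hypothetical multi-TeV photon flux from a source, photons / m^2 / yr

n_space  = expected_counts(flux, area_m2=1.0, years=5)   # Fermi-like: ~1 m^2
n_ground = expected_counts(flux, area_m2=1e5, years=5)   # Cherenkov array: ~1e5 m^2

print(n_space)   # 0.5 -- essentially no photons in five years
print(n_ground)  # 50000.0 -- a statistically useful sample
```

The same flux that gives a fraction of a photon in space gives tens of thousands of photons on the ground, which is the whole case for ground-based instruments at high energy.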
Okay, so yes, we have the ground-based atmospheric Cherenkov telescopes, whose great advantage over Fermi is their enormously larger collecting area. One disadvantage is that they can't go below about 100 GeV, but the other is that these telescopes, at least H.E.S.S. and VERITAS, have fairly small fields of view. They can only look at patches of the sky a few degrees across at a time, whereas Fermi looks at a third of the sky at any given time and sees the whole sky every two orbits, which is about every three hours. HAWC is another ground-based gamma-ray telescope that gets away from this problem. It scans two-thirds of the sky every 24 hours, so it's not quite the full-sky coverage of Fermi, but it's much better in that regard than the H.E.S.S. and VERITAS telescopes. At energies below about 10 TeV it's not quite as sensitive as those experiments, but the advantage is that once you've got five years of HAWC data, you have five years of HAWC data on essentially every point in the sky, whereas with H.E.S.S. or VERITAS you can only look at limited areas. But if you know where you want to look, then pointing at it with H.E.S.S. or VERITAS will get you the strongest bounds. Okay, so that's the landscape of high-energy gamma-ray searches; that's what you can do. So now the question is: where do you want to look with these experiments? For Fermi, that's a question of analysis: you have data from everywhere in the sky, so as soon as you know where to look, you can go look at it. For H.E.S.S. and VERITAS and related telescopes like MAGIC, it's a question of experiment: where do you point your telescope at any given time? There are two locations in the Milky Way that people often think of as particularly good targets for dark matter searches: the dwarf galaxies of the Milky Way, and the galactic center.
And essentially the trade-off between these two is the trade-off between high signal and low background. Dwarf galaxies: remember, we talked on the first day about dark matter structure formation, where small clumps of dark matter coalesce into larger halos. Those smaller clumps are still present in the larger halos. They're old, they're cold, their densities are high, and their velocity dispersions are low. And they are thought to be comprised almost entirely of dark matter: they have very high mass-to-light ratios, although they do have to have some stars, otherwise we wouldn't be able to identify them. So the great advantage of dwarf galaxies is that we expect almost no astrophysical background directly associated with them. But the disadvantage is that they're not very big. If the Milky Way does have a cusp, a Navarro-Frenk-White cusp, as predicted by N-body simulations (dark-matter-only N-body simulations), then the galactic center should be much brighter than any of these dwarf galaxies. So if there is a signal to see, you might expect to see it in the galactic center first. Often when people talk about indirect searches, they will talk about a J-factor. The J-factor just means the astrophysical factor that determines the size of your signal. For annihilation, for example, it's the integral of the density squared along the line of sight that you're looking along. Any signal that we see on Earth in photons or neutrinos is a projection along a given line of sight. So this is the definition of J: we're just integrating density squared. The argument of this density-squared factor is the distance from the galactic center, but recast in terms of the distance of the galactic center to the Earth, which is this symbol (strictly, the distance from the galactic center to the Sun, but close enough). And s here is the distance along the line of sight.
The angles l and b here, galactic longitude and latitude, describe the angle between the line of sight to the object and the line of sight to the galactic center. [In response to an audience question:] Right, we're talking about our galaxy here, so everything is at redshift zero to a good approximation, and you don't really need to worry about the different cosmological distance measures; for the purposes of our galaxy, at least, all distances are the same. If you're talking about more distant objects, then it's not quite as simple as what I'm about to show you. In our galaxy, you can argue that the signal you see is basically the spectrum produced by dark matter annihilation multiplied by this J-factor. Once you're looking at objects at cosmological redshift, the spectrum and the distance to the object are not independent of each other, because signals from more distant objects get redshifted on their way to you, so at that point you can't do this simple separation. To do it fully correctly, you have to take the spectrum from the distant objects, convolve it with the appropriate redshift, and work out how much flux you have coming from that object. So, within our galaxy at least, we can say: what we're going to see is the spectrum of particles produced per dark matter annihilation, multiplied by the number of annihilations along our line of sight. You might say: but there's also going to be a one-over-r-squared factor, because for any given annihilation you only see some small fraction of the signal, like the area of your telescope divided by 4 pi r squared, where r is the distance to the object. That gets cancelled out if we're looking at all sources within some solid angle and the density distribution of dark matter is fairly uniform within that solid angle.
So this J is basically just a measure of how many dark matter annihilations there are along our line of sight per unit time. For a region within a 10-degree by 10-degree box around the galactic center, as an example, this J-factor (don't worry about the exact number, I'm just going to use it for comparison) comes out to about 10^22 GeV^2 per cm^5. For the closest or biggest of the dwarf galaxies, on the other hand, J is estimated at about 10^19 to 10^20 GeV^2 per cm^5. So if you see a signal from any of the dwarf galaxies, you naively expect a signal about a hundred times bigger at the galactic center; it's just a question of whether you can see it over the backgrounds. Okay, and that's just summarizing what we said. If you're looking at the galactic center, on one hand you have high sensitivity: if there's a cusp in the dark matter density, then of all the places in our Milky Way, this is where you would expect to see a signal first. High statistics are great. If you do see a signal, you can study its properties in detail, and you may actually have a chance to disentangle it from the backgrounds. You're going to need that, because the backgrounds are also very high in the galactic center; it's one of the brightest places in gamma rays in our whole galaxy. I'll show you some gamma-ray maps later on so you can see that. This is where theoretical models can come in handy: we need to be able to use spectral information or spatial information to disentangle any dark matter signal from the background. In dwarf galaxies, everything is much simpler. The backgrounds should be relatively low, so if we saw a dark-matter-like signal there, it would be a lot more convincing, everything else being equal, than seeing it from the galactic center. Now, you might ask about this caveat I keep repeating: "if there's a cusp at the galactic center."
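To make the J-factor definition concrete, here is a minimal numerical sketch of the line-of-sight integral for annihilation, J(psi) = integral of rho^2(r(s, psi)) ds, for a standard NFW profile. The normalization rho_s (chosen to give roughly the commonly quoted local density of about 0.4 GeV/cm^3 at the Sun) and the 100 kpc integration cutoff are my illustrative choices, not numbers from the lecture:

```python
import numpy as np

KPC_IN_CM = 3.086e21   # centimeters per kiloparsec
R_SUN = 8.2            # kpc, Sun-to-galactic-center distance
R_S = 20.0             # kpc, Milky Way scale radius, as quoted in the lecture

def rho_nfw(r, rho_s=0.35):
    """NFW density in GeV/cm^3; r in kpc.  rho_s is an illustrative normalization."""
    x = r / R_S
    return rho_s / (x * (1.0 + x) ** 2)

def j_factor(psi, s_max=100.0, n=20000):
    """J along a line of sight at angle psi (radians) from the galactic
    center, in GeV^2/cm^5, via a simple trapezoid-rule integration."""
    s = np.linspace(1e-4, s_max, n)  # distance along the line of sight, kpc
    # Galactocentric radius at each point on the line of sight:
    r = np.sqrt(R_SUN**2 + s**2 - 2.0 * R_SUN * s * np.cos(psi))
    f = rho_nfw(r) ** 2
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s))  # in GeV^2/cm^6 * kpc
    return integral * KPC_IN_CM  # convert the ds measure from kpc to cm

# The density-squared weighting makes J rise steeply toward the galactic center:
print(j_factor(np.radians(30.0)))
print(j_factor(np.radians(1.0)))
```

For a cuspy profile like this, lines of sight a degree from the center give a J-factor orders of magnitude above those far from the center, which is the quantitative version of "look at the galactic center first."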
We heard last time about this cusp-versus-core problem: there's some observational evidence for cores, at least in small dwarf galaxies. And this is indeed an issue for the galactic center. The size of the signal you expect depends pretty strongly on how the dark matter density scales towards the center of the galaxy, and we don't know that a priori. For dwarf galaxies it's much less important, actually because of a deficiency of the experiments: none of the gamma-ray experiments I'm talking about have good enough resolution to resolve the inner regions of the dwarf galaxies, where cusp versus core matters. Instead, what they're looking at is just the total integrated luminosity from the dwarf, and that doesn't depend very strongly on what's going on at the very center. So any limits or any signal from the dwarfs are likely to be more robust, but that's not necessarily where you would see a signal first. Okay. So suppose we did want to go looking for a dark matter signal, or had seen a signal and wanted to understand what it should look like. What should we be looking for? One thing we can look for, with photons and neutrinos, is the spatial shape on the sky. We know the dark matter should not follow the galactic disc; we know from rotation curves that it occupies a large halo instead. One way this is often parameterized is the NFW profile, the Navarro-Frenk-White profile that I showed you yesterday, but with an additional degree of freedom: the slope at small radii. A profile like this approximately describes the rotation curves at large radii, where it goes like one over r cubed. At small radii, its slope is parameterized by this free parameter gamma. We know that the scale radius for the Milky Way, where the behavior changes from whatever it is within the scale radius to the one-over-r-cubed scaling, is about 20 kiloparsecs.
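The profile with this extra degree of freedom can be written down in a few lines. Here rho_s is an arbitrary normalization, not a fitted value:

```python
R_S = 20.0  # kpc, the Milky Way scale radius quoted in the lecture

def rho_gnfw(r, gamma=1.0, rho_s=1.0):
    """Generalized NFW profile: density goes like r^(-gamma) for r << R_S
    and like r^(-3) for r >> R_S.  r in kpc; rho_s is arbitrary here."""
    x = r / R_S
    return rho_s / (x**gamma * (1.0 + x) ** (3.0 - gamma))

# gamma = 1 recovers the standard NFW cusp; gamma = 0 gives a flat core.
# The choice matters enormously for the predicted inner-galaxy signal:
for gamma in (0.0, 1.0, 1.3):
    print(gamma, rho_gnfw(0.1, gamma=gamma))   # density at r = 0.1 kpc
```

Since the annihilation rate goes like density squared, the difference between gamma = 0 and gamma = 1.3 at small radii translates into many orders of magnitude in the expected galactic-center flux.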
So this is one way to parameterize it. What we really know about the dark matter density distribution is that it should be much more extended and much less disc-like than the baryonic distribution, at least in conventional cold dark matter models. So that's one potential tool we can use. Another is to look at the spectrum of the gamma rays or other photons being produced. The difficulty here is that while, given any particular dark matter model, I can absolutely predict what the signal should look like, over the whole class of possible dark matter models it varies quite widely. However, what we can say is that if this spectrum is coming from some fundamental process in the dark sector, then it shouldn't depend on the details of where you are. It should depend on the dark matter mass and on what the dark matter annihilates or decays into, and the dark matter mass typically sets a scale in the spectrum. So this plot shows, as an example, 30 GeV dark matter annihilating into various Standard Model final states (b quarks, charm quarks, strange quarks, light quarks, taus, or electron-positron pairs), and for each final state, the gamma-ray spectrum you would get. This generally gives you a bump-like spectrum, a very non-power-law-like spectrum, with a scale that's determined by the dark matter mass and the final annihilation channel. So we can look for that. We can try to pick this apart from astrophysical sources, which commonly look more like power laws. However, as you saw earlier, it is also possible for astrophysical sources to have features. But there is one particular kind of signature from dark matter annihilation that would be a smoking gun, one that is very hard to mimic with astrophysics: the spectral line.
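As a cartoon of the bump-versus-power-law distinction, here is a schematic comparison. The bump shape below is a made-up stand-in for a real annihilation spectrum, chosen only to illustrate a scale set by the dark matter mass, not a parton-shower calculation:

```python
import numpy as np

def power_law_background(E, norm=1.0, index=2.4):
    """Typical astrophysical background: a featureless power law in energy."""
    return norm * E ** (-index)

def toy_annihilation_spectrum(E, m_dm=30.0):
    """Schematic annihilation bump for m_dm = 30 GeV: illustrative shape only,
    with the key feature that the spectrum vanishes above E = m_dm."""
    x = np.asarray(E, dtype=float) / m_dm
    return np.where(x < 1.0, x ** (-1.5) * np.exp(-4.0 * x), 0.0)

E = np.array([1.0, 10.0, 29.0, 50.0])  # GeV
print(power_law_background(E))          # smooth, no preferred scale
print(toy_annihilation_spectrum(E))     # structured, and zero above 30 GeV
```

The power law has no preferred scale, while the toy annihilation spectrum carries one: the dark matter mass. That hard edge is the kind of feature a spectral fit can try to pull out of the background.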
That's the case where the annihilation or decay channel is simply: two dark matter particles annihilate to two photons, or a dark matter particle decays to two photons. In this case, the photons come out at the dark matter mass, or at half the dark matter mass for decay. Especially if we're talking about a weak-scale dark matter particle, there are very few astrophysical backgrounds for 100 GeV gamma-ray lines; you don't have any atomic lines up there. The difficulty with this signal is that we know dark matter is neutral, so its direct coupling to photons has to be pretty small. For those of you who've played around with Feynman diagrams a bit, these are just some of the diagrams that contribute to supersymmetric dark matter annihilating into photons; the process has this loop structure. The consequence is that usually, if there's an unsuppressed annihilation channel, the line signal is typically suppressed by three or more orders of magnitude relative to it. So again, it would be a fantastic signal, but in most models it's not what you expect to see first. Now, there are some exceptions to that general statement. For example (this is just a little side note, but it's something I've worked on a bit over the last couple of years): suppose you have heavy wino dark matter, where the wino is the superpartner of the W boson. In general, direct annihilation to photons is not allowed. So here chi-zero is the wino, and this is a diagram showing two winos coming together to produce two gamma-ray photons; this is disallowed, as I just said. What you typically expect instead is that two winos come together and produce W gauge bosons, which produce a continuum of gamma rays, which is harder to see. But it turns out that once these winos become heavy enough, their coupling to the W boson acts like a long-range force, somewhat like electromagnetism.
That long-range force induces an attractive potential between the heavy dark matter particles. That in turn can raise their annihilation cross-section by quite a lot, and in particular can raise their annihilation cross-section into photons. So there is some hope of seeing line signatures like this. Here is an example of constraints on such a line signature. The plot on the left shows a constraint from the H.E.S.S. collaboration: they looked at the galactic center and asked what limit they could set on the cross-section for dark matter particles annihilating into two photons, as a function of the dark matter mass. They assumed an NFW slope towards the center, that is, a dark matter density rising steeply towards the center. In the plot on the right, the blue line essentially reproduces those limits; it's a 95% confidence upper limit. The red line is the theoretical prediction for the cross-section for annihilation to gamma-gamma from heavy wino dark matter. Now, recall that yesterday we talked about the relic density for thermal WIMPs. We said that the relic-density cross-section should be a couple times 10^-26 centimeters cubed per second. That's the total cross-section, okay? That's everything: not just annihilation to gamma-gamma, but annihilation to W bosons, to Z bosons, to Z-gamma, and so on. In this particular case, this is not really wino dark matter by construction; it's a wino that exists, and we haven't forced it to be 100% of the dark matter, but its thermal abundance makes it 100% of the dark matter around this mass point. And you can see that there, the cross-section in this one channel alone is about 10^-25 centimeters cubed per second.
So the other lesson to be taken from this picture (and this is another point where the miracle breaks down a little bit) is that the annihilation cross-section in the present day is actually much higher than it was at freeze-out, because the long-range interaction that I mentioned, which is making this cross-section large, only makes it large when the dark matter particles are moving slowly. They are moving slowly in the present day, but not at freeze-out. If you're going to take away anything from these two slides: gamma-ray line searches are powerful; they're a very low background search. They are unfortunately not fantastic for all models of dark matter, because sometimes the cross-sections are predicted to be very small, but in some models they can be very significant. And lesson two: for thermal WIMPs, we have an estimate of what cross-section we should be looking for. That estimate is about two to three times 10 to the minus 26 centimeters cubed per second. However, there are model-dependent effects which can either increase or decrease the present-day cross-section away from that number. So now let me talk about an actual, possibly detected line at a very different scale; I'm going to move away from gamma rays briefly and come back to them shortly. In 2014, several groups found what appeared to be a spectral line at 3.5 keV in stacked galaxy clusters. So we're not in the gamma-ray regime anymore, and this is an energy regime where there are atomic lines floating around. The signal is shown in this upper panel. The red data points are what they have after they subtract off their background. The blue data points are what they have after they subtract off both their background and a model for this excess. So this bump here is the excess that we're seeing, and you can decide for yourself how convincing you find those data points.
But its formal significance is about four sigma. And since it was first discovered, there have been many follow-up studies by several groups that claim to see similar excesses from the galactic center and, in some cases, from a number of different galaxy clusters. There have been searches in Milky Way dwarfs and in stacked galaxies that have not found much sign of a signal, and some of those studies seem to rule out simple dark matter interpretations of this excess. This excess is also probably not an instrumental systematic, because in at least a couple of locations it's been seen using multiple X-ray telescopes. And if you want to read up more on the history of the searches for this excess, there is a list of references here, and I will be posting these slides tonight. So, okay, suppose you're a theorist, a model builder, or an experimentalist, and you're presented with a signal like this. What do you do? And I should mention that this is a case where they did have to take into account the point raised earlier: when they added up the signals from these stacked galaxy clusters, they had to redshift each of the signals accordingly and add them together. This is actually one of the things that makes it look like this signal is at least something physical, not an instrumental systematic, because when you redshift the results from all the galaxy clusters appropriately and stack them together, you get this four-sigma bump. If you don't redshift them appropriately and just stack them together without that correction, then you get nothing, whereas an instrumental systematic should know nothing about what redshifts the different galaxy clusters are at. So that's a very important cross-check that you can do for signals coming from distant clusters. Okay, so the first thing that you might say is: all right, 3.5 keV photons.
So this is coming from either the annihilation of 3.5 keV dark matter particles or the decay of a 7 keV dark matter particle. Now, as we talked about yesterday, those masses are rather close to the limits on warm dark matter, but they don't violate those limits. So that's interesting: such a particle could potentially do something interesting as warm dark matter. And in particular, the interpretation that was at least at first very appealing to many people was that this is a decaying 7 keV sterile neutrino: a neutrino-like particle that does not interact through the ordinary weak interactions, and that the existing neutrinos don't oscillate into at any great rate. However, then you would say: all right, I have my model, I have decaying dark matter, I've measured it in a number of different galaxy clusters, and I can estimate how much total dark matter there is in those galaxy clusters. Then I can make a prediction for every other dark matter system in the universe: there should be a signal from that system proportional to its total dark matter content, right? That's all dark matter decay depends on. Unfortunately, if you do this and then go look at dwarf galaxies and the outskirts of galaxies, you do not find a signal at the level that you had predicted. The strongest of these analyses claims to rule out the nominal decay rate that you would need to fit the original data at 12 sigma, which is pretty high significance. There are also a couple of studies in another direction. Another thing that you might do in this situation is to say: well, okay, this signal should trace the overall dark matter density. Where dark matter is denser, it should be brighter; where there's less dark matter, it should be fainter. So you can do a morphological study on the photons that you already have, the photons that make up this line. This is a little bit difficult to do for an excess that's only four sigma in the first place.
And anything more subtle than just "is it there" is inherently going to be less significant than four sigma. But you can do this calculation, and there's a paper out there that looked at this for the Perseus cluster and the galactic center and found that neither of them looked very much like it was tracing a dark matter distribution. So then, in this situation, you may do what many groups in the literature did, and say: all right, if it's not dark matter decay, what could it be? Dark matter decay, as you see in this case, is extremely predictive. As soon as you see the dark matter signal in one object, you can measure the rate, and once you've got that rate, you can predict what the signal should be from every other dark matter region in the universe. The other routes all essentially function by adding some more model dependence. For example, you could have annihilating dark matter; then the signal depends on the density squared, not the total dark matter density, which is much less well constrained. You could have the dark matter being a heavier particle that has some excited state, split by about 3.5 keV from the ground state, and when it de-excites, it produces a photon, like the Lyman-alpha line in hydrogen. Another possibility: we talked about axions yesterday, and an axion in a magnetic field can convert into a photon. If some process was producing axion-like particles (not axions that are the dark matter themselves, but axion-like particles at around an energy of 3.5 keV), then those particles could convert into extra photons in the presence of magnetic fields. In that case, your signal behaves like the density of whatever is producing these particles, which could be the dark matter, multiplied by the magnetic field. So I think these are all still open possibilities for this line.
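These scenarios differ in how the predicted signal scales with the dark matter density, and that scaling is exactly what morphological studies test. Here is a small numerical sketch (all halo parameters are illustrative stand-ins, not fits to real data) comparing the line-of-sight integrals that control the two cases: the integral of rho for decay versus the integral of rho squared for annihilation, through an NFW profile:

```python
import numpy as np

def nfw_density(r, rho_s=1.0, r_s=20.0):
    """NFW profile in arbitrary units; rho_s and r_s are illustrative numbers."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def los_integral(psi_deg, power, d_obs=8.2, l_max=100.0, n=40000):
    """Integrate rho**power along a sight line at angle psi from the halo center.

    power=1 corresponds to decay, power=2 to annihilation.
    """
    psi = np.radians(psi_deg)
    l = np.linspace(1e-3, l_max, n)  # distance along the sight line (kpc)
    r = np.sqrt(d_obs**2 + l**2 - 2.0 * d_obs * l * np.cos(psi))
    return np.sum(nfw_density(r) ** power) * (l[1] - l[0])

# Compare how steeply each signal falls off away from the center:
decay_ratio = los_integral(1.0, power=1) / los_integral(20.0, power=1)
annih_ratio = los_integral(1.0, power=2) / los_integral(20.0, power=2)
print(decay_ratio, annih_ratio)
```

The annihilation (rho squared) contrast between 1 degree and 20 degrees comes out much larger than the decay (rho) contrast, which is the morphological handle being discussed.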
None of them have been excluded, but by their nature, they're somewhat less predictive than the simple decay explanation. So what are the backgrounds, and how might we go after this? There's an ongoing controversy about this signal, because it turns out that while a spectral line at 100 GeV is very hard to imitate, a spectral line at a few keV is much easier to mimic: there are known X-ray lines from potassium and other elements close in energy to 3.5 keV. Now, the discoverers of the 3.5 keV line have made the argument that it's very hard to make these atomic lines strong enough to fully explain the signal. But their strength does depend sensitively on the plasma temperature, and this is an ongoing argument in the literature. Now, this is a point of sadness. The hope was that we would know the truth about this line this year, because the Astro-H, or Hitomi, experiment launched earlier this year and would have been able to do a fantastic measurement of the width of this line. In fact, it would have been so good that Hitomi should not just have been able to tell whether the line was coincident with any of the known atomic lines; it should also have been able to measure the broadening of the line, the Doppler shift from the velocity of the dark matter particles, if it was really a dark matter decay signal. Unfortunately, to many people's great disappointment, Astro-H was lost shortly after launch. They did take some initial data on the Perseus cluster, and so we may still find out something about this line just from the preliminary Astro-H data, but we will not have what was hoped for. There are some other experiments which may be able to provide tests of this. Essentially, what you need to test this line is either high statistics or very good energy resolution, so that you can distinguish it from the backgrounds.
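To get a feel for why energy resolution matters here, a back-of-envelope sketch of the Doppler widths involved; the velocity dispersions below are assumed, illustrative numbers, not measurements:

```python
# Back-of-envelope Doppler broadening of a 3.5 keV line: Delta E ~ E * sigma_v / c.
C_KM_S = 2.998e5       # speed of light in km/s
E_LINE_EV = 3500.0     # line energy in eV

def doppler_width_ev(sigma_v_km_s):
    """Approximate 1-sigma line broadening in eV for a given velocity dispersion."""
    return E_LINE_EV * sigma_v_km_s / C_KM_S

# Dark matter velocity dispersion in a Perseus-like cluster (assumed ~1000 km/s):
print(doppler_width_ev(1000.0))  # ~12 eV wide if the line traces the dark matter
# Gas motions sourcing ordinary atomic lines (assumed ~200 km/s):
print(doppler_width_ev(200.0))   # ~2 eV wide
```

An eV-class microcalorimeter of the kind Hitomi carried could in principle separate widths like these, while a CCD with resolution of order 100 eV cannot, which is why the loss of Astro-H hurt so much.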
Okay, I'm about to stop talking about lines and go back to talking about gamma rays, but before I do, does anyone have questions about this excess, about any of the general statements I've made about indirect detection, about anything else related? Is that a question? Great, yeah. So the question was: the small amount of data from Hitomi that was taken after it launched and before it failed, has it been published anywhere? They showed some very preliminary data at a meeting in the US a couple of months ago; this whole saga was only a couple of months ago, so they haven't really had time to process the data yet. So the bulk of the data is not public and has not been talked about. I would expect to start seeing it at conferences this summer or later this year, so it's something to keep an eye out for if you are traveling to conferences. They did spend a fair bit of that initial time focusing on Perseus, which is where the line signal appears to be brightest. It actually appears to be so bright that it's hard to explain; this is one of the problems with the dark matter decay interpretation, that it appears to be about five times brighter in Perseus than you would have expected based on everything else. So if it's really there, and it really is bright in Perseus, and that wasn't just a statistical fluctuation in the data sets that we have, then they might be able to say something. In particular, if the line is sitting right on top of one of the known atomic lines, then once you have better energy resolution, I think they might be able to say something about that. If it really is a dark matter signal, it's probably going to be hard to get an extremely convincing detection. So I think the prospects are better for them being able to rule it out than to rule it in, because the burden of evidence for the latter is much higher.
Yeah, okay, so the question is: can you distinguish between decay and annihilation explanations for this line? In principle, yes; in fact, we may already be distinguishing between decay and annihilation for this line. The important difference between decay and annihilation here is the density dependence. Does the signal look like it scales like density squared, or does it look like it scales just like density? Now, it's hard to answer the question "does it scale exactly like density squared?" because we don't know the small-scale dark matter density distribution in very much detail. Asking "does it just scale like overall density?" is an easier question to answer, because overall density within some region is also known as total mass within that region, which is also known as what gravity cares about. So from orbits, we can usually estimate pretty well the total dark matter enclosed in a region, if it's large enough. So yeah, it's much easier to say it doesn't look like decay than to say it definitely looks like annihilation. And if you take seriously these constraints from dwarfs and clusters, we already have some evidence that it doesn't look like decay. So maybe it looks like annihilation, maybe it looks like something else. Any other questions? Yeah, I see one there. For the wino plot, this one? Yes. Right, yes. So the question is: in your slides yesterday, you said the generic behavior is that the cross-section should scale as one over the mass squared. This cross-section does not appear to be scaling as one over the mass squared at all; it has this funny resonance structure instead. Yeah: so if I were to plot the cross-section at high velocities, while the particle was still moving pretty rapidly, then it would decline with mass squared. At least the tree-level cross-section would.
But what's going on here, and I went through this quickly, is that at low velocities, that tree-level piece is not the dominant contribution to this cross-section. What's going on is that there's a long-range interaction mediated by the W bosons, and that is increasing the cross-section by a large factor, a factor that depends on the mass of the particle. If you like, I can talk you through the full parametrics of the Sommerfeld enhancement later on. But the lesson to take from this is that once you have dark matter coupled to a force carrier much lighter than itself, it's just like looking at the annihilation of an electron and a positron: it would be a terrible approximation to assume that there was no electromagnetic interaction between them, right? If you just ignored the Coulomb interaction completely, so they weren't attracted to each other at all, you would get a really wrong answer. Just writing down the tree-level cross-section here is completely analogous to ignoring the attractive force between the positron and the electron. Once you take this into account, your answer is much larger, and it has quite a different behavior, as you can see. All right, so that's the easy side of things. This is the one line signal that we have at the moment, and the line searches that we have at high masses. There are also line searches from Fermi at lower energies; they're just not very constraining on most dark matter models because, as we mentioned earlier, the predicted line signals tend to be several orders of magnitude lower than the continuum signals. By continuum signals, I mean: dark matter annihilates to standard model particles, those particles decay, and somewhere in their decays they produce gamma rays. But those secondary gamma rays do not have to be at the dark matter mass or at half the dark matter mass.
They have a broad spectrum, as we saw earlier. Okay, but if these continuum gamma rays are four orders of magnitude brighter than the lines, then we need to figure out some way to go after them if we're going to do maximally sensitive searches. So the question here is: what are the backgrounds? Especially if we're looking at somewhere like the galactic center. If we're looking at continuum gamma rays from the galactic center, we expect it to be perhaps the brightest signal we'll ever see, but the backgrounds are large. I can skip a lot of this because it was just covered in the cosmic-ray lecture, but the dominant backgrounds to high-energy gamma rays come exactly from what was discussed in Piero's lecture: cosmic-ray protons striking the gas, producing neutral pions which decay into gamma rays, and cosmic-ray electrons scattering on the gas through bremsstrahlung, or upscattering starlight to gamma-ray energies. There are also compact sources, supernova remnants and pulsars, which can produce gamma rays in this energy range as well. Now, as a general statement, you can say that these backgrounds should roughly trace the distributions of gas, of starlight, of supernovae and so on, and these are all more common in the disc of the Milky Way. So as a first approximation, we expect this background to look like the disc of the Milky Way. And that's what you saw in the Fermi data that Piero showed: the galactic disc leapt out very brightly. The difficulty in modeling these beyond that is that while we understand all these physical processes pretty well, we don't know the full distribution of gas and starlight and cosmic rays, and the modeling of these things, particularly of the cosmic-ray propagation, is quite challenging. I'm just going to skip over that; this is just making the same point again. So this is an all-sky map of the gas in the Milky Way, integrated along our line of sight.
So this is the galactic plane along here, and the galactic center in here. You can see the galactic center is actually not hugely brighter than other regions along the plane, but it's not dim either. This is a sphere that's been stretched out onto this rectangle. This is the gas distribution. The gamma rays look like this: they look very much like the gas distribution, with the same high brightness along the galactic plane. So we can build a model for these background gamma rays. To do this, you need to incorporate maps of the gas, models for the cosmic rays, as Piero talked about earlier, and the radiation fields. There are some public models for this gamma-ray distribution that have been made available by the Fermi collaboration. What I'm about to show you actually uses a pretty early version of those models, because the later versions are very much designed for looking for point sources rather than for diffuse excesses. So, just to give you a hint of how relatively simple models can subtract off these gamma rays: this is all the Fermi data with point sources subtracted from it. Black here means more gamma rays; white means fewer gamma rays. This is the galactic plane along here, and the galactic center in here. We can take a model of the background gamma rays built by the Fermi collaboration, based on estimates of the cosmic-ray populations, and subtract it; the model looks something like this. This is the model, this is the data. You can see they look pretty similar, and to a first approximation the reason is that both of them look like the gas distribution, because most of the gamma rays come from cosmic rays hitting the gas.
If we subtract one from the other, then we are left with something like this residual map, and as Piero said earlier, over most of the sky, and especially at high latitudes, this really does quite a good job of subtracting off the known gamma rays. There are a couple of exceptions. One, which I'm not going to talk about in depth here: you may see these figure-eight structures in the gamma-ray map. These are called the Fermi bubbles. My collaborators and I discovered them in 2010. It's still not fully understood what they are; they don't have anything to do with dark matter, but they're thought to be the result of some outflow from the galactic center. Even once you move away from the Fermi bubbles, you'll see residuals right around this galactic center region and along the galactic plane. Bright white here means the model is over-predicting the data; black means the data is higher than the model. You can see the model isn't doing a fantastic job there. Nonetheless, we can ask: if we take this model, what's left in the data that isn't in the model? One approach that's commonly used in this sort of study is called template fitting, which is very simple: it just means that we're going to model the sky as a linear combination of different spatial templates. In this case, for example, if I wanted to look for a dark matter signal around the galactic center, I might say: okay, I have my model for the diffuse background, which is this top figure; I have a very oversimplified model for the Fermi bubbles, which is literally just produced by drawing around the edges of the bubbles and filling them in; and if I had reason to think that there was a dark matter signal at the galactic center, I could add in a template like this, which is an NFW profile squared, integrated along the line of sight and centered on the galactic center. Then I just perform a fit.
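A minimal numerical version of such a template fit, on mock data: all three "maps" below are synthetic stand-ins, and a real analysis uses the actual sky maps and a Poisson likelihood rather than plain least squares, but the structure of the fit is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 5000  # pixels in our mock sky region

# Mock spatial templates (stand-ins for the real maps):
diffuse = rng.random(npix) + 0.5           # gas-correlated diffuse emission
bubbles = (rng.random(npix) > 0.8) * 1.0   # crude filled-outline bubble mask
nfw_sq = rng.random(npix) ** 3             # centrally peaked stand-in template

# Fake data: a known mixture of the templates plus Poisson noise.
true_coeffs = np.array([10.0, 3.0, 1.5])
expected = true_coeffs[0] * diffuse + true_coeffs[1] * bubbles + true_coeffs[2] * nfw_sq
counts = rng.poisson(expected).astype(float)

# Least-squares template fit; in the real analysis this is repeated
# independently in each energy bin to extract a spectrum per template.
A = np.column_stack([diffuse, bubbles, nfw_sq])
coeffs, *_ = np.linalg.lstsq(A, counts, rcond=None)
print(coeffs)  # recovers values close to [10, 3, 1.5]
```

Doing this bin by bin in energy is what produces the per-template spectra described next.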
I say: all right, let's take this data, model it as alpha times this map plus beta times this map plus gamma times this map, and fit for those coefficients independently at each energy. If I do this, then what I find for the coefficients of each of these templates is something like this. This is an old figure; don't take the details too seriously. But for the emission correlated with this diffuse background model map, we find a spectrum that looks like this. This bump here is the pion bump that Piero talked about, extracted directly from the data just by asking which of the gamma rays are correlated with the gas, and then it falls off like a power law. This blue spectrum is the spectrum of the Fermi bubbles; you can see it's relatively brighter at high energies. And this dark-matter-like template picks up this bump, peaked around a few GeV. That's actually an indication of another feature in this map, which was first pointed out in 2009: it appears that in the galactic center, but also extending out to about 10 degrees away from the galactic center, there is excess emission over these diffuse models in the range of a few GeV. This is one of the papers that my collaborators and I wrote in 2014. This is power per logarithmic energy interval on the y-axis; energy is on the x-axis. So this is a gamma-ray spectrum again; we're looking at the total photon flux. And you see that we appear to have this huge GeV bump. This is not a spectral line; a spectral line would just look like a sharp peak in the middle of this bump, since Fermi's energy resolution is five to ten percent, much better than this width. So this is a broad spectrum, from whatever source. Now, you might reasonably ask: hang on, you just told us that that diffuse model doesn't work, that there are problems all along the galactic plane.
So why should I believe that this is anything more than something that you're not modeling properly in the diffuse emission? It turns out you can do some tests of this. These authors, in 2014, did an analysis where they looked at other regions along the galactic plane to get an estimate of how large the errors generally are due to the diffuse emission modeling, and they used that to put a systematic error on the spectrum of this excess. Those systematic error bands are the yellow bars on this plot; the black points carry the statistical error bars, and these up here are also just statistical error bars. So, I mean, the systematic error bands are large compared to the statistical ones, but there's still apparently this hint of an excess around a few GeV. You can also break up the region around the galactic center into many independent regions and ask: what's the spectrum of the excess in each of these independent regions? Does it appear there consistently? This is like the morphology study that I mentioned doing for the 3.5 keV excess: does this look like annihilation, or decay, or like something else? What you see in this case is that this bump at a few GeV appears pretty consistently in each of these regions independently, out to about 10 degrees from the galactic center, and there may be hints even further out. The spectrum in these outer regions also agrees very well with the spectrum that you see right at the galactic center. That's interesting, because that's not what you would have expected if you had just gotten your model of cosmic rays slightly wrong in some complicated way. What I showed you before was all done by people outside the Fermi collaboration; the Fermi collaboration also has an analysis of this, and they found very similar results to everybody else.
So there appears to be this robust statement that in the Fermi gamma-ray maps, there is an excess of GeV-scale gamma rays around the galactic center that isn't in any of the diffuse models. Okay, now for the moment I just want to indulge in... yeah, sorry, question. The formal statistical significance of this excess? Yeah. Okay, so I'll give you the number, and then I'll tell you why you should take it with a grain of salt. The formal statistical significance of this excess is about 40 sigma. There are something like 30,000 photons attributed to this excess, or more if you take a larger sample. So this is not just a statistical fluctuation on the diffuse model; that you can say with absolute certainty. 40 sigma means it's not a statistical fluctuation. It doesn't mean that it's dark matter, for example; it doesn't tell you anything about interpretation. It just tells you that it's not a fluctuation on the known background. But yeah, the formal statistical significance is extremely large, as you can infer just from looking at these data points: if you take these statistical error bars seriously, each of these points is between five and ten sigma away from zero. Sorry, the question is... yes. So the question is: you see other features along the plane that are not included in the diffuse model; what's the difference between those and this? Is that correct? So the difference between those and this excess (and this is why those systematic error bars don't intersect zero) is, in part, that it has a very different spectrum. For most of the other excesses that you see along the plane, the spectrum looks very much like the spectrum that we saw earlier from protons hitting the gas and producing gamma rays. So they can be pretty easily explained by just saying: okay, maybe there's a clump of gas here that I didn't know about.
Okay, they can just be explained by that. This one can't, because its morphology is not characteristic of any of the gas in the region that we know about. And even if we did say, well, maybe there's a big spherical clump of gas in this region that we didn't know about (which isn't impossible, it could happen), the spectrum of these gamma rays is not like the spectrum of the ambient protons hitting that gas, okay? So that's basically why these systematic error bars don't intersect zero: you look everywhere else in the sky, and you do see things that are not in the diffuse model, but in general you can explain them pretty easily by saying we've slightly mismodeled the gas in this direction. It's much harder to do that with this excess. There's another reason too, but I'll get to that in a second. Okay, so I just want to indulge in speculation for a moment, because it's a nice example of how you would do an analysis of a dark matter signal and ask: what if this was dark matter? What would it tell us? If it's dark matter, then you can immediately learn a fair bit about the dark matter particle. From that spectrum, you can say that the best fits are for fairly light dark matter, between about 10 and 100 GeV, annihilating into fairly light particles. The overall best fit is around 35 to 50 GeV dark matter annihilating into b quarks. You also get reasonably good fits for even lighter dark matter annihilating into taus. Heavier dark matter can provide an okay fit, but it's not as good. So these plots are showing cross-section versus dark matter mass, and the different colored contours are the different annihilation channels preferred by the excess. Now, if we look at the cross-section: the thermal relic cross-section, two or three times 10 to the minus 26, is right about here.
So this best-fit region for b quarks also happens to require almost exactly the same cross-section that you would need to get the right relic density. As we've said, that's not compulsory (the present-day annihilation cross-section and the cross-section at freeze-out don't have to be the same), but it's nice to see. Okay, so this is the first thing you do: you look at various final states, you look at various masses, you ask what this tells us about the dark matter. Then you go on and say: well, okay, but now we want 40 GeV dark matter. Why haven't we seen it at the LHC? Why haven't we seen it in direct detection experiments, which I'll talk about next time? Direct detection is extremely sensitive in this mass range; this is its best region. Why haven't we seen it already? So if this was dark matter, this would give you much more information about it. It tells you the direct detection rate has to be suppressed, and that gives you information about its interactions with the standard model. There are a few different possibilities, shown here. If you look at bounds from colliders, it might tell you that the dark matter interacts through some very light particle, which tends to reduce collider sensitivity relative to indirect detection. One set of explanations that does a pretty good job of explaining this, and generally makes indirect detection better relative to direct and collider constraints, is if the dark matter interacts with an intermediate particle, and then that intermediate particle interacts with the standard model. That extra step suppresses the rates at colliders and at direct detection, but leaves the indirect detection signal untouched, or can even enhance it. So these are some examples of models that people have written down to fit the excess. One is a supersymmetry-style model where you have weak-scale dark matter.
It couples to some new light pseudoscalar, which in turn couples to b quarks. Another class of models is the one I mentioned: the dark matter talks to some mediator particle, so the dark matter annihilation is not to the standard model but to this mediator, and the mediator subsequently decays into standard model particles. So if this signal were dark matter, it would push us in these kinds of directions. It would give us an enormous amount of information about the dark matter mass and about its interactions, just from the fact that we haven't already seen it in a number of other channels. Of course, the question is: is this actually dark matter? What are the possible backgrounds? I've argued to you that it doesn't look like just a clump of gas in this region. But here's the spectrum that I showed before, now with a couple of extra lines on the plot. This dashed line is the spectrum of gamma-ray pulsars observed by Fermi. You will note that it is quite close to those data points, except maybe at low energies. So maybe what we're looking at here is some population of pulsars. There's no particular reason to expect pulsars to be distributed symmetrically about the galactic center, looking like an NFW-squared profile, but that just means it's unexpected; it doesn't mean it's impossible. People in this room have also worked on the other possibility: maybe what we're seeing is that the cosmic-ray population in this region is totally different to what we expected. Maybe there's some outflow of cosmic rays from the galactic center. So, again, it could be from protons striking the gas, or from electrons upscattering photons by inverse-Compton scattering. Here are some references on this to read up on if you're interested.
These explanations have a bit of trouble reproducing the excess, mostly because it looks so very symmetric about the galactic center and its spectrum doesn't seem to change much as you move away from the center. But let's go back to the question of pulsars. I'll tell you about a new technique; this is another example of how clever data analysis in indirect detection can get you a long way. In particular, you can use information on photon statistics. You can use this kind of study also when looking at isotropic gamma-ray backgrounds, or isotropic backgrounds at other frequencies, from the rest of the galaxy. The example I'm going to show you is related to this specific gamma-ray excess. So, what we want to distinguish between is a smooth dark matter distribution and a collection of pulsars. This is, again, a morphological study, because pulsars are gamma-ray point sources. What we might expect is that in the pulsar case we would have more individual hot spots, but also more very cold spots, relative to the dark matter case. Now, if the pulsars were bright enough that we could see them as individual sources, we would be done. We would never have been having this conversation about this as a potential dark matter signal. So if this is the explanation, it has to be a collection of many fairly faint pulsars. Fortunately, statistics can come to our aid when our eyes fail us. The important thing to note here is that unresolved sources are essentially a source of modified statistics. Here's the toy example. Suppose I tell you, all right, in this region of the sky, in this dark matter search, I'm going to predict that you should see 10 photons over a certain period of time. Okay? And then I ask you, all right, what's the probability that I see 12 photons in that period of time?
Now, absent any other information, knowing about the Poisson distribution, you might say, all right, I am going to use the Poisson distribution and ask what's the probability of seeing 12 when I expected 10. And if you do this, you will tell me that the answer is there's about a 10% chance that I will see 12 photons when I expected to see 10. If I ask you what's the chance I'll see 100 photons, you'll say it's never going to happen, because with Poisson statistics it really is never going to happen: the probability is about one in 10 to the 63. Likewise, the probability of seeing zero photons is small, but not minuscule. But now suppose I give you an extra piece of information. I tell you, oh, when I said I expected you to see 10 photons, what I meant was that's true on average. Really what I meant was that there's a 10% chance of seeing a bright source, which would produce 100 photons, and a 90% chance of seeing nothing. Now still, the expected number of photons is 10. The average number of photons from this is 10. But it modifies the probabilities. Now the probability of seeing zero photons is 90%. And if I ask you what's the probability of seeing 100 photons, you have to first ask what's the probability of seeing a source: that's about 10%. Then, what's the probability of seeing 100 photons when I expected exactly 100 photons? So the effect of having sources here, without knowing anything about exactly where the sources are, all I know is a probabilistic description of my chance of having a source in this pixel and how bright it should be, is that the presence of these sources really changes my probability of seeing a given number of photons, for the same average number of photons. It's distorting the relationship between the mean and the variance. Does that make sense? Any questions at this point? Sorry, can you just say that again? Is it like having different sources with different priors? Yeah, so, right.
Okay, so the question is, are you equating sources with essentially different priors in this statistical calculation? Not exactly. What are we going to do? So typically, as I said earlier, our model of the sky is a linear combination of these templates, and then I do a fit. To do that fit, what I need to do is write down the likelihood of the data given some model. When I write down that likelihood, the way I typically do it is I say, okay, for some set of model parameters, my model predicts, say, 20 counts in this pixel. What is the Poisson probability of seeing the observed number of counts when I expected 20? That's what we're modifying. We're changing that relationship from a Poisson distribution to something like this doubly Poisson distribution. We do need to put in information on how many sources you have at a given level of brightness. We include those just as additional parameters in the fit that describe the source distribution, okay? Does that make sense? So it's not, I mean, it's modifying the priors in the sense that we're adding additional parameters to describe this source population, and they all have priors on them. But really what we're doing is modifying the likelihood calculation. And this was not worked out for the first time by me and my collaborators, I should note. It was pioneered for gamma-ray data, isotropic gamma-ray data, in 2010 or '11 by Malyshev and Hogg, and it had been used in X-ray data before that. The point here is, yes, statistics are your friend. So what we can do then is again repeat this template fit.
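Just to make the toy example concrete, here is a quick numerical sketch (plain standard-library Python, not the actual analysis code) of how an unresolved source population distorts the photon-count probabilities while leaving the mean unchanged:

```python
import math

def poisson_pmf(k, mu):
    """Probability of observing k counts when mu counts are expected on average."""
    return math.exp(k * math.log(mu) - mu - math.lgamma(k + 1))

# Pure Poisson case: 10 photons expected.
p12  = poisson_pmf(12, 10)   # ~0.095: roughly the 10% chance quoted above
p100 = poisson_pmf(100, 10)  # ~1e-63: effectively impossible
p0   = poisson_pmf(0, 10)    # e^-10 ~ 4.5e-5: small but not minuscule

# Mixture case: a 10% chance of one bright source (100 expected photons),
# a 90% chance of nothing. The mean is still 0.1 * 100 = 10 photons.
def mixture_pmf(k):
    bright = 0.1 * poisson_pmf(k, 100)          # source present, Poisson around 100
    empty  = 0.9 * (1.0 if k == 0 else 0.0)     # no source, exactly zero photons
    return bright + empty

m0   = mixture_pmf(0)    # ~0.90: seeing nothing is now the most likely outcome
m100 = mixture_pmf(100)  # ~0.004: 10% chance of a source, times Poisson(100|100)
```

Same mean, wildly different variance: that distorted mean-variance relationship is exactly what the modified likelihood exploits.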
But now, instead of just saying the probability of the data given the model is determined by Poisson statistics, we allow some of the templates that we're adding into the linear combination to have different statistics: different likelihoods for how many photons you should actually see, given an overall expected number of photons. And then we can ask whether the fit prefers to have these point-source populations in it or not. So these are the point sources we're really interested in: point sources distributed like the dark-matter-like signal over here. We want to know, does the fit actually want there to be sources like this, or does it prefer to have only dark matter? But there are other point sources in the sky. There are point sources distributed along the disk, and there are extragalactic point sources, so we need to add those into the fit as well. So we perform this fit twice. Once in the case where we don't put in any point sources clustered around the galactic center, distributed like our possible dark matter signal. These plots are showing the posterior probability of associating a certain amount of the flux in some region with each of the templates. This inset is the case where we don't have any point sources clustered around the center, and this red histogram shows the flux associated with the dark matter template. So in this case, for this particular region, the flux that we're calling dark-matter-like is about 7 or 8% of the total. But now, if we allow the fit to have this template of point sources clustered close to the galactic center, and we ask whether that is preferred or not, well, this red histogram here is still the posterior probability distribution for the flux associated with the dark matter, and you will see that it is now peaked at zero.
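Schematically, the machinery behind this comparison looks something like the following. This is only an illustrative sketch with hypothetical function names (the real analyses marginalize over a full parameterized source-count distribution, typically with generating-function methods), but it shows how a point-source template changes the per-pixel likelihood away from Poisson:

```python
import math

def poisson_logpmf(k, mu):
    """Log of the Poisson probability of k counts given mean mu."""
    return k * math.log(mu) - mu - math.lgamma(k + 1)

def smooth_loglike(counts, templates, coeffs):
    """Ordinary Poisson template fit: the expected counts in each pixel
    are a linear combination of smooth templates."""
    total = 0.0
    for i, k in enumerate(counts):
        mu = sum(c * t[i] for c, t in zip(coeffs, templates))
        total += poisson_logpmf(k, mu)
    return total

def pointsource_loglike(counts, smooth_mu, p_src, src_flux):
    """Toy 'non-Poissonian' template: each pixel has probability p_src of
    hosting one faint source with mean brightness src_flux. Marginalizing
    over source / no-source in each pixel changes the counts distribution
    without changing the average flux on the sky."""
    total = 0.0
    for i, k in enumerate(counts):
        p_no  = (1 - p_src) * math.exp(poisson_logpmf(k, smooth_mu[i]))
        p_yes = p_src * math.exp(poisson_logpmf(k, smooth_mu[i] + src_flux))
        total += math.log(p_no + p_yes)
    return total

# A 'clumpy' toy sky: two empty pixels and one hot pixel, 20 photons total.
counts = [0, 0, 20]
ll_smooth = smooth_loglike(counts, [[1.0, 1.0, 1.0]], [20.0 / 3.0])
ll_clumpy = pointsource_loglike(counts, [0.1, 0.1, 0.1], 1.0 / 3.0, 20.0)
# ll_clumpy > ll_smooth: the fit prefers the point-source description.
```

In the real fits, `p_src` and `src_flux` are replaced by a parameterized source-count function (how many sources at each brightness) whose parameters are fit alongside the template coefficients.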
So what the fit is telling us is that when you allow these point sources into the fit, the preference is to have no smooth dark-matter-like contribution. Instead, that 7 or 8% of the flux that was previously getting associated with our dark-matter-like signal now gets associated with this population of point sources clustered around the galactic center that we inserted. Okay, so the bottom line of this is that if you trust photon statistics, this excess of gamma rays, you can't see it by eye, but at the statistical level it looks like it is more clumpy than smooth. And thus it looks more like we may have discovered a new population of pulsars than that we have discovered dark matter, which is sad, because if it was dark matter we would learn a huge amount from it. Questions about this? Yeah? Yeah, so the question was, could it be clumps of dark matter? Yes. So it's always important to remember what your analysis can tell you and what it can't. This is just distinguishing smooth emission, which could be coming from cosmic rays in the gas or could be coming from dark matter annihilation or decay, from emission that looks point-like to Fermi. Recall that Fermi's angular resolution is about 0.1 degrees. You don't need to be that small to look like a point to Fermi. So indeed, we could be seeing small clumps of dark matter, or, if you prefer astrophysical explanations, small clumps of dense gas. That said, in cold dark matter models, something that I didn't really mention earlier is that this signal rises pretty steeply towards the galactic center. If this was really dark matter, then the Milky Way doesn't have a core; it has quite a steep cusp. To get the number density of dark matter clumps to rise that steeply towards the center is quite difficult, because normally the small clumps get disrupted as they get close to the baryonic disc.
So in general, you would expect the number of clumps to either fall towards the center, or to rise towards the center more slowly than the overall dark matter density. But this has to look like an annihilation signal, so it has to rise like the dark matter density squared. So it seems hard to do this with dark matter clumps, but that doesn't mean it's impossible; it just seems hard. Okay, so I have another lecture tomorrow, and I realize it's 5 p.m. and everyone wants to leave. So I'll finish talking about the indirect searches, and then I will tell you the last bit, about the CMB, at the start of tomorrow's lecture. What I want to say last is this. We've talked a bit about line searches and why they're interesting. We've talked a bit about the 3.5 keV line and what it might tell us. We've talked about doing continuum searches in the galactic center, and I hope I've given you a demonstration both of what you might learn from seeing such a signal, but also that the backgrounds in the galactic center are pretty challenging, and often you need clever analysis techniques to distinguish signal from background. So let's go back to what I was talking about earlier, to the dwarf galaxies, where I said previously the backgrounds are fairly low. If you want to get away from the possibility of finding whole new populations of hitherto unsuspected pulsars around the galactic center, where do you look? These plots are showing studies of dwarf galaxies, using again data from the Fermi Gamma-ray Space Telescope. This black line shows their constraint, and this is for two channels: dark matter annihilating to b quarks, and dark matter annihilating to tau leptons. Now, the thermal relic cross-section is around here. So you see, everything above this black line is nominally ruled out.
So you see that if you take these limits seriously, this tells you that thermal relic dark matter is not lighter than about 100 GeV if it annihilates to b quarks. Actually, the dashed line here is showing you the thermal relic cross-section. Now, there are some uncertainties in these constraints, like the systematic uncertainties in the galactic center. Here there's systematic uncertainty just from not knowing how much dark matter there is in the dwarfs. But if you see, I've put an overlay on this plot, these green and purple lines: that's the estimated systematic uncertainty in these bounds due to the uncertainty on the dark matter content of these stacked dwarfs. So it's not a very large uncertainty. These little ellipses here show the best-fit regions for a dark matter explanation of the galactic center signal. And you can see that, with these uncertainties in the dwarfs, they're not totally ruled out, but they're getting pretty close. So despite the fact that the dwarfs' J-factor is significantly lower than the galactic center's, there are just fewer annihilations happening there, and they're further away, it's still true that because the dwarfs are so low-background, by looking at them you can set constraints that are very comparable to the kind of signals that you would see in the galactic center. And these are, I believe, currently the best direct constraints on the annihilation cross-section of weak-scale dark matter. Okay, so let's pause there for the day. At the start of next time I will give you the short case study, this was the long one, which is an example of how dark matter can affect the CMB, not through its gravitational effects. And then I will go on and talk about terrestrial searches for dark matter. Thanks very much. See you tomorrow.