Okay, so it's Friday afternoon of the second and last week of the school, so I'm glad you're still all with me. Last time we went through the standard case of supersymmetric dark matter, the case where the dark matter essentially freezes out, and after it freezes out not much else happens, except perhaps for the occasional interaction of the dark matter with a nucleus in your detector, or the occasional annihilation in the galaxy. Since I said we are going to move into the question of looking at theories of dark matter beyond the WIMP, I want to spend the first part of this last lecture looking at what it is that you can and cannot do with a dark matter model. The reality is that even though one might think that the dark matter sector is remarkably unconstrained and you can just do whatever you want, that's simply not the case. So I want to give a few examples of how that works. You'll recall that at the beginning of the first lecture I put up this picture of the universe from the time of its infancy, from the first measurement at BBN through the dark ages, large-scale structure, supernovae, and then the galactic rotation curves. And the thing that I wanted to emphasize is that each of the measurements at these different epochs tells us something different about the nature of the dark matter; they're very complementary probes that put different kinds of constraints on it. So let's go through each one of these epochs and talk about one thing that we can learn about the dark matter sector. This was actually mentioned in the question session this morning, when I said that one of the ways you can constrain gravitino dark matter, for example, is from the fact that you can't inject a lot of ionizing radiation late in the universe, in particular when the universe is more than a second old.
And the reason for that is that if you look at the helium abundance and the deuterium abundance, and you take the measured baryon-to-photon ratio, you take the measured abundances of these guys, you can see that they line up very well with what you get from the CMB. Now, the other thing I said is, these bands here are the prediction from BBN and the white areas are the observations. And you'll notice here, for the synthesized amount of lithium, that it's a little bit low in comparison with what you would predict from BBN. So one thing that people have said is, well, maybe we can actually try to destroy some of that lithium through a late-decaying particle. So one thing you could say is, well, maybe we just haven't counted up the amount of lithium in the universe very well. The other thing you can say is, well, maybe there's some kind of dynamics that could actually destroy this lithium. And in fact, people have done this calculation. So this is a very nice paper that plots the energy in dark matter, well, the energy times the number density over the entropy, as a function of the lifetime of the particle in seconds. So BBN, being the critical epoch, puts some constraint on this abundance versus this lifetime. And you can see that it has different constraints depending on which element it is. But you'll see also that these are really strong constraints. So you just cannot afford to inject much ionizing radiation at late times. So in general, whatever is happening in your dark matter sector in terms of dumping additional energy, in particular into the standard model sector, you have to be very careful what you do. On the other hand, in the dark matter sector, if you had some annihilation into dark states that didn't produce any ionizing radiation, in that case you're pretty much free, not to do anything you want, but you're at least less constrained than you are if you go to the standard model. Questions about this one?
So let's move on to the next epoch, the CMB epoch. So we said that at the CMB epoch, the heights and locations of these peaks actually give us an enormous amount of information about both the baryon and the dark matter density at the epoch of matter-radiation equality. So we're talking about a universe between 300,000 and 400,000 years old. And in particular, we said that the separation between these peaks tells you about the baryon-to-photon ratio, and the rate at which you damp these peaks tells you in particular about the baryon density. I'm oversimplifying here, but just to give you the overall picture. So what the shape of these peaks actually tells you is that the dark matter has to be decoupled from the baryons at the CMB epoch. So why might that be relevant for dark matter? Let me give you an example: dark matter coupling to the baryons through a millicharge. I'm just going to give the dark matter a really small coupling to the photon. Say, through a Stückelberg mechanism; I suppose it doesn't matter that much exactly how you do it. But let's suppose you have some mechanism for coupling the baryons to the dark matter. That would actually change the peak structure that you see at the CMB epoch. And the reason why this is a particularly powerful constraint is that if the mediating particle is very, very light in comparison to the momentum transfer, that scattering cross-section is just a Rutherford scattering cross-section. A Rutherford scattering cross-section is enhanced as one over the velocity to the fourth power. And we also know that the dark matter cools as the universe expands. So the longer you wait before structure forms, you get this enhancement that goes like one over the velocity to the fourth. So what can happen then is you can potentially get an enormous scattering cross-section. And that allows you to put a constraint on the effective charge of the dark matter.
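The velocity scaling in the last step can be made concrete with a short numerical sketch. This is purely illustrative: the `rutherford_sigma` function, its normalization, and the units are all assumptions, not the actual formula from the lecture.

```python
# Sketch (illustrative units): Rutherford-like scattering of millicharged
# dark matter off baryons has a transfer cross-section sigma ~ 1/v^4, so
# the effective cross-section grows rapidly as the universe cools.

def rutherford_sigma(v, sigma_ref=1.0, v_ref=1.0):
    """Transfer cross-section relative to a reference velocity (sigma ~ 1/v^4)."""
    return sigma_ref * (v_ref / v) ** 4

# After kinetic decoupling the DM velocity redshifts away, so waiting until
# the velocity has dropped by a factor of 10 boosts the cross-section by 10^4.
v_early, v_late = 1.0, 0.1          # arbitrary units; v drops by 10x
boost = rutherford_sigma(v_late) / rutherford_sigma(v_early)
print(boost)  # 10000.0: a factor-10 drop in velocity gives a 10^4 enhancement
```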
So here I'm actually parameterizing things in terms of the dark matter charge. You could just put in any sort of light force mediator that would couple the dark matter and the baryons. But what you learn from the fact that the dark matter cannot be coupled to the baryons at recombination, at the CMB epoch, is, well, depending on what the dark matter mass is, let's pick a one GeV dark matter particle, that puts a constraint on the dark matter charge, the coupling to the photon, which is on the order of 10 to the minus six. It's actually a very strong constraint. So that's the CMB epoch. Let's keep going. The CMB epoch is the period of recombination. So that's when the electron and the proton join together, and now you have a neutral hydrogen atom. And you can actually use the fact that, as far as we know, the universe stays neutral until it becomes quite old. When you start forming stars, then you can get ionizing radiation that actually reionizes the universe again. But we know that between the CMB epoch, at between 300,000 and 400,000 years old, and today, there was a long period in which the universe was just dark. Okay, and so you don't want to mess that up. You can't have dark matter come in and produce ionizing radiation that would change this. So you can convert that into a constraint on dark matter annihilating to any form of ionizing radiation. Dark matter producing ionizing radiation would actually lead to a suppression in the height of those peaks, a very typical type of suppression. And so as a result, you can put a constraint on the rate at which dark matter annihilates to anything that produces ionizing radiation, and you can translate that into an annihilation rate. So that's what's shown here. This factor f is an ionizing efficiency. It varies between about 0.3 and one, depending on what exactly the process is that you're annihilating into; the relevant thing to know is that it's order one.
This is the annihilation cross-section. Now, this plot is a little bit old, so what it has is what's ruled out by WMAP5 versus the Planck forecast. The true Planck constraint actually lies somewhere in between these two curves. And then, can someone remind me again what the thermal annihilation cross-section is? Three times 10 to the minus 26 centimeters cubed per second. I said how important this number is; I've asked you this twice now. Alright, so you pick up three times 10 to the minus 26 centimeters cubed per second and you follow it out. And what you end up finding is that the constraint on the dark matter mass, if you have a thermal dark matter particle, is that it has to be heavier than about 10 GeV. Now, incidentally, can anyone figure out why this curve has this scaling? What might be setting that? What? No, not the Hubble parameter, even simpler than that. The number density of the dark matter. You fix the total energy density in the dark matter, because that's what's observed. So as the mass of the dark matter goes down, the number density of the dark matter goes inversely up. Okay, and the annihilation rate just goes like the number density squared. So what happens then, as the dark matter mass gets lighter, is that the number density goes up, and that increases the rate. So that's the reason why, as you go to lower masses, you get a stronger constraint. It's really just the fact that your number density in dark matter goes up. And so as a result, this is actually a constraint for thermal dark matter with S-wave annihilation. Now, S-wave annihilation means that there's no dependence on the velocity. The reason why that's important here is that at the CMB epoch the dark matter is very cold in comparison to what it was when it set its relic abundance. But you can see that this actually constrains the dark matter to be heavier than about 10 GeV if it is a thermal relic with a velocity-independent annihilation cross-section.
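The mass scaling of this bound can be sketched in a couple of lines. Note the limit is assumed to take the form f⟨σv⟩/m < const; the numerical value of the constant below is an illustrative placeholder chosen to reproduce the ~10 GeV quoted in the lecture, not the published WMAP5 or Planck number.

```python
# Why the CMB bound on annihilating dark matter scales with mass: the energy
# injection rate goes like n^2 * <sigma v> ~ (rho/m)^2 * <sigma v>, so at
# fixed energy density rho the bound takes the form f * <sigma v> / m < const.

SIGMA_V_THERMAL = 3e-26   # cm^3/s, thermal relic annihilation cross-section
CMB_BOUND = 3e-27         # cm^3/s/GeV, assumed illustrative CMB limit on f*<sigma v>/m
f_ion = 1.0               # ionizing efficiency, order one per the lecture

# Minimum mass for an s-wave thermal relic to evade the bound:
m_min = f_ion * SIGMA_V_THERMAL / CMB_BOUND
print(m_min)  # 10.0 GeV, matching the ~10 GeV scale quoted in the lecture
```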
So that's a very interesting piece of information. Questions about this? Looks like it's Friday afternoon. Okay, let's go on to the next epoch: large-scale structure. So as many of you know, and as we've been talking about, it's been observed through gravitational lensing that dark matter halos in general are not completely spherical. When you just take gravity plus dark matter, what tends to happen is that you get a triaxial type of shape in the halo. So the halo doesn't look exactly spherical; it looks more like a squashed beach ball. Now, the reason why that's important for the properties of dark matter is that if you had strong dark matter self-interactions, what that would do would be to randomize the directions of the momenta. And if you randomize the directions of the momenta, that's going to tend to make the halos more spherical. And that would be in contradiction to what's observed in some of these triaxially shaped halos. So what does that look like? Well, I can pick up some object. So this constraint is taken from NGC 720. And I can say, well, if I don't want those dark matter momenta to be randomized by self-scattering, I'm going to require that there be fewer than one dark matter self-scattering over the age of that halo. That should conservatively allow me to state that the dark matter halo should be relatively undisturbed from what gravity by itself would do. So here again, we can take a massless force mediator. Now it's not coupling to photons, it's not coupling to baryons, it's a self-coupling. So imagine I look at self-scattering through a massless mediator; I want to find out what the constraint is on that self-coupling. That's what's plotted here on the vertical axis: the analog of alpha electromagnetic, but for the dark matter, plotted as a function of the dark matter mass. And you can see the effect of the fact that these halos are not exactly spherical.
Let's pick up again a one GeV dark matter particle: that puts a constraint on the self-coupling of the dark matter, the analog of alpha electromagnetic, that it be smaller than something like 10 to the minus seven. At least that's about what it looks like from this angle; it's about 10 to the minus seven. Okay. Any questions about that one? Yes, gravitational lensing. Yeah, so you just measure the major and minor axes of these things, and you get some shape ratio between those axes, and then you can compare them against simulations. And it's not a very big ratio; we're talking about 10 to 20%. But this has actually been observed in some of these halos through gravitational lensing, typically in clusters, because they're big enough objects that they lens well. Yes, what is the reason that ellipticity gives a stronger bound than the Bullet Cluster? So this has to do with the details of these different objects. In the Bullet Cluster, you're talking about two objects that are passing through each other, so you're making some assumption about the relative velocity of those two things, and about how long they stay overlapped with each other, which is a small fraction of the total lifetime of the objects. Whereas with this object, you're leaving it in equilibrium, and you're just allowing it to sit there for the entire lifetime of the object. Okay, so you're integrating over a longer time, in particular. That said, all of these constraints, you should put an order-of-magnitude error on these constraints, at least. So when you put these things into N-body simulations, you easily get scatter in the constraints that can be one to two orders of magnitude. So these should all be taken as tilde-level things. You shouldn't try to go three, four, five orders of magnitude beyond these.
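The "fewer than one self-scattering per halo age" criterion can be sketched numerically. All the inputs here, the density, the velocity, and the halo age, are assumed illustrative values, not the parameters of any particular object.

```python
# Turning "fewer than one self-scattering per particle over the halo age"
# into a bound on the cross-section: require n * sigma * v * t_halo < 1.

SEC_PER_GYR = 3.156e16

rho = 1.0                   # GeV/cm^3, assumed DM density in the region probed
m_dm = 1.0                  # GeV, dark matter mass (the 1 GeV example above)
v = 2e7                     # cm/s, ~200 km/s typical halo velocity (assumed)
t_halo = 10 * SEC_PER_GYR   # ~10 Gyr halo age (assumed)

n = rho / m_dm              # number density in cm^-3
# Fewer than one scattering per particle: sigma < 1 / (n * v * t_halo)
sigma_max = 1.0 / (n * v * t_halo)
print(sigma_max)            # ~1.6e-25 cm^2 allowed per-particle cross-section
```

One would then translate this maximum cross-section into a bound on the dark-sector alpha for a given mediator, which is what the plotted curve does.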
But an order of magnitude above these constraints, I would still call that fine. Part of the reason for that is that we're still trying to understand how exactly, in these objects, the baryons affect this picture. So in particular, in the centers of these objects, you're going to have some effect from the dark matter self-interactions and from the fact that the baryons actually change the density profile in these things. So it's not a hard boundary. This is still, unfortunately, an art, or fortunately, which means that there's more work to do. Yes, they do not all have this; this has not been observed in all of them. I think, if you take a sample of about six of these clusters of galaxies, about a third of them have this observed ellipticity. So no, it's not a feature of all of these clusters of galaxies, but then there's also the question of mergers, for example, and tidal disruption. So here again, that's a piece that's coming into play. This is one of the reasons why these are not very clean laboratories. Other questions? So moving onward now: we went from the BBN epoch at one second, to the CMB epoch at 300,000 to 400,000 years, to the era of structure formation, so we're talking about a billion years onward, to finally getting astrophysical objects. Now, the reason why astrophysical objects are important is that they're giant gravitational potential wells. So that means they're a potentially good way to collect the dark matter whizzing around these objects. In particular, one of the most effective ways to collect dark matter is to have it come in, scatter off one of the nucleons in one of these objects, and lose energy when it does that. And if it loses enough energy, it can become gravitationally trapped. So you can compute the rate at which that happens. And then, once it's trapped in the center of this thing, you just wait for it to thermalize with the nucleons in the object.
And as it does that, it just sort of slowly sinks to the center. So you can compute the thermalization time for this process to happen. And what you end up with is a little sphere of isothermal dark matter sitting in the center of one of these objects. The size of that isothermal sphere is going to depend on how deep that gravitational potential well is, and also on what the typical temperature, the typical kinetic energy, of the dark matter is. So you can compute the rate at which this happens. The rate at which you collect dark matter depends on the capture rate. And when you have an annihilation process, you can deplete the dark matter in the center of this object, at a rate proportional to the annihilation cross-section times the number density squared. Now, what you find is that for typical kinds of dark matter particles, so for example WIMPs, asymptotically what happens is that the capture rate and the annihilation rate come into equilibrium, so N-dot asymptotically goes to zero. So for example, if I look at a WIMP in the center of the sun, what you'll find is that for even modest scattering cross-sections, which is to say modest capture rates, I'll end up in equilibrium. And so you can solve this equation in the equilibrium limit. What happens is that it approaches some constant number, with some typical timescale which is set, as you might guess, by the product of the capture rate and the annihilation rate. So you can plug this in, this has been known for a long time, and compute what this capture rate is in order to get the annihilation rate. This is going to depend on whatever the density of the dark matter is, the velocity, because that sets the flux, the dark matter mass, because that sets the number density, and then some things like, for example, the scattering cross-section. So for example, in an object like the sun, you're dominated by hydrogen and helium.
And so you're actually dominated in this capture rate by the spin-dependent scattering cross-section off of hydrogen. So you can put that in and compute a capture rate. This is pretty well known. And in fact, this is what IceCube does when looking for dark matter: it looks for dark matter to annihilate to standard model particles and then to eventually produce neutrinos, which you could observe in an experiment like IceCube. You have a question? Because, right, the flux is going to be proportional to the number density. I pick up a dark matter particle, and I have some flux of dark matter particles coming at me; that flux goes like n. And then I have my dark matter particle sitting here, and the targets go like n and the flux goes like n, and so the rate ends up going like n squared. Any annihilation rate is going to go like n squared. Sorry, even in that case it would still go like n squared. So you always have to ask yourself: what is the question that I'm trying to answer? For dark matter self-interactions, if I'm just trying to calculate, I pick up a single dark matter particle and I want to know whether this dark matter particle has a single scattering in its lifetime, that goes like n, because I want every single dark matter particle to have that happen. But if I just want to know, I have a population of dark matter particles and I want to know statistically how that behaves, it's typically n squared. So you kind of have to ask: what question is it that I'm asking? So this is the story for dark matter which is annihilating. And in fact, as I said, IceCube is an experiment that looks for dark matter annihilating in the sun and eventually having a decay product which is a neutrino. On the other hand, if annihilation does not occur, the consequences can be, in some cases, dramatic.
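The capture-annihilation equilibration described above can be sketched with the standard closed-form solution of dN/dt = C − C_A N². The rates below are placeholder values chosen only to make the numbers easy, not solar ones.

```python
import math

# Capture vs. annihilation: dN/dt = C - C_A * N^2 has the closed-form
# solution N(t) = N_eq * tanh(t / tau), with equilibrium number
# N_eq = sqrt(C / C_A) and equilibration timescale tau = 1 / sqrt(C * C_A).

def n_captured(t, C, C_A):
    """Number of captured DM particles at time t (tanh equilibration)."""
    tau = 1.0 / math.sqrt(C * C_A)
    return math.sqrt(C / C_A) * math.tanh(t / tau)

C, C_A = 1e20, 1e-30        # capture rate [1/s], annihilation coefficient [1/s]
tau = 1.0 / math.sqrt(C * C_A)

# Well past the equilibration time, annihilation depletes exactly as fast
# as capture replenishes (N-dot goes to zero):
N_late = n_captured(100 * tau, C, C_A)
ann_rate = C_A * N_late ** 2
print(ann_rate / C)         # -> 1.0: annihilation tracks capture in equilibrium
```

This is why, in equilibrium, the observable annihilation signal is set entirely by the capture rate, which is the point the lecture makes about plugging in the capture rate to get the annihilation rate.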
And in fact, this was the reason, even back in the late 70s, that people first thought about the idea of a dark matter, anti-dark matter asymmetry. They were interested in modifying the properties of the sun; they wanted to solve the solar neutrino problem. Well, in a modern realization, the sun isn't as interesting anymore, because we've solved the solar neutrino problem. But there are other places where it becomes important. So what if annihilation does not occur? In that case, you just keep accumulating this stuff. So the number that you've accumulated in the center of this object just goes like the rate at which you're collecting it times the integrated time. And so you can compute the total number of dark matter particles that you have for a given dark matter density, interaction cross-section, and then however many years you decide you're going to wait. And you'll notice in particular that you get a lot of these guys if you wait a while, for 100 GeV dark matter. Now, if you convert GeV to kilograms, this doesn't end up being very much mass. But if this cross-section is large enough, it still may have an impact. So if you just do a conversion of 10 to the 57 GeV per solar mass, you can compute how small a fraction of the solar mass this is. The thing that's remarkable is what happens if the dark matter is not a fermion but a scalar. It doesn't have any Fermi degeneracy pressure to hold it up. And if it doesn't have any Fermi degeneracy pressure to hold it up, it can actually form a black hole. So what does that look like? Well, in order to form a black hole, you need two things. One, you need the thing to self-gravitate. If the gravity is still dominated by whatever object you happen to be in, then you're never going to form a black hole. Because, for example, if I'm in a neutron star, neutrons have Fermi degeneracy pressure, so they can actually hold the dark matter plus neutrons up.
On the other hand, once I get enough dark matter particles that they start to self-gravitate, they don't care about the neutrons; they're just going to do what they're going to do. And if they are scalar particles, then they can collapse. Then the second thing that you need is to actually exceed the Chandrasekhar number, and at that point you're going to form the black hole. For the Chandrasekhar number, you just balance the gravitational potential energy against the zero-point energy, which is set by whatever the thermal radius of the dark matter particles is. And you can compute the Chandrasekhar number, and this is what you get: 10 to the 34. You notice that this is a small number in comparison to how many you can accumulate, even for very modest scattering cross-sections off of baryons. And so the existence of neutron stars actually allows you to put a constraint. We see neutron stars typically as pulsars: these rapidly rotating neutron stars produce these jets of light. And that allows you to put a constraint on the scattering cross-section off of baryons. So here we've projected onto the plane of the nucleon scattering cross-section as a function of the dark matter mass. And just for illustration, this is what a typical dark matter experiment can do; this is the CDMS direct detection experiment. And this is what you can actually exclude for a scalar particle having a particle-antiparticle asymmetry. You can see that you can exclude a vast region of parameter space that would not be reachable with direct detection experiments. So this is just to give you an example of the kind of thing that you can do with astrophysical objects. There are many other kinds of things that you can do, and I don't think that the reach of what astrophysical observations can constrain has been fully explored so far. Let me give you another example. So I said that originally dark matter with a particle-antiparticle asymmetry was considered as a way of solving the solar neutrino problem.
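Going back to the collapse criterion for a moment: the "balance gravity against zero-point energy" estimate can be checked in one line. For a boson the critical number scales as (M_Planck/m)²; this scaling form, dropping order-one factors, is an assumption of the sketch.

```python
# Chandrasekhar-like critical number for a self-gravitating boson cloud:
# balancing gravitational energy against zero-point energy gives
# N_cha ~ (M_Planck / m_dm)^2 (order-one factors dropped).

M_PLANCK = 1.22e19   # GeV
m_dm = 100.0         # GeV, the 100 GeV scalar from the lecture
N_cha = (M_PLANCK / m_dm) ** 2
print(N_cha)         # ~1.5e34, the "10 to the 34" quoted in the lecture
```

Compare this against the accumulated number N = C·t from the previous paragraph: if capture can exceed ~10^34 particles over the star's lifetime, the cloud collapses, and that is what turns pulsar observations into a cross-section bound.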
And so that's to say that if you collect enough dark matter in the center of an object like the Sun, you actually disrupt its evolution. Now, one of the reasons we're not considering this anymore is because we solved the solar neutrino problem with neutrino oscillations. But you could have more modest changes in the center of the Sun. So this is the HR diagram; typically you'll follow an evolution that goes along the main sequence here. And the point they were making is that as I add asymmetric dark matter with a particular density, and you'll notice that this density is very high. So does anyone know what the local density of dark matter is? Someone? Anyone? 0.3 GeV per centimeter cubed. Exactly. So here they actually have a boost over this, so this is not something that would have a substantial impact on the Sun, locally. But if you had another star somewhere else in the galaxy that was sitting in a higher density of dark matter, and incidentally they also fixed the scattering cross-section, you can see that what happens is you change the evolution of the star itself. So rather than following this main sequence, what happens is that as you add more and more dark matter to the center of it, eventually you just terminate the evolution completely. And the reason for that is you just change the heat transport properties in the center of the star, and it just doesn't burn the way that it normally would. So now I'm actually going to move on to talking about dark matter model dynamics. In particular, as I said last time, we talked about the standard WIMP paradigm of supersymmetry. And the reason why I want to do this is not just because I want to build models, although you can certainly build models, and making sure that you have theoretical consistency in a model that makes sense is important, but more importantly so that we can design experiments to look for these different kinds of dark matter candidates.
So the thing that you worry about, the thing that keeps you up at night, is that we're restricting ourselves to too narrow a region of the parameter space, and that really what we ought to be doing is covering as many different kinds of dark matter candidates as we can. As we said before, our usual picture of dark matter, mostly motivated by supersymmetry, is that the dark matter is a single stable, weakly interacting, or as we said last time sub-weakly interacting, massive particle. And supersymmetry and axions do indeed fit that bill. But we've also emphasized that our thinking about the dark matter sector has really shifted, in the sense that now we're trying to cover a much broader class of models. So just think about that in terms of what we know about the standard model: it's very complicated. It's not minimal. As a matter of fact, you know, we tell ourselves that it's an elegant model, it's beautiful, but it's actually enormously complicated. The dynamics in it are enormously complicated: you have a sort of hodgepodge of symmetries that are doing different things that give rise to all the dynamics that we've studied for the last several hundred years. In fact, this has been the program of physics. And this sector over here, we haven't even actually started to probe yet, except through the gravitational interaction. So we don't even know, once we actually start to see what's over here, what exactly it is that we're going to find. And the reason why I think this is important is simply because we want to make sure that our lamppost is not too narrow, that we're not only looking at a particular class of theories that fits into a very narrow definition of how we think the universe ought to work. And I want to spend just a couple of minutes talking about anomalies, since these have come up in questions.
And I think we're at the point now where almost everyone would agree that it seems unlikely that any of these anomalies that have cropped up in the last five years are due to dark matter. However, I do think that there is one useful relic of this whole anomaly-chasing business, and that is that it has forced us to take much more seriously new models beyond the standard paradigm. Because the reality was that none of these anomalies that we saw were actually consistent with something that you would get from minimal supersymmetry. I mean, really, we're like zero for ten: none of them are consistent with what you would get from minimal supersymmetry. And so as a result, we've really stretched our thinking. So let me just go through, and I'm sure many of you have heard about these anomalies, what these anomalies were and why we had to put in new things. First of all, PAMELA was this experiment that was looking at cosmic-ray positrons and electrons. And what they saw in this experiment that was anomalous, which has now been confirmed by both Fermi and AMS, is this rise towards high energy in the ratio of the positron flux to the electron-plus-positron flux. Now, this rise is potentially due to dark matter annihilation; you can make a model that would fit it that way. In fact, the wino could do this. The wino can produce positrons: you can have dark matter annihilating to W plus W minus, that produces positrons, and in fact you can fit this anomaly with wino annihilation. The problem was that we also have a measurement of antiprotons. If I produce W's, I'm also going to produce hadronic activity, and we didn't see the corresponding hadronic activity. And so what we needed was dark matter that annihilated into something that looked leptonic and not hadronic. There were also these excesses in DAMA, CoGeNT, CRESST, CDMS.
At some point it seemed like everybody was joining the party. And the thing that was noticeable about this was, first of all, that these were really light masses, and secondly, that the scattering cross-sections were all the way up here, close to the weak scale: 10 to the minus 39 centimeters squared. You might ask, why not just make that a neutrino? We said 10 to the minus 39 centimeters squared is a neutrino cross-section, so why don't I just make this a neutrino? We talked about it yesterday. You're just pulling my leg. What? Yes, the invisible width of the Z. If I had an active neutrino, okay, the Z would have decayed into these things like crazy, and so that was already ruled out. So I already said this; this is the reason why it didn't work: you didn't have any hadronic activity. So this was the antiproton measurement, and if you had a wino dark matter candidate, it would have produced a big excess in the antiprotons. This is what I already said. And then I also already said that the Z-pole constraints meant that it couldn't just come from supersymmetry. And there was also just the fact that you had collider constraints more generally on the Higgs sector; it just didn't make it easy to fit all these anomalies. So what people did was to say, okay, I can just be slightly more creative. And you actually didn't even have to be that much more creative. All you had to do was to say, okay, let me take the dark matter particle, it's not supersymmetric, and I'm going to add a dark force. I'm not even making it complicated yet; I'm just adding a single dark force. The thing is massive, which means that I need to have a Higgs mechanism in the dark sector, but big deal. And in that case, what they said was, well, I can have dark matter annihilating to this dark force. And if I make it lighter than the pion, then it's just not going to go hadronically at all. All I'm going to get out of the deal is leptons.
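The "lighter than the pion" trick is pure kinematics, and can be sketched as a simple threshold check. The masses are standard values; the function name and the two-body-threshold simplification are ours.

```python
# Kinematics of the "lighter than the pion" trick: a dark force with mass
# below 2*m_pi cannot decay (or force annihilation products) into hadrons,
# so everything comes out leptonic. Masses in GeV.

M_E, M_MU, M_PI = 0.000511, 0.1057, 0.1396

def open_channels(m_mediator):
    """Which SM channels are kinematically open for a dark force of this mass."""
    channels = []
    if m_mediator > 2 * M_E:
        channels.append("e+e-")
    if m_mediator > 2 * M_MU:
        channels.append("mu+mu-")
    if m_mediator > 2 * M_PI:
        channels.append("hadrons")
    return channels

print(open_channels(0.2))   # ['e+e-']: purely leptonic, no antiproton excess
```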
And then they noticed, well, with this light force, there are other things I can do. Now that this light force is sitting there, I can scatter off of electrons and nucleons. And in fact, because this force is light, it gives an absolutely huge scattering cross-section, this light dark force. The reason why I'm emphasizing this is that you don't have to do very much to your dark matter model to already get a huge range of new phenomena. So what else does it do for you? Well, you recall that we said in asymmetric dark matter, I have this one part in 10 to the 10 asymmetry sitting on top of the thermal background. So in the standard model, I have e plus e minus annihilating to photons. Now, what this dark force does for you is the analog of what the photon did: I have dark matter and anti-dark matter annihilating to two dark forces. Two dark forces, maybe they're unstable and subsequently decay. This is an easy way to get rid of that thermal abundance. I think probably the single most important thing that came out of this is that it led us to think about how you would develop new experiments to look for these things. So all I've done is to take the dark matter sector and add some new light particles. The dark matter itself might be light, and it can annihilate to these dark forces. So what are the implications for direct detection experiments? For intensity experiments, okay, I'll tell you in a minute what an intensity experiment is. And then lastly, for dark matter self-scattering and halo shapes. Okay, so let's look at that. Adding one dark force mediates large scattering cross-sections; I already said that. So if I put in a mediating particle, in this particular formula its mass is a GeV, and the product of these couplings is 10 to the minus four. So the products of the couplings don't even have to be large, and still I get 10 to the minus 40 centimeters squared scattering cross-sections.
It leads to a way to set the relic abundance, if I'm interested in doing that, through dark matter annihilating to these dark photons. And it gives rise to self-scattering. So let me stop there. Are there questions about this setup? I don't think that anyone believes that this is actually the way the world works. One should think of this as being a toy model for understanding what the range of things is that is theoretically consistent and would give rise to new signals. Yes, so the way I think of this is, you know, you have some model parameter space. I can populate this model parameter space in a huge number of ways. What is the probability that I or anyone else sitting at their table actually picks out the one model that really does map onto the world? It's very low. So the philosophy that you should take when building models is not: is this interesting in its details because this is the model of the world? No, the question should be: what are the qualitatively new kinds of things that this model teaches me? And what can I learn about it in terms of ways to search for these things? So in general, you shouldn't look for ways to tweak models. In general, you should look for broad classes of things and then ask the question, what is it that this thing just taught me generally? That's what I mean when I say this thing has a very low probability of being the way the world works: just because there are so many different ways that I can actually construct this thing. Okay, if you didn't know that the standard model was the standard model, would you sit at your table and write that thing down? The probability is very low. You need to have experiments to tell you how to do that. And then you sort of, you know, bootstrap: you have one thing you constructed and then you build from there. Other questions?
So one of the important things that I think really got driven home, even though we did appreciate it at some level previously, is just the fact that these low-mass sectors are much more efficiently probed at low-energy intensity experiments. We've been as a community so focused for so long on going to higher and higher energies that we didn't stop and think about putting a lot of energy into going to very high-intensity beams, but at relatively low energies. And the reason for that is simply that once you're above the mass scale of the mediator, this production cross-section scales like the coupling constant to the fourth power over the energy squared. So the higher an energy you go, once you're above the mass scale of the mediator, the more you actually hurt yourself in the rate. So what you're looking for, if you're looking for a low-mass particle, is low-energy, very intense beams. And you prefer this beam energy to actually sit right on the mass of the mediating particle. And this is in fact why people devised beam dump experiments. So the idea was that you have some target just sitting there, I shoot a beam of electrons at it, and then I radiate something off, in this case a dark photon. And then I might be able to have this dark photon decay back to e+e-, in which case I can look for it in the visible mode. And that's in fact what they did: they said, okay, here are some existing constraints, this turns out to be supernovae and these are other beam dump constraints, let me try to design other experiments that might cover other regions of this parameter space. So this is the coupling of this field to the electron as a function of this particle's mass. But then you realize, okay, if you don't go to e+e-, which you may not, you can actually just produce the dark matter directly. And so now you have a way to cross-correlate between different types of experiments.
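The scaling argument above can be sketched with a toy s-channel rate using a Breit-Wigner-style propagator; the coupling, width, and normalization here are placeholders, not numbers from any experiment. It shows that production is enormous when the beam energy sits on the mediator mass and falls like 1/E^2 far above it.

```python
def toy_sigma(sqrt_s, m_v, g=1e-3, width_frac=0.01):
    """Toy s-channel production rate with a Breit-Wigner-style propagator:
    sigma ~ g^4 * s / ((s - m_V^2)^2 + m_V^2 * Gamma^2).  Placeholder values."""
    s = sqrt_s**2
    gamma = width_frac * m_v  # placeholder width
    return g**4 * s / ((s - m_v**2)**2 + (m_v * gamma)**2)

m_v = 0.1  # a 100 MeV mediator, in GeV
on_peak = toy_sigma(m_v, m_v)      # beam energy sitting right on the mediator mass
far_above = toy_sigma(100.0, m_v)  # a 100 GeV machine, far above the mediator
print(on_peak / far_above)         # the low-energy, on-resonance beam wins by a huge factor
```

Doubling the beam energy far above the resonance cuts the toy rate by a factor of four, which is the 1/E^2 behavior quoted in the lecture.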
So you have the dark matter mass, this dark photon mass, the coupling to the electron and the coupling over here to the dark matter particle. So again, it's extremely unlikely that this is actually the model of the universe, but let's just take it as some toy model and see what it tells us. Well, as soon as I have that simplified model, as we said before, I can have a process where the dark matter particle annihilates to this force mediator and self-scatters through this force mediator. And as you all know by now, dark matter self-interactions can change the shapes of halos. So we've talked about the fact that halos have been observed to be triaxial, and if you have dark matter self-interactions, they can actually make the halo more round. Another effect of dark matter self-interactions shows up in the place where the dark matter has the highest density, which is to say in the core of a cluster of galaxies, or of a galaxy, or of a dwarf galaxy. So if you plot the energy density of the dark matter as a function of the distance from the center of the halo, so this is for a Milky Way-type galaxy. In the Milky Way, we actually sit out here at 8 kpc, and the typical size of the dark matter halo is about 50 kiloparsecs. And in the absence of dark matter self-interactions, the dark matter profile follows this characteristic NFW profile. Now as you add in dark matter self-interactions, what that has a tendency to do is to transfer heat from the outside of the halo into the center. So a dark matter particle comes in and, as it passes the center of the galaxy, on occasion it interacts. And when it does that, it transfers some of its kinetic energy to a particle that's already trapped in the center, in the core. And what that does is actually puff out the core just a little bit. So that's what's shown here. These are cross-sections in units of cm^2 per GeV. And we'll come back to this in a little bit; note that those cross-sections are really big.
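A minimal sketch of why those numbers are "really big", using the Born-level self-scattering estimate sigma ~ 4 pi alpha_D^2 m_chi^2 / m_V^4 with the lecture's benchmark of a 10 MeV mediator and 10 GeV dark matter; the value of alpha_D is an illustrative assumption, and the formula is a standard estimate rather than anything from the slides.

```python
import math

HBARC2_CM2 = 3.894e-28    # 1 GeV^-2 in cm^2
GEV_IN_GRAMS = 1.783e-24  # 1 GeV/c^2 in grams

def sigma_over_m(alpha_d, m_chi_gev, m_v_gev):
    """Born-level self-scattering through a light mediator,
    sigma ~ 4*pi*alpha_D^2*m_chi^2/m_V^4, per unit dark matter mass."""
    sigma = 4.0 * math.pi * alpha_d**2 * m_chi_gev**2 / m_v_gev**4  # GeV^-2
    return sigma * HBARC2_CM2 / m_chi_gev  # cm^2 per GeV

# Lecture benchmark: 10 GeV dark matter, 10 MeV mediator; alpha_D is assumed.
som = sigma_over_m(1e-3, 10.0, 0.01)
print(som, "cm^2/GeV, i.e.", som / GEV_IN_GRAMS, "cm^2/g")
```

This lands at a few times 10^-24 cm^2 per GeV, equivalently a few cm^2 per gram, right at the scale quoted for modifying halo cores.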
These are nuclear-sized scattering cross-sections. We're talking about some 15 orders of magnitude above weak scattering cross-sections. And you can see that as you increase the scattering cross-section, you transfer more and more heat and you lower the central density of the core of the galaxy progressively. Now it's also been shown that if you do this too much, you actually get a catastrophic instability. But at least for some intermediate range, this appears to work quite well. So the things I want you to notice about this: one I already said, the scattering cross-sections are typically 10^-24 cm^2 per GeV, whereas weak cross-sections are 10^-39 cm^2. But this actually fits very well with these new models of dark matter. So if I just plug in, here I have a 10 MeV force mediator, a dark matter mass on the order of 10 GeV, and a self-coupling which is not too big. Notice this is not the coupling to the baryons, this is the self-coupling; it has to have a much smaller coupling to the baryons. Then you in fact do generate a cross-section which is large enough to impact the centers of galaxies and clusters of galaxies. So now you can take what you've learned from the self-interaction constraints and you can map them onto what you might see in a direct detection experiment. So you have some constraint on the mass of this force mediator, on the coupling to the dark matter, and on the coupling of this to e+e- from the intensity experiments. So each one of these is constrained in some way from these various different types of observations. And now you can ask, based on those expectations, how does that map onto what I might be able to see in a direct detection experiment. So before I do that, are there questions on this? Yes. Can asymmetric dark matter be a baryonic dark matter? Yeah.
I mean, is it charged, does it carry some baryon number, is that possible? So it's not visible baryon number; it's not charged under the visible SU(3). I could imagine constructing a dark matter sector where I have a hidden color and I charge the dark matter under that hidden color. But there again you have to be careful, because one thing that's different about baryons than the dark matter is that when I look at the Milky Way, or at any galaxy, the baryons are in a disc whereas the dark matter is spread all around in a halo. Now why is it that the baryons are in a disc? Well, the baryons are in a disc because there's the photon. I can radiate energy through a photon, and when I have a way of radiating energy that still conserves angular momentum, I create a disc. So I can have a similar kind of thing, I can have dark matter self-interactions, but I had better make sure that whatever those self-interactions are, I don't too frequently lose energy, because if I did then all my dark matter would also collapse into a disc. It could for example be the case that part of my dark matter would collapse into a disc, and in fact some people think that there's evidence for a dark matter disc, but I still need part of my dark matter to not have strong interactions with an effectively massless force mediator that could cause it to collapse down to a pancake. So, what is the upper bound on the mass of dark matter from experiments? On asymmetric dark matter: what is the upper bound on its mass, is there any constraint from experiments? So if one day we have a signal for dark matter that would enable us to determine the mass of at least one component of the dark matter, we should all celebrate for weeks. We don't have that yet, so a natural mass scale for asymmetric dark matter is what I was talking about on Wednesday, the several GeV number.
That said, I can also build models of dark matter where the dark matter mass lies elsewhere, and there are various ways that I can do that. I'm not going to go into them, but I can certainly give you a reference of a place to read about it. Okay, thank you. Other questions? So the point that I want to make here is that you can use these things that we know: you put together the constraints on dark matter self-interactions, you combine them with what we know from intensity experiments, and you can map them onto a direct detection parameter plane. Now, what I'm going to show you here I haven't justified at all, and the reason is that this mass range of the dark matter goes from 1 MeV to 1 GeV, and you'll recall what I've been showing you is that the constraints stop at 10 GeV. So I'm going to show you in just a little while why people think that you might actually be able to nail part of this parameter space. Yes? So the argument about the halo shapes only requires that there be some interaction that randomizes the momenta. It doesn't have to carry off any energy, it doesn't have to be a massless mediator; it can be a pretty massive force that mediates this and you would still get this effect. All you really need is some type of interaction. Self-interactions that don't cause you to radiate off energy make the halo more spherical. Self-interactions that would allow you to lose energy are going to cause you to create that pancake shape, which is what the baryons do. So you could do, yeah, you could do a multi-component dark matter model where one part of the dark matter has some interaction with a massless force, and that massless force you can radiate off, and it would cause you to collapse into a disc-like structure if it happened at a high enough rate.
And another part of the dark matter sector could interact through a force that you don't radiate off, so you don't lose energy, and that would just cause the halo to become more spherical and the center to become puffed out. So there's a potentially complicated interplay of constraints if you allow yourself to step even one step past the minimal model, which is why I think we need to go piecewise. Right, if you imagined trying to construct the standard model from scratch with no experimental input, life would be hard. Other questions? So the point that I wanted to make on this slide is that you can map these intensity experiments, these intensity experiments being these regions here. So these are beam dump experiments, and this is a supernova constraint that puts a constraint on the coupling of this force mediator to the electron. I can add in the halo shape constraints, and then all those things together I can actually map onto the direct detection plane. And when I do that, I cut out some region of the parameter space that is constrained by these experiments, but you can see that I have this other vast region of parameter space that is reachable by a direct detection experiment if I were able to get down to 1 MeV in dark matter mass, and I haven't justified to you yet why that might be possible. Now, the thing to keep in mind is that you should think of direct detection experiments as cosmic intensity experiments. What they do really well is to sit there and integrate. They sit there and integrate, and in particular they put very strong constraints on things that have light force mediators, and the reason for that is that the energy transfer is pretty low.
So if you compare a dark matter direct detection experiment to what you might be able to do at the LHC, you should think of these in some ways as being very complementary, in the sense that direct detection experiments are low energy but very high intensity, and the LHC is a very high energy experiment but comparatively low intensity. So why is it that I think you might be able to get down to 1 MeV? Well, the reason is that, as we said, I think it was yesterday, the typical energy threshold of these direct detection experiments is on the order of 5 to 10 keV, and the reason why it's on the order of 5 to 10 keV turns out to be nothing absolutely fundamental; it's just that's the way they optimized their detectors, because that's where theorists told them to go and look. So in particular, if you compute the recoil energy, which typically goes like the momentum transfer in the scattering squared over the mass of the nucleus, and you put in the momentum transfer typically being on the order of the dark matter mass times the velocity, the velocity being 10^-3, what you find for 100 MeV dark matter is that you end up depositing about an eV of energy if the recoil is off of a nucleus. However, note that you're taking a big hit in the recoil energy just from scattering off of a nucleus, and the reason for that is that a nucleus, to a light dark matter particle, is just like a wall. You come in, you interact, you bounce back. If you want to deposit more energy, you should actually look for a lighter target. And if you can instead extract all of the kinetic energy in the dark matter, which you can if the electron is used as the target, then you can get a kinetic energy for the recoiling target, in this case the electron, which is much higher, in the tens of eV.
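The kinematics just described can be checked in a few lines; the xenon mass below is just a representative heavy-nucleus choice.

```python
V_DM = 1e-3  # typical halo velocity in units of c

def nuclear_recoil_ev(m_chi_ev, m_nucleus_ev, v=V_DM):
    """Elastic nuclear recoil: E_R ~ q^2 / (2 m_N) with q ~ m_chi * v.
    To a light dark matter particle the nucleus is a wall."""
    q = m_chi_ev * v
    return q**2 / (2.0 * m_nucleus_ev)

def dm_kinetic_energy_ev(m_chi_ev, v=V_DM):
    """Full kinetic energy, (1/2) m_chi v^2: the most an inelastic process
    off a bound electron can in principle extract."""
    return 0.5 * m_chi_ev * v**2

m_chi = 100e6  # 100 MeV dark matter, in eV
m_xe = 122e9   # a xenon nucleus, roughly 122 GeV (representative choice)
print(nuclear_recoil_ev(m_chi, m_xe))  # a small fraction of an eV
print(dm_kinetic_energy_ev(m_chi))     # tens of eV
```

The electron target wins by a large factor: the full kinetic energy of a 100 MeV particle at v ~ 10^-3 c is tens of eV, while the elastic nuclear recoil is orders of magnitude smaller.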
So the idea then is that you might be able to use atomic ionization or excitation in order to actually extend this reach from 10 GeV all the way down here to an MeV, where some of these lighter dark matter models live. Now there have been a few ideas about how this might work, so this slide I took from Rouven Essig, and the one that is actually most actively being implemented is to use semiconductor targets. As we talked a little bit about yesterday, if you take a germanium or silicon semiconductor, these have a typical threshold which is on the order of an eV. So there's some type of a band gap here, and the idea is that if you can pull one of these valence electrons off into the conduction band, and the excitation energy for that is an eV, then you'll be able to get at dark matter as light as an MeV, and if your experiment is actually sensitive to energies that low, you might actually be able to detect dark matter all the way down to 1 MeV. And they actually tried implementing something related, now not in a semiconductor but in a xenon experiment. The idea in the xenon experiment was that you again try to pull an electron off of a xenon atom. And when you pull the electron off the xenon atom, it drifts through your detector and then you produce a whole bunch of other electrons. And so the size of the recoil of the dark matter off of the electron is actually proportional to the number of electrons that you produced. And so they went and they did this with XENON10 data. So 10 GeV is where the constraint was before; they were actually able to push all the way down to 10 MeV. Now, you can see the scattering cross-sections that they're able to constrain are not very strong. So this is a very young technology still, but nevertheless you're able to say something about this parameter space. Are there questions about this? So you've got yourself to 1 MeV.
Okay, or you've started to think about how you might get yourself to 1 MeV. Ultimately we'd like to get all the way down to where we know observationally dark matter could behave like a particle. And you recall that I said yesterday that from the form of the power spectrum on small scales, that limit was down around a keV. So we're talking about going from ten GeV all the way to 1 MeV, but where we'd really like to ultimately get is all the way down to the warm dark matter limit of 1 keV. So how do you do that? Well, 1 keV dark matter has a recoil energy which is a milli-eV. So how are you going to reach milli-eV energies? Well, we already know that semiconductors have band gaps which are an eV. Okay, so that is completely out of play; you simply cannot ionize an electron in a semiconductor. On the other hand, superconductors have sensitivity to energies that are as low as a milli-eV. And the reason for that is that, similar to semiconductors, you have valence electrons, but the valence electrons in superconductors are actually paired into Cooper pairs. This is BCS theory. And there are two things you need to know about Cooper pairs. First of all, the binding energy of Cooper pairs in a superconductor is a milli-eV. And the second thing you need to know is that once the energy in the scattering is above a milli-eV, you can treat those electrons as just being in a Fermi degenerate metal. So I can really take the distribution of those electrons as what you know for a degenerate Fermi gas, where you have all of these electrons trapped down here below the Fermi surface, which usually sits at a few eV. And then if I have enough energy to pull the electrons out of the Fermi sea, I will actually break Cooper pairs. I will disrupt the superconductor and I can potentially detect those. Now, what do I pay? Can anyone guess what I pay?
If I'm interested in getting down to keV dark matter masses, what is it that I pay? We're not going to know until we actually start building these detectors whether they have a huge background. Potentially, so we can talk about backgrounds; in particular you have to get your noise down to really low energy. But a milli-eV is such low energy that you don't really have to worry about the standard kinds of nuclear backgrounds. Nuclear backgrounds are actually much worse at keV and higher energies, keV to MeV; those are typical nuclear energies. Here you're well below that. So it's something more fundamental that we're paying here. I said that this Fermi surface resides at several eV, and the typical energy in a scattering for 1 keV dark matter particles is a milli-eV. So the fundamental thing that I pay is Pauli blocking. If I take a particle down here in the Fermi sea, I can't actually scatter it into another occupied state in the Fermi sea. I actually have to pay the full price of pulling it out of the Fermi sea, which is to say, in this case, if I pull from deep in the Fermi sea, I actually pay a full eV of energy to do that. So if I'm really interested in keV dark matter, I actually only have kinematically accessible to me those electrons that are within a milli-eV of the Fermi surface, more or less. So I pay a fairly steep kinematic factor for doing that. The basic design concept, though, is that you have a superconducting target. You pull an electron out of the Fermi sea by depositing the appropriate amount of energy. That thing random walks inside; it breaks up other Cooper pairs as it does so, and it keeps banging around. So when you break a Cooper pair, you create a quasiparticle. A quasiparticle is an excitation that just random walks; they're actually quite long-lived, and they keep random walking until they hit your detector.
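As a quick aside, the energetics and the Pauli-blocking price described above can be put into numbers; the Fermi energy of a few eV is the representative value from the lecture, and the accessible-fraction estimate is a deliberately crude assumption.

```python
V_DM = 1e-3  # halo velocity in units of c

def available_energy_ev(m_chi_ev, v=V_DM):
    """Kinetic energy (1/2) m_chi v^2 available in a scattering, in eV."""
    return 0.5 * m_chi_ev * v**2

def unblocked_fraction(delta_e_ev, e_fermi_ev):
    """Crude estimate of the Pauli-blocking price: only electrons within
    ~delta_e of the Fermi surface are kinematically accessible."""
    return min(1.0, delta_e_ev / e_fermi_ev)

e_avail = available_energy_ev(1e3)        # 1 keV dark matter
print(e_avail * 1e3, "meV available")     # about half a milli-eV
print(unblocked_fraction(e_avail, 5.0))   # steep suppression for E_F ~ 5 eV
```

For 1 keV dark matter only roughly one electron in ten thousand sits close enough to the Fermi surface to participate, which is the "fairly steep kinematic factor" in question.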
This is your detector on the surface of the superconductor, and then you just absorb that quasiparticle, and the heat from that quasiparticle gets absorbed by something that can detect really small changes in heat. And one thing that can do that is a TES, a transition-edge sensor. TESs are actually used for detecting CMB photons, which are also very low energy. And so you can actually just work out what is the reach in dark matter parameter space that I might be able to get for dark matter all the way down to 1 keV. That's what's here. And so here we have some constraint from a superconductor that has a 1 milli-eV threshold, and here are various models of dark matter, satisfying all the constraints, that you might be able to reach. And you can see that you can get down to a fairly small scattering cross-section with a kilogram-year of exposure. So that's all I'm going to say about direct detection of dark matter. One of the things that I think is going to be happening quite systematically over the next decade is that we're going to push through the parameter space where you might expect to see a WIMP, so we'll get down to the neutrino background. At the same time, we're going to see people developing technologies to go to lower masses. So first we'll push down to 100 MeV, then down to 10 MeV, and then we'll start working on detector technologies that can go to even lower masses. And the thing that's remarkable about this is that just five years ago nobody thought that it was possible to even go below 10 GeV. The reason why I'm saying this is that there are many directions to push what we think is possible right now with new technologies, and you guys are in the perfect place to be able to work on it. All right, are there other questions about these direct detection experiments? If not, I'm going to move on to collider searches for dark matter. So now we're going to go on to high energies. How do you search for dark matter?
So now we're going back to massive dark matter. I said before that if your dark matter is light, the LHC is likely not the best way to look for it, unless your mediating particle, whatever is producing the dark matter from the standard model, also happens to be heavy. If your mediating particle is light, forget about it at the LHC; it's not the right experiment. If you have some other weak-scale dark matter candidate, then the LHC is your machine. And in particular, what the LHC does well is to produce colored particles. It's not actually such a great dark matter machine; it's a great colored particle factory. So you can look for dark matter by producing parent particles which then cascade down to the dark matter particle and produce other stuff. Or you can just produce the dark matter directly and then look for some radiation off the initial state. It could be a jet, could be a photon, could be a Z boson. Now, one thing that people have been focused on in the last couple of years is to essentially ignore whatever the mediating particle is. I'm going to say: I don't know anything about whatever interaction it happens to be that produces the dark matter. Instead, what I'm going to do is just focus on the dark matter, and I'm going to look for that missing energy by looking for the radiation off the initial state. Now, what is the reason why this is nice? Well, the reason it's nice is twofold. First of all, it allows me to simplify things a lot, in the sense that now all I need to do is to write down operators. So if I take some blob like this, there are various different ways that I can UV complete the blob, and whatever is sitting in the blob could be some type of s-channel, t-channel, or u-channel interaction. But of course, what you're assuming here is that I can actually ignore whatever is inside the blob. And in particular, I have to be able to integrate out whatever particle it is that mediates this interaction.
An example of this, as we talked about on Wednesday in the context of asymmetric dark matter, is Fermi theory. At low energies, at nuclear energies, I'm perfectly justified in ignoring the W and Z bosons and treating that as a four-Fermi interaction. So it works fine for nuclear energies around the MeV scale. On the other hand, if I decided to ignore the W and Z bosons at LEP, I'm going to mess things up; I'm going to mess up the constraints, or what I imagine the theory to look like. Now, the reason why I think people liked the blob approach to doing dark matter at the LHC is that it allows one to make a one-to-one correspondence between dark matter physics at the LHC, direct detection, and indirect detection. And the reason for that is that if I take the blob approach to doing dark matter physics, then in one direction I have standard model particles producing dark matter; okay, you need some radiation off the initial state, but nevertheless you have this in one direction. In this direction, I have dark matter scattering off of a nucleon. And in this direction, I have dark matter annihilating to the standard model, which is freeze-out. So just by rotating the diagram by 90 degrees, I can make a direct connection between each of these processes. And that's the reason why it seems particularly powerful. And so what it allows you to do is to map this constraint from the LHC onto a direct detection scattering cross-section. There's a one-to-one mapping that one can make in the limit that I can ignore whatever the mediating particle is; I can just treat it as a higher-dimension operator. So in particular, doing this for the LHC, I can compare what the LHC reach would be against the xenon reach. And you can see that, in particular, it appears that direct detection experiments, at low masses in particular, are quite useless.
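The Fermi-theory analogy can be made concrete: compare the full s-channel propagator 1/(s - M^2) against the contact approximation -1/M^2. At nuclear energies the two agree to high precision; at LEP-type energies they do not. The masses and energies here are illustrative stand-ins.

```python
def full_propagator(s, m2):
    """Full s-channel propagator, 1 / (s - M^2)."""
    return 1.0 / (s - m2)

def contact_approx(m2):
    """The 'blob': integrate out the mediator, leaving -1/M^2."""
    return -1.0 / m2

M2 = 80.4**2  # roughly the W mass squared, in GeV^2 (illustrative)

# Nuclear energies, sqrt(s) ~ 10 MeV: the contact term is an excellent stand-in.
low = abs(full_propagator(0.01**2, M2) / contact_approx(M2) - 1.0)
# LEP-type energies, sqrt(s) ~ 200 GeV: the contact term is badly wrong.
high = abs(full_propagator(200.0**2, M2) / contact_approx(M2) - 1.0)
print(low, high)
```

The fractional error of the contact approximation is roughly s/M^2, which is tiny at nuclear energies and order one or worse once the collider energy passes the mediator mass.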
The lesson to be learned here, though, is that it's really important that you know what the limitations of a theory are. This is the analog of pretending that you can ignore the W and Z bosons at LEP. Notably, you're assuming that whatever the mass of the mediating particle is, it's bigger than whatever the center-of-mass energy of the collider is. Once this fails to be true, what happens is that your production rate actually dramatically drops. So we said before that once you get above the threshold of a particle, the production cross-section goes like one over the energy squared. And so the result is that the constraint on the particle actually becomes much weaker the lighter the mediating particle is. So here, for example, let's suppose I had, again, a simple toy model: I have protons going through a Z prime to dark matter. The Z prime can mediate scattering and it can mediate production at the LHC. And then I can look at what happens at the LHC when I radiate a jet off the initial state. So now you can see that the LHC reach gets weaker and weaker as you go over to lighter and lighter Z prime masses. And this is the point that I was trying to make earlier. The LHC is a good machine for massive mediators; it's a bad machine for light mediators. For light mediators, what you really want is an intensity experiment like a direct detection experiment. So you might ask the question: if I use the blob approach to doing dark matter physics at the LHC, is there a regime of validity for this at all? And it turns out the answer appears in many cases to be: not really. So let's look at that; that's what is shown here. So this top plot shows a constraint on a scale, and this scale is defined by lambda squared being the mass of this mediating particle squared over g squared, whatever the coupling constant is. Or more strictly speaking, if I allow the coupling to the quarks and to the dark matter to be different, it's this quantity.
And so now I look at a monojet search and I plot the constraint on this quantity as a function of the mediator mass. And what this shows, this big peak here, is the particle being produced on resonance. And then once it starts to flatten out, that's where you're actually entering the regime of the effective field theory. So the reason why I have this here is that you can see you start to enter the regime of the effective field theory when the mediator mass is something like 3 TeV. So your mediator mass has to be heavier than 3 TeV in order for shrinking this interaction to a point to be a reasonable thing to do. So now let's take that one step further and ask what it is that I can actually constrain. So this G sub chi is actually one over lambda squared, and this is what can actually be excluded. So above here is what can actually be excluded by a search which is just proton-proton going to dark matter, dark matter, with a jet radiated off the initial state. And in green is what can be constrained by, now you notice if I can go to dark matter, I can also go from my proton initial state just directly back to jets, which is to say I have a direct constraint on this mediator just from dijet constraints. Now I take this, plus the fact that the coupling, if I'm going to satisfy perturbativity, has to be smaller than 4 pi; otherwise my theory just doesn't make sense. Now we put those things together: this is what's excluded by these searches, and this is what's excluded by those searches. And then I say, okay, my monojet searches are useful, they're constraining, if my mediator mass is lighter than 1 TeV. And now you should notice a contradiction, because in order for this framework to work, I needed to be in the regime of validity of the effective field theory, which means that my mediator had to be heavier than 3 TeV. And so now you see these two things are not consistent with each other.
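The contradiction can be stated as a one-line consistency check; the 1 TeV and 3 TeV numbers are the ones quoted above for this particular monojet analysis, and Lambda = M / sqrt(g_q g_chi) is the standard definition of the contact-operator scale.

```python
import math

def lambda_scale(m_mediator_tev, g_q, g_chi):
    """Contact-operator scale: Lambda = M / sqrt(g_q * g_chi)."""
    return m_mediator_tev / math.sqrt(g_q * g_chi)

EFT_VALID_ABOVE_TEV = 3.0       # contact picture only sets in above this mass
MONOJET_USEFUL_BELOW_TEV = 1.0  # monojet only adds to dijet + perturbativity below this

# Scan mediator masses from 0.1 to 10 TeV and look for a mass that satisfies
# both requirements at once:
masses = [0.1 * i for i in range(1, 101)]
consistent = [m for m in masses
              if m < MONOJET_USEFUL_BELOW_TEV and m > EFT_VALID_ABOVE_TEV]
print(consistent)  # empty: the two requirements contradict each other
```

No mediator mass can be simultaneously light enough for the monojet constraint to matter and heavy enough for the effective field theory to be valid, which is exactly the contradiction spelled out in the lecture.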
So that's telling you that you should just not be using the effective field theory to constrain dark matter through this kind of process at the LHC. Are there questions about this reasoning? Fortunately, in the last year or so, I think the news is propagating that we need to be looking at dark matter at the LHC in terms of simplified models and not in terms of effective field theories, and I wanted to go step by step through the reasoning for why this is the case. It looks like I'm running out of time, so I'm actually going to stop here and just say that there are other cases one can look at where one comes to a similar conclusion at the LHC, and just summarize. So I started by saying on Wednesday that we have some pretty good ideas of what might be in the dark matter sector. Some of them are extremely well motivated from a top-down point of view; supersymmetry and axions are two of them. On the other hand, 30 years of looking for these particles has not yet yielded an observation. In some cases, there are still big regions of parameter space that we need to probe, as is the case for axions. On the other hand, what I tried to argue is that if these are the only regions of parameter space that we're focused on, we're missing vast regions of model parameter space. And in fact, what we should be doing is taking a bottom-up point of view and actually trying to develop new experiments and new theories to go after as broad a span of different models as possible. And so what we're really doing is developing new ideas for the express purpose of having new search strategies. And this is an opportunity for all of you. Perhaps the LHC in six months or a year will have discovered new particles, in which case that obviously becomes the primary priority. On the other hand, if we don't see anything at the LHC, this is not the end of the search for new physics. We know that dark matter exists, and we know that it is physics beyond the standard model.
We've tried a couple of ideas. They were great ideas, but there are other places we can look. And fortunately, this is why I'm showing hammers: we have all these different tools. So we're not at the end of the road if the LHC doesn't see anything. Our cosmological probes in many ways are only going to get better over the next decade, and what we need to do is to understand the theory behind them well enough, as well as to develop new techniques, so that we can approach this from multiple different fronts. So I went a little bit long, but I hope that we still have time for a few questions. Absolutely. Thank you.