Okay, testing again, can people at the back hear me okay? Very good. Welcome back, everyone. I hope you had a chance to get caffeinated in the break. As this morning, please keep asking lots of questions during the lecture; let me know whenever you have one. So in this afternoon session, what I want to talk about is some theoretical models for dark matter. We finished the last session with a question about the theoretical basis of dark energy as possible vacuum energy. That's a difficult problem, as I'm sure you'll hear more about in the dark energy lectures. For dark matter, the problem is not a shortage of possible theoretical models; quite the opposite, in fact, as we'll see shortly. But before we go on to theoretical prospects for dark matter, I want to finish up what I was talking about this morning: the gravitational probes of dark matter properties. The last topic from this morning was self-interacting dark matter and cluster mergers. We said earlier that the Bullet Cluster tells us dark matter should be approximately collisionless, because the mass is not in the same place as the collisional component. In our simple picture, the gas is collisional, the stars are collisionless, and we're asking which one dark matter behaves more like: the gas or the stars? But as I mentioned briefly this morning, that's not necessarily a yes-or-no question. The answer doesn't need to be either the gas or the stars. If dark matter has some small self-interaction, weaker than a standard model interaction, then we might see a dark matter halo offset from both the stars and the gas using lensing.
And the constraints on the scattering cross-section from the Bullet Cluster are actually pretty close to the cross-sections you would need to significantly modify the interiors of dwarfs. So if self-interacting dark matter is responsible for these small-scale differences from CDM, we might be able to see its impact in clusters. Now, the difficulties: if you want to do this kind of analysis, you are looking for non-equilibrium systems where the dark matter and the baryons can actually be separated from each other. This means a merger where the various components have not yet relaxed into a common gravitational potential. Those systems are not common. Systems that are close enough that we can get a good look at them, aligned so that we get a good view, and caught in that stage are quite rare. Then there's the point that if you've got a complicated colliding system, getting a good gravitational lens image of that system and then reconstructing its mass distribution can be highly non-trivial. The third worry is that even if you think you can do it, it's not enough just to get a best-fit answer; to say anything meaningful about the self-interaction cross-section, you need to know what your error bars are. And it's not really well understood yet what all the systematics and possible backgrounds in this analysis could be, although there's been some work over the last year or so. Just as one example, it's not always trivial to correctly work out which object a particular lensed image belongs to if you have multiple galaxies in the system, and if you get that wrong, you can really mess up where you think your lens is.
There have been some recent studies basically saying that if you oversimplify the dark matter and gas distributions, say you approximate them as spherical blobs when they're not actually spherical blobs, then that can change the constraints quite a bit. There's a paper by Robertson arguing that the nominal constraints on the cross-section from the Bullet Cluster are probably considerably stronger than the true values warrant. On the other hand, this kind of mismodeling could give a fake appearance of an offset and lead you to think dark matter was self-interacting when really it wasn't. So keep all those caveats in mind with what I'm about to show you. Because nonetheless, there was a paper last year that looked at the galaxy cluster system Abell 3827 and found that there did appear to be an offset between one of these merging dark matter halos and its stars. That's the first key point. The second key point is basically what I just said on the last page: given the systematics, it's difficult to definitively say that this isn't some astrophysics, that it really is dark matter physics. But if you did interpret it as dark matter physics, then it would give you not a bound but an estimate of the self-interaction cross-section. The number they gave is about two times ten to the minus four centimeters squared per gram. If you have a good memory, or were taking notes even at the very end, you might remember that I said this morning the constraints from the Bullet Cluster are about two centimeters squared per gram, so this seems like a really small cross-section. But setting that aside for the moment, let's just ask about what they saw, as opposed to its interpretation in terms of physics. This is the system they looked at: four elliptical galaxies in a cluster, which they think underwent several recent, nearly simultaneous mergers.
They mapped out the mass distribution using gravitational lensing, and the offset they found is about 1.6 kiloparsecs. There's a theoretical model, in a paper by Sepadal, that claims to reproduce this behavior; it involves a self-interacting dark matter component that's about 20% of the total. Here are some more images from their paper. If you look at the system in Hubble, this is how it appears. The plot on the right is the map of the total mass; you can see it's a bit messy. This is what's left after you subtract off a map of smooth cluster-sized halos; it's meant to show just the additional fluctuations on top of the cluster, and the black points are where the actual galaxies are. At least to my knowledge, this is just what they see. It doesn't have a lot of dark matter theory in it; it's a statement about what they see based on the gravitational lens reconstruction. Now, a subsequent paper pointed out a subtlety in how the original authors converted this offset to a cross-section. What they did was estimate the drag force on dark matter from self-interaction, say that this drag force will slow the subhalo's infall, and then look at the difference in accelerations, assuming the dark matter and the stars started off at the same point, using classical mechanics to infer the difference in distance traveled as a function of the time since the merger. These later authors, Kahlhoefer et al., pointed out that there's a problem with this calculation: it doesn't include the fact that the stars get pulled along gravitationally with the dark matter as well. You need to take that into account too.
When you take that into account, they find that the cross-section you need, instead of being two times ten to the minus four centimeters squared per gram, is much larger: to separate the dark matter from the stars you need something more like two centimeters squared per gram. So that's an example of systematics: if you forget this piece of the modeling, it can change the cross-section you think you're looking at by four orders of magnitude. Okay, so that's all I'm going to say about that signal. You should absolutely keep in mind all the caveats that I've stated. That said, it's still very interesting that these merging cluster systems may be able to actually probe dark matter self-interaction, potentially detecting it at a level that hasn't been probed before, and we might even have a possible first signal of it in Abell 3827. So, to summarize lecture one plus epsilon, the last few minutes: the distribution and the gravitational effects of dark matter can be a really powerful probe of dark matter properties and interactions. They tell us quite a lot more than just that dark matter exists. Everything I've told you so far is essentially independent of any interaction between dark matter and the known particles; we never had to worry about those kinds of model-dependent interactions. We already have direct observational tests of any dark matter physics that modifies the low end of the matter power spectrum, meaning small-scale dark matter halos; any dark matter physics that, as we just talked about, produces a drag force or a similar effect on dark matter in merging clusters; and any dark matter physics that would modify galactic-scale halos in a region where you can use stellar orbits or gravitational lensing to probe the dark matter distribution.
And where I said "next time" here, by which I mean this time: what I haven't talked about much yet is the overall cosmological abundance of dark matter, which is also a very important observable that puts significant constraints on particle physics models of dark matter. Now, in all of these studies it is important to understand both the systematic uncertainties and the guaranteed effects due to ordinary baryonic matter. This is a major research direction, and related to it are these possible hints that dark matter may not be entirely collisionless and cold. This is a field where more data should be coming in over the next few years. So now let me move on from everything we know about dark matter purely through its gravitational effects and start with the general properties I've already mentioned: we need the dark matter to be stable, and we need it to have the right relic density. What I want to talk about in the next hour or so is the two main classes of models that get the most attention in the dark matter community. One is weakly interacting massive particles, or WIMPs; the other is axions. These are by no means all-encompassing; there are many models that don't fall into either category. But I'm going to talk through the cosmology of these models in a bit of detail, first because many models are perturbations on these two, and second because it really illustrates the range of possibilities that can give rise to the same observables. Okay, so just to recap, why do we need new physics? We're looking for something beyond the standard model because if we look at the standard model, photons, leptons, hadrons, and W bosons are charged or light; they shine too brightly to be dark matter. Z and Higgs bosons are neutral but short-lived.
The rest of our options in the standard model are the neutrinos; they're neutral and stable, but they're too light; they would be hot dark matter. Once you go beyond that and say, all right, theorists, give me some possibilities for what dark matter could be beyond the standard model, you get a picture like this. This is taken from a talk by Tim Tait as part of the Snowmass process in the US in 2013, where we were reviewing high energy physics and where it was going to go next. It's an approximate, and probably not even exhaustive, scatter plot of the ideas theorists have come up with for the true nature of dark matter. Some of these ideas are related to fundamental high-scale models of physics beyond the standard model, like supersymmetry or extra dimensions; some to the strong CP problem, which I'll talk about a little later; some to additional neutrinos in the neutrino sector. The thing is, all of these possibilities are at present consistent with the data. They can all give rise to the gravitational effects we talked about last time; they all look like cold dark matter; they can all generate the right relic density. To really disentangle these theories and determine if any one of them is right, and at most one of the options on this plot is true, and it may be none, we're going to need to look for interactions that are not just gravitational. Question? No, this is not a constraint plot. This is a plot of ideas that theorists have come up with which could potentially be the dark matter. For the vast majority of these theories there are constraints that rule out significant amounts of parameter space, and there are also parts of parameter space that are not ruled out.
In the next two lectures, tomorrow and Wednesday, I'm going to talk a fair bit about constraints and searches and what you can do in terms of digging into this space, but this figure as it stands is just to make the point: our problem with dark matter is not that we don't have theoretical ideas for what it could be. We have plenty of them. The problem is understanding how to confront those ideas with experiment and getting experimental insight into which of these, if any, truly describes the universe. Again, just to reiterate, this huge range of possibilities also spans a huge range of masses. I told you in the last lecture that if dark matter were below a few keV, it would be ruled out because it would be too hot. That's not entirely true, and I'll show you one of the exceptions to that general statement, the axion, in the next hour or so. This is a plot that spans masses from ten to the minus 33 GeV up to ten to the 18 GeV. There are some theoretical prejudices to be on one part or another of this plot, but there are well-motivated dark matter candidates with masses far below an eV, far lighter than any particle in the standard model except the photon, and there are well-motivated candidates that are far heavier, closer to the Planck mass, even before you count things like primordial black holes, which could be the mass of the moon. Okay, so what guidelines do we have, given this huge space of theoretical possibilities? What conditions do these models have to satisfy? Well, first, in any such model we need an explanation for why dark matter is stable. Especially if we're thinking about a classic dark matter scenario where it's heavy enough to be cold and non-relativistic in the early universe, you would naively think there are other standard model particles it could decay into; even if it's light, it could presumably decay into photons. So why should it be stable?
This sets some pretty stringent limits on how dark matter can interact with the standard model. The easiest route, and one that features in many particle physics models of dark matter, is that there is some symmetry that prevents dark matter from decaying. The simplest example, which we'll see shortly, is a new kind of parity under which the dark matter is charged, forcing every interaction to involve an even number of dark matter particles. A decay of dark matter to the standard model would look like this, where the purple blob represents unknown physics. We don't want that to happen, or if it happens, it has to happen on a time scale longer than the age of the universe, because there does still appear to be dark matter around today. So what we often want to do is forbid, or make very unlikely, processes like this in favor of processes like this. That's the first thing: any model has to guarantee stability, which can be a bit unnatural to begin with. Second, we need to get the dark matter abundance right. I said this in the last talk, and Barbara said this in her talks: we have a pretty good measurement of the dark matter abundance from the cosmic microwave background. It's this value here, where omega c is the fraction of the critical density attributed to dark matter and h is the Hubble parameter. Any dark matter model has to explain how that density comes about. Now, one general way of classifying dark matter models, and there are many, is to ask: does this relic density come from a thermal or a non-thermal source? Thermal would mean that at some point the dark matter was in thermal equilibrium with all the other particles; it talked to the other particles.
Now, if that's the case, and if it happened while the dark matter was highly relativistic, then at that time there should have been about as many dark matter particles as there were photons. It's in equilibrium with the photons, it's a relativistic particle, they both get populated according to their degrees of freedom, so they have comparable abundances. In the present-day universe, while there's a lot more energy density in dark matter than there is in photons, if dark matter has a mass comparable to the baryonic matter, for example, then there's about one dark matter particle for every ten to the nine or ten to the ten photons. Just as for protons, the numerical abundance relative to photons is very, very small. So in this kind of thermal scenario, what we need to explain is not where the dark matter came from, but where it went. If it was in thermal equilibrium in the early universe, how did we get rid of all but one part in ten to the nine or ten to the ten of the dark matter particles? That's one class of scenarios: thermal scenarios, where the problem is where did it go. The other possibility is that this is non-thermal: somehow the dark matter was never in thermal equilibrium with the standard model; it was off in its own sector somewhere. Then your question is, how was the dark matter produced? Was it produced as some initial condition at the end of inflation? In some phase transition? By the decays of some particle that was originally in thermal equilibrium with the standard model? I'm not going to get to existence proofs of all of these in this session; I can happily point you to reviews or papers if you're interested in any in particular, but I will give you one example of the thermal case and one example of the non-thermal case.
So let's begin with the thermal case and talk about weakly interacting massive particles, every physicist's favorite WIMPs. The starting point for the WIMP scenario that I like to start from is the question of where the relic density comes from in the thermal scenario. Let's start with two simple assumptions. First, suppose the dark matter can annihilate with itself to produce standard model particles, so that this process is allowed: two dark matter particles collide with each other, some unknown physics occurs, and standard model particles are produced. These could be quarks or leptons or gauge bosons. Now, for searches for dark matter in the present day, we're going to care about the extra step in which those standard model particles eventually decay into the stable standard model particles: photons, neutrinos, protons, antiprotons, electrons, and positrons. But for our purposes in the early universe, when everything is a very hot bath and all time scales are very short, we're mostly just interested in this side of the calculation. Second, let's assume that at some point the dark matter was kept in thermal equilibrium with the standard model by this annihilation process and its reverse. Those are all the assumptions you need; you don't need to know much more about the particle physics. So then we have processes, and I'm going to use chi to denote the dark matter here, where two chis form two standard model particles. When the universe is very hot and the temperature is much higher than the dark matter mass, then when two of these standard model particles collide, they can likewise produce two dark matter particles through the inverse process.
Just as if we collided two very high energy particles together at a collider today, we would be able to produce very heavy objects, possibly including the dark matter. So this is thermal equilibrium: everything's relativistic, everything's hot, there's plenty of energy to produce everything, including the dark matter. But since we live in an expanding universe, the temperature will be falling, and eventually it will fall below the dark matter mass, assuming the dark matter is heavier than at least some of the standard model particles and also heavier than the temperature of the CMB today, which is usually a pretty good assumption. When that happens, the forward process will still be allowed, dark matter particles will be able to produce standard model particles, but the reverse process will not be possible because there's just not enough energy; it's kinematically forbidden. When this happens, the abundance of the dark matter starts to fall exponentially, because it's being depleted and not replenished. So this is a plot of the comoving dark matter density, the dark matter density per expanding volume, with a time coordinate on the x-axis. You see that initially there's a stable abundance of dark matter, and then it starts to drop off exponentially. This is just the Maxwell-Boltzmann distribution; it's got that e to the minus m over T factor. However, eventually the dark matter will be rare enough that the time scale for two dark matter particles to find each other and annihilate becomes comparable to a Hubble time. It's no longer possible for annihilation to maintain thermal equilibrium with the standard model. At that point, the density of dark matter approaches some asymptotic value. This is called freezing out, and the freeze-out abundance is set by the annihilation rate.
If the annihilation rate is very high, then the dark matter will stay in equilibrium with the standard model for longer; it will stay on this exponentially falling curve for longer, and its late-time plateau will be lower. If the dark matter is poor at annihilating, then it will decouple very early, annihilation will become inefficient relative to the expansion very early, and it will end up at a high plateau. So that means that by measuring the value of this late-time plateau, which we can do with CMB experiments, we can infer the dark matter annihilation cross-section. I'm going to take you through this calculation in a little more depth in a moment, but I want to give you the general picture first. We find that the cross-section you need, where sigma is the cross-section and the rate of annihilation is proportional to sigma times the relative velocity of the particles, is roughly two to three times ten to the minus 26 centimeters cubed per second. If you translate this into particle physicist units and ask what the natural scale of this cross-section is, it's about the cross-section you'd expect to be associated with particles around 100 GeV. This is the so-called WIMP miracle: this argument, which depends only on thermal equilibrium occurring in the early universe and on our measurement of the dark matter density from Planck, happens to give us back a particle physics scale which is very close to the weak scale, the scale of the W and Z and the Higgs masses, which we might have thought was an interesting scale for other reasons. Okay, so that's the general picture. Let's outline how you actually do this calculation, because you may want to do it at some point. So what do we need to know?
Well, the annihilation rate for identical particles, the number of annihilations per unit time per unit volume, is given by the number density of the particle squared, times the annihilation cross-section, divided by two. The divided-by-two is because they're identical particles: if I have n particles and ask how many pairs I can form, it's n times n minus one over two. Then we have the Boltzmann equation, which in this case describes the evolution of the abundance of this particle in an expanding universe. This H is the Hubble parameter that Barbara talked about in her last talk, and the reason it's there is to account for the expansion of the universe. If we sent this cross-section to zero, so there were no production or destruction of dark matter through annihilation, the number density would still change as a function of time because the universe is expanding: we're treating the dark matter as matter, its total number is conserved, and its number density goes as one over a cubed. When we do have an annihilation term, it tends to drive this number density n towards the equilibrium number density. The equilibrium number density is given approximately by the Boltzmann distribution; really it's given by the Fermi-Dirac or Bose-Einstein distribution, but in the non-relativistic regime we don't care too much about that. The last ingredient we need in order to solve this differential equation is the relation between temperature and time: we've got a temperature dependence here and a time dependence there, so we need to know how the temperature evolves with time.
We're doing this in the epoch of radiation domination, early in the universe. Today the energy density in the CMB is a tiny, tiny sliver of the overall energy density, but because it scales like one over the scale factor to the fourth, in the early universe it was dominant. So if we assume that freeze-out happens during radiation domination, we can immediately write down the relationship between the temperature and the time. Again, for a precision answer you should solve this numerically: plug the Boltzmann equation into Mathematica or whatever your favorite solver is, or if you have a specific model, use one of the several public tools that do it correctly. But we can estimate the important quantities analytically. One criterion is to say that freeze-out, this plateau, occurs when the time scale for the expansion of the universe, set by the Hubble parameter, is close to the time scale for collisions between particles. That criterion says that H, the Hubble parameter, is comparable to n times sigma v: if I have a dark matter particle floating through the universe, its time scale to hit something else is one over n sigma v. Up to the freeze-out point, the number density tracks the equilibrium number density pretty closely. So, substituting the equilibrium number density into this equation, we get this expression. We can define a useful parameter x equal to m over T, and we know how H scales with T, which we just showed on the previous page. So we can write H at some temperature T in terms of H evaluated when the temperature is equal to m, and on the right-hand side I'm just rewriting this in terms of x rather than T.
Just because x is a nice, simple dimensionless parameter. Now, this is a transcendental equation. As a first approximation, you can say that it will be satisfied when x is approximately equal to the log of this constant piece. That gives us an expression for x at freeze-out, the ratio of m over T at freeze-out, with this behavior. It's worth noting at this point that this is a log. So if I pick some set of parameters and work out x at freeze-out, and then I change the mass, or the annihilation cross-section, or the Hubble rate, x will only change logarithmically. If I change what's inside here by a factor of e to the ten, that just corresponds to adding ten to x. So x is not going to be a really huge number; it's a log. Now, we could put in numbers for x right away, but let's go a little further first and ask: what's the abundance at freeze-out? We know the abundance at freeze-out is given by this equilibrium expression, and we know, because we demanded it, that it's closely related to H at freeze-out. So we can write down this expression, where by x sub f I mean x evaluated at the freeze-out point where the equations we wrote down previously are satisfied. That's the abundance at freeze-out. I told you before that we want roughly one dark matter particle for every ten to the nine or ten to the ten photons, if the dark matter is around the same mass as the baryons. So we can write down the number density of photons at freeze-out; there's a prefactor here which I didn't write down because you don't need it for a first approximation. The number density of photons scales like T cubed, and that gives us the estimate that the number of these particles divided by the number of photons in a comoving volume scales like this quantity.
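The "x_f is a log" statement can be made concrete with a tiny fixed-point iteration. Writing the freeze-out condition schematically as x_f = ln C − ½ ln x_f, where C stands for the whole constant combination of mass, cross-section, and Planck mass inside the log (its values below are arbitrary illustrations), a few iterations converge, and boosting C by e¹⁰ shifts x_f by only about ten:

```python
# Sketch: solve the schematic freeze-out condition x = ln(C) - 0.5*ln(x) by iteration.
# C stands in for the combination of mass, cross-section, and M_Planck inside the log;
# the values used here are arbitrary illustrations.
import math

def x_freezeout(C, x0=20.0, steps=50):
    x = x0
    for _ in range(steps):
        x = math.log(C) - 0.5 * math.log(x)  # fixed-point iteration; converges fast
    return x

x1 = x_freezeout(1e9)                 # some constant combination
x2 = x_freezeout(1e9 * math.e**10)    # make the inside of the log e^10 bigger
print(f"x_f = {x1:.2f} -> {x2:.2f} (shift ~ {x2 - x1:.2f})")
# A factor e^10 change inside the log moves x_f by only about 10 (slightly less,
# because of the -0.5*ln(x) correction): x_f is never a huge number.
```

This is why the estimate is so forgiving: even large errors in the prefactors only nudge x_f additively.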
Now, when I said that if the dark matter is around the mass scale of the baryons you want one dark matter particle for every ten to the nine or ten to the ten photons, that had an "if" in it. What we actually constrain is not the dark matter number density but the dark matter mass density; that's what we measure from the CMB, that omega c parameter. So what we can actually constrain is m times n in this expression. So we put that constraint in, saying we want m times n over n gamma to have a particular value, normalized to the number I just gave you. We can do one other thing: write down the Hubble parameter at temperature m. This is a common trick in cosmology: the Hubble parameter at a given temperature is approximately that temperature squared divided by the Planck mass. That comes from the Friedmann equation, H squared equals eight pi G rho over three, using the fact that G goes like one over the Planck mass squared. If we plug those in, then what it tells us in the end is that the sigma v we need to get the observed abundance of dark matter is about x sub f times ten to the minus ten in units of one over GeV squared. Now, we still don't know what x sub f is, but we can say, as a first approximation, that x sub f is the log of a bunch of things. So it might be one or it might be ten, but it's probably not going to be ten to the ten, because it's a log. For a very rough first approximation, suppose we set x sub f equal to one; then we would have this cross-section. We can then plug that result back into our expression for x sub f. But let's do one other step before that. That's our first estimate for the cross-section; what kind of mass scale would correspond to it?
If we say that sigma v is about alpha squared over m squared, with alpha about ten to the minus two, like the fine structure constant of electromagnetism, then with our first estimate for this cross-section, that corresponds to a natural mass scale of about a TeV, about 1,000 GeV. This is all very rough, but it doesn't need to be precise, because all we're going to use it for is to plug back into our estimate for x at freeze-out, just to get some rough parameters that give about the right relic density; we can get this wrong by a lot and not change x sub f very much. If we put in this first estimate, we find that x sub f with these parameters is about 25. And this is a pretty general statement, almost independent of mass: if we want the right amount of dark matter at late times, x sub f should be in the 20 to 25 range. That's the ratio, so thermal dark matter freezes out at a temperature that is about four to five percent of the dark matter mass. Now we can plug this back into our cross-section estimate on this page: instead of saying x sub f is one, say it's 25, and that gives us another estimate. And, although I've cheated a little here in throwing away numbers that don't matter, this is actually a surprisingly good estimate: we find a cross-section of about two times ten to the minus 26 centimeters cubed per second, which matches what I told you previously, that it's about two to three times ten to the minus 26 centimeters cubed per second. So that's the sketch of how you get that number. You can get a pretty good estimate just by saying freeze-out occurs when the Hubble time equals the time between collisions. If you go through and put in all the prefactors carefully, you'll still get a number that is pretty close to this.
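As a check on the arithmetic in these last two steps, here is the estimate in code: convert ⟨σv⟩ ≈ x_f × 10⁻¹⁰ GeV⁻² to cm³/s using the standard natural-units conversion (1 GeV⁻² ≈ 1.17 × 10⁻¹⁷ cm³/s for σv), and invert σv ≈ α²/m² for the mass scale. Everything else just follows the rough numbers in the lecture:

```python
# Sketch: the WIMP-miracle arithmetic from the lecture, in code.
GEV2_TO_CM3_S = 1.17e-17   # 1 GeV^-2 -> cm^3/s for sigma*v (from hbar^2 c^3)
ALPHA = 1e-2               # fine-structure-sized coupling, as in the lecture

def sigma_v_needed(x_f):
    """<sigma v> ~ x_f * 1e-10 GeV^-2, the relic-density estimate from above."""
    return x_f * 1e-10     # in GeV^-2

def mass_scale_gev(sigma_v_gev2):
    """Invert sigma*v ~ alpha^2 / m^2 for the natural mass scale."""
    return ALPHA / sigma_v_gev2**0.5

# First pass, x_f = 1: cross-section ~1e-10 GeV^-2, mass scale ~1 TeV
sv1 = sigma_v_needed(1.0)
print(f"x_f=1 : m ~ {mass_scale_gev(sv1):.0f} GeV")

# Second pass, x_f = 25: the familiar thermal-relic cross-section
sv25 = sigma_v_needed(25.0)
print(f"x_f=25: sigma v ~ {sv25 * GEV2_TO_CM3_S:.1e} cm^3/s, "
      f"m ~ {mass_scale_gev(sv25):.0f} GeV")
# ~3e-26 cm^3/s and a few hundred GeV, matching the numbers quoted in the text.
```

The second pass reproduces both quoted results at once: the canonical few times 10⁻²⁶ cm³/s cross-section and the few-hundred-GeV mass scale.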
Okay, so now, as I said previously, if you then ask what mass scale is associated with this number, it's a bit lower than this 1,000 GeV number; it's a few hundred GeV. But we've hand-waved away enough factors of two here that you shouldn't take that super seriously. It's around the weak scale. Okay, so this is called the WIMP miracle, because there was no a priori reason that this should give us a number that is right around the weak scale, but apparently it does. So that's suggestive of new physics not very far above the weak scale. And indeed, there are many scenarios of physics beyond the Standard Model around the weak scale which can give you this behavior. In particular, the one that we're going to talk about is the sort of standard lore in the field, the standard SUSY WIMP. However, as we'll see, despite this WIMP miracle, sometimes the simplest scenarios do not automatically give you the right cross-section. You need some additional ingredient. This is just a picture I pulled off the internet in which the dark matter has many additional ingredients, but is probably pretty bad for you. Okay, so let's talk a little bit about SUSY. I realize this is not a particle physics workshop, so this is sort of the minimal introduction to SUSY such that you understand why people like thinking about SUSY particles as dark matter candidates. For those of you who know more about particle physics and have delved deeply into SUSY, I do apologize. But first, any questions at this point? 'Cause nobody has raised their hand yet, and it may just be because it's late afternoon and people didn't get enough coffee, but I figure I should ask. Yeah, okay, good question. So the question was: there was a g-factor that I sort of hand-waved away at some point. Right, I also left out some g-factors when working out the energy density in photons.
So the number of degrees of freedom is two for a Majorana fermion and four for a Dirac fermion. The reason I hand-waved it away is that it appears inside the log and it's an order-one number, so I don't really care; it's two to a first approximation. There's another g that I hand-waved away which is a little more important, which appears in the energy density of the photon field. You need to know how many effective relativistic degrees of freedom have dumped their energy into the photon bath at a given point. That number is a function of time: just within the Standard Model, it varies from about 106 at very early times down to, essentially, two at late times. So to do this calculation properly, you should not do what I did and just say, oh yeah, the number of photons around goes roughly like T cubed. You should put that factor in and track its evolution over time. It's always roughly the same order. The cross-section that you need, doing the calculation carefully for masses from the GeV scale up to the multi-TeV scale, varies between around 2 times 10 to the minus 26 and 3 times 10 to the minus 26 centimeters cubed per second. So it's pretty stable. But yeah, to do it properly, you do have to take those factors into account. There's a paper from a couple of years ago where all the authors did was the relic density calculation, really carefully, and that paper now has a ton of citations just because they were the only people to do it that carefully. Okay, so: what you need to know about supersymmetry in a small number of slides. In supersymmetric theories, every particle in the Standard Model has a superpartner. Fermions in the Standard Model are matched up with boson superpartners; bosons in the Standard Model are matched up with fermionic superpartners.
This has motivations entirely outside dark matter. It helps resolve the hierarchy problem: these additional particles provide canceling contributions to what would otherwise be very large quantum corrections to the Higgs mass. It helps with the unification of gauge couplings. And it's just a nice mathematical structure. So it has plenty of motivations independent of dark matter. And these are some cartoons showing the additional particles that you get. We typically denote SUSY particles by the same symbol as the Standard Model particle but with a little tilde on top. There are many in-depth introductions to SUSY around; this is a pretty good place to start if you want to learn more about it. I'm not gonna do it today. So the reason why supersymmetry is such a powerful structure is that if supersymmetry were a true, unbroken symmetry of nature, then the particles and their superpartners would have exactly the same mass, and their interactions would be very closely related to each other. So, for example, in the Standard Model you can have a coupling between the photon and the electron and the positron, 'cause they're charged particles. As soon as you go to SUSY, you immediately have several other diagrams for very similar processes but involving the superpartners of the photon or the electron or the positron or both. So SUSY inherits from the Standard Model this huge structure of interactions. Now, we know that supersymmetry isn't a perfect symmetry of nature, because if it were, then we would have found all the superpartners already. They would be just as familiar to us as the Standard Model particles, okay? As it is, we think that if they're around, they have to be significantly heavier than the Standard Model particles. So supersymmetry has to be broken, but we can break it in a way that doesn't severely hinder many of the nice facets of supersymmetry.
And if we do that, then the interactions are still largely fixed by the underlying supersymmetric structure. So this is nice: while SUSY has a huge number of parameters, in many cases you can calculate quantities in SUSY just from knowing things like the masses of the superpartners. Yeah? Okay, you're asking what I mean by soft SUSY breaking. So the reason why "softly" is in quotation marks here is that this is the word used to describe this pattern of breaking. It basically means: break the symmetry whilst retaining these structures. But if you wanna see the details of how it's done, I'm gonna refer you to a SUSY review, because there are many ways to break supersymmetry. That's where a lot of the model dependence comes from: exactly how you choose to break the symmetry and what kind of terms you add to the Lagrangian to do it. Okay, now there are some problems with SUSY out of the box, because if you take the Standard Model interactions and write down all the SUSY interactions that you would naively infer from them, you get some interactions with very strange consequences. For example, this diagram shows a possible process for proton decay. Electric charge doesn't forbid it, and there's no obvious symmetry that prevents a proton, with two up quarks and a down quark, from decaying into a neutral pion and a positron. If baryon and lepton number are conserved, that prevents it, but in supersymmetry baryon and lepton number are not automatically conserved, and there are diagrams like this one. This is the superpartner of the strange quark. Now, this process is really not observed. Experimentalists have looked for it: the proton lifetime has to be greater than 10 to the 33 years. Recall the age of the universe is about 10 to the 10 years. So if this happens at all, it does not happen fast.
So we need some way to forbid this process from occurring. The usual approach is to impose a symmetry called R-parity and make it so the superpartners can only couple in pairs to the ordinary particles. We assign an R-parity of minus one to each of the superpartners and an R-parity of plus one to each of the ordinary particles. When we say this is a symmetry, we mean that the product of R-parities before and after an interaction has to be conserved. So if I start out with an odd number of superpartners, so the overall parity is minus one to an odd power, I have to end up with an odd number of superpartners. Start with one superpartner, I have to end up with one or three or five superpartners. Since that means superpartners only appear in pairs in couplings to ordinary matter, this prevents these unfortunate processes from happening. But by imposing this symmetry, we've automatically given ourselves a dark matter candidate, because the lightest particle with R-parity minus one just can't decay. If I start with one such particle, I can't end with zero, and I can't produce other, heavier particles with R-parity minus one because that's kinematically disallowed. So by demanding that the proton not decay, we have generically generated a stable dark matter candidate in supersymmetry. So then you might ask: okay, but does this satisfy the other requirements for dark matter? We've got stability; can we get the relic density right, and can we get the level of self-interactions right? And the first question is: is this even neutral? Nothing I've said so far says that it has to be electrically neutral. Now, as I said, the model of supersymmetry breaking matters, and SUSY models in general have many parameters. For a given model, though, once we know how SUSY is broken, we can compute the spectrum of these superpartners and we can say, okay, which of these is lightest?
This is from a paper from 2003, which shows, as I scan one of the parameters of the SUSY theory, these different lines corresponding to different particles in the theory, with the y-axis here corresponding to their mass. So you can see that as I tune this parameter, different particles trade off the role of the lightest particle. So to ask whether this is viable in SUSY, the first question we need to ask is: are there parts of parameter space where the lightest particle is neutral? And the answer is yes, there are. Okay, so where do we get these neutral superpartners from? Well, often what we're looking at is the neutralino. This is the ubiquitous candidate. The neutralino is built from the superpartners of the neutral bosons: the superpartners of the Higgs bosons and of the gauge bosons that become the Z boson and the photon in the Standard Model. And we call these the Higgsino, the wino, and the bino, depending on what they're the superpartner of. But in general, each of these individual states doesn't need to be a mass eigenstate; it doesn't need to have a definite mass. The physical states, the definite-mass states that you would produce in an interaction at a collider, are in general some superposition, some mixture, of these states. And whichever of these dominates the lightest mass eigenstate determines how that dark matter candidate will interact. So now I'm just gonna show you an example, which is not ubiquitous, there are ways to get around it, but it's a demonstration of how the WIMP miracle doesn't always work the way you would expect it to. So okay, suppose I'm thinking about superpartners at a few hundred GeV. I'm gonna ignore the LHC constraints for the moment; I'm gonna pretend, you know, SUSY at a hundred GeV is great and fantastic and is gonna solve all our problems. This is not the case post LHC run one, but imagine for the moment it is.
Even then, I don't generically get the relic density right. And the reason is a subtlety: most commonly, this lightest supersymmetric particle is mostly what's called a bino, with small wino and Higgsino components. This means it doesn't usually couple super well to the Higgs or to the W bosons. And it turns out that for these binos, the main annihilation channels are actually parametrically suppressed. The cross-section is not just like alpha squared over the bino mass squared; there's an additional suppression. They prefer to annihilate into fermions, but the annihilation is suppressed by the mass of the fermion squared divided by the mass of the dark matter squared. So if you're annihilating into something like an electron with a TeV dark matter particle, that's a pretty big suppression. Even if you're going into something like a top, it can be significant. So because that generic annihilation cross-section has this additional suppression, these dark matter particles don't annihilate well, and there tends to be too much dark matter left over in the universe in these models. So that's excluded. Even if you had some other source of dark matter, you'd still have to figure out how to get rid of all this dark matter. So if you want to take one message away from this slide, it's that the WIMP miracle doesn't always help. However, there are still regions in SUSY where it holds and is fine. In the sort of classic simplified SUSY model called mSUGRA, and I mention this because theorists will often talk about these as ways to get dark matter to work in SUSY, there are four standard ways to fix this problem, which go by the names bulk region, focus point, funnel region, and co-annihilation region. And I think I understand two of these four names; I don't know where the others come from.
So the first possibility is basically: if you make everything light enough, then everything works, because the mass suppression isn't very large. The second possibility is, well, I said it was common for most of this neutralino to be bino, but that's not ubiquitous; you can get cases where it's mostly wino or Higgsino, and then it's okay. There's the funnel region, where you say, all right, if my dark matter is really close to half the mass of the Higgs boson, or the second Higgs boson, because in SUSY there are two, then that gives me a large annihilation cross-section through the Higgs resonance, so then we're good. And the last one, which is kind of interesting for cosmology, is that maybe the dark matter is not the only particle involved in freeze-out: there are other particles close in mass to the dark matter, and when you write down that Boltzmann equation, you need to include not just one differential equation but coupled equations for multiple interacting species, only one of which eventually becomes the dark matter. So in this way SUSY provides us with an example of complications that may arise even in these relatively simple, generic dark matter models. I'll just show a couple of plots to illustrate this point; these are taken from a talk by Tim Tait, for which I thank him. This is showing two parameters, which are essentially the common masses of the scalar superpartners and of the fermionic superpartners in this mSUGRA regime. These blue and red and yellow regions are ruled out, by giving the wrong relic density or just by not being theoretically viable. The green regions are the areas of parameter space that can actually give you the right relic density in this framework. This is for one particular value of another SUSY parameter called tan beta, which describes the Higgs sector, and this is for a different value of that parameter. So there are places where you get the right relic density, but it's not like 100% of this plot is green.
So even if I take a model which naturally gives me stability, and in which I would expect to naturally get cross-sections around the right value, actually working out in detail whether this high-scale model gives you the right relic density is a really non-trivial constraint. Okay, so that's mostly what I wanna say about WIMPs at this point. We've talked about how they freeze out. We've talked about how their relic density is set. We've talked about something of how they're implemented in a model like SUSY. In my next two lectures I'm going to talk a lot more about how we might look for WIMPs and related dark matter candidates. One of the reasons WIMPs are such a popular dark matter candidate, beyond the SUSY connection and thermal freeze-out, is that they have lots and lots of observable signatures. WIMPs are great if you're looking for an observable dark matter signature. There's searching for WIMPs through the annihilation process that we already talked about: two dark matter particles collide, produce Standard Model particles, and you can look for those Standard Model particles with telescopes. You can flip this diagram on its side and say, all right, if a dark matter particle bounces off a Standard Model particle, let's look for the effect. Or you can flip it around completely and say, well, just as in the early universe, where Standard Model particles collided to make dark matter particles, can we do that at colliders? I'll talk about all of these in the next couple of talks. Okay, in the last part of my talk I'm gonna talk about axions, but first, are there questions about thermal freeze-out, the general WIMP picture? If you have detailed questions about SUSY, I'm gonna ask you to defer them. Sorry, can you just say that again? Yeah. So the question is: in the case of bino dark matter there's this mass suppression, but wino dark matter has an unsuppressed annihilation channel into the gauge bosons. Yes.
Right, okay, this is a really good point. So when I said it's suppressed, there's an alternative to that fermion-mass-squared suppression: you can have a p-wave contribution, which means that instead you're suppressed by the velocity squared. So how does this work out? A p-wave suppression isn't that bad at freeze-out. It's terrible for detection in the present day, because v in the present day is about 10 to the minus 3. But at freeze-out, well, we just said the temperature at freeze-out is about a twentieth of the dark matter mass, so a suppression that goes like v squared is again like a tenth or a twentieth. It's still substantial, still a factor of 10 or 20. I think that in this case, and I could be wrong so don't quote me on this, what happens is that the p-wave piece is actually also suppressed at freeze-out. Basically, this particular annihilation is unsuppressed when the spins of the particles are aligned, but if the spins are aligned and the fermions are identical, then they have to be in a particular orbital angular momentum configuration. So my memory is that this annihilation is unsuppressed for a particular spin configuration, but that spin configuration requires an orbital angular momentum configuration which, for identical fermions, may kill the p-wave piece. But I'm not sure, so we can look at this later. Okay. Okay, so then let's talk about axions, which provide an example of a completely different kind of dark matter. Let's first begin with the motivation.
Again, this is going to be the strong CP problem in one or two slides, and I'm not going to go into the details because this isn't primarily a particle theory workshop. So again, I can point you to references if you're interested; there are some in these slides, or if you Google "strong CP problem Michael Dine lectures", you'll also get a good reference. Okay, so those of you who don't do particle theory, feel free to sort of sit back through this slide; I'll tell you what you need to know. So, all right. Essentially the problem is that there's this term in the Standard Model Lagrangian which in principle should exist. There's no reason for it not to exist, and there are other contributions that we know exist in the Standard Model which can map onto this term under field redefinitions. This is called the CP-violating term; this is the gluon field strength, and this theta here is just some parameter that controls the size of the term. If this term is there, then it would induce an electric dipole moment for the neutron. Okay, there should be a theta factor in there; that's just a typo. So there should be a neutron electric dipole moment which is this quantity times this parameter theta. However, experimentally we put a constraint on the neutron electric dipole moment: it is smaller than this value, which you will note is about 10 orders of magnitude smaller than this benchmark value. So that tells you that this theta parameter has to be less than about 10 to the minus 10. It's just a dimensionless parameter; naively you might expect it to be an order-one number. Experimentally it appears to be 10 to the minus 10 or smaller. So the strong CP problem is simply the question of why this value is so tiny.
One proposal for fixing this, and there are several proposals, is perhaps the most popular one: the axion. So here what we do is, instead of treating theta as a fixed parameter, we replace it by a dynamical field. I'm going to commit a terrible sin here and use a for the axion, to describe something that is not the scale factor of the universe. So in this part of the talk, a means axion, not scale factor. So we're going to still have this term, but now it's going to be some axion field coupled to this operator, with a coupling one over f_A, where f_A is now a constant with units of mass. Okay, so what did we gain by doing this? Well, now this axion field is dynamical. So if we want its value to be really, really small, we just need to come up with a dynamical explanation for why this field would evolve towards a very small value. Now, if we've got some dynamical quantity evolving towards a very small value, we might ask: is the point that it's evolving toward a minimum of the energy of the system? Does it minimize the energy in some way? And it turns out that indeed, the energy stored in this axion field depends on the value of a, and that potential energy changes as the field evolves. The form of this potential, which I'm not going to work out here, but which you can look up if you're interested in QCD and the strong CP problem, again Dine's lectures are a pretty good reference, is this. Here f_pi is the pion decay constant and m_pi is the pion mass. So this is a periodic potential, and it has minima at a equals zero, 2 pi f_A, and so on. And this is what the potential looks like around its minimum.
So the field should evolve towards small values of this potential. Around its minimum it's parabolic: we can just expand one minus the cos function as a half x squared, and that gives us this expression. Now, from this we can read off the axion mass, because the mass is just given by the behavior of the potential close to its minimum, and we get this result. You may ask why I put in this particular value for f_A, but all I want you to take away is basically that m_A times f_A is about f_pi times m_pi. The pion decay constant and the pion mass are each about 100 MeV, so m_A times f_A has to be about (100 MeV) squared. This is what's true for the QCD axion. Okay, so what does this factor f_A mean? Well, one over f_A controls the coupling of the axion to essentially everything else in the Standard Model. There are a couple of specific models for axions which write down the details of these couplings. All you need to know for our purposes is that all of these couplings are going to scale like one over f_A. So if we make f_A really, really large, in this example 10 to the 10 GeV, then the axion will be very light, in this example 0.6 meV (milli-eV), and the axion will be very weakly coupled to the Standard Model, so it'll be difficult to detect. Remember, this is what we want for a dark matter candidate: we do not want it to talk efficiently to the Standard Model. Then you might say, oh, but hang on. We want f_A to be really, really high so that we have a weak coupling to the Standard Model, but that implies that the axion mass is extremely light. If I'm talking about milli-eV axions, then surely this is hot dark matter. Surely it's relativistic throughout the whole age of the universe, even today. So this just seems like a non-starter. Well, it's not a non-starter, for a couple of reasons.
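As a sanity check on the numbers quoted here, the rough relation m_A × f_A ≈ f_pi × m_pi can be evaluated directly. This is only the order-of-magnitude scaling from the slide, not the precise QCD formula:

```python
# Rough QCD axion mass from m_A * f_A ~ f_pi * m_pi (each factor ~ 100 MeV).
f_pi = 0.1    # GeV, pion decay constant (~100 MeV)
m_pi = 0.14   # GeV, pion mass (~140 MeV)

def axion_mass_eV(f_A_GeV):
    """m_A ~ f_pi * m_pi / f_A, converted from GeV to eV."""
    return f_pi * m_pi / f_A_GeV * 1e9

print(axion_mass_eV(1e10))  # milli-eV scale for f_A = 1e10 GeV
```

This reproduces the milli-eV scale quoted for f_A = 10^10 GeV; the more careful QCD calculation gives the 0.6 meV figure from the slide.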
So, first, there are a couple of things that stop axions from producing way, way too much hot dark matter. One is the question: were these axions ever in thermal equilibrium to begin with? Did they ever equilibrate with the thermal bath? Did you ever produce a lot of hot axions? That's one question. The other is that axions can decay: an axion can decay into two photons. There's no symmetry keeping them stable here. So if you want them to be dark matter, you need to check that their lifetime is much greater than the age of the universe. If you don't need them to be dark matter, you just need to stop them from screwing everything up by being large quantities of hot dark matter, and if they have a short enough lifetime, that will handle it for you. So to study the thermal axion in detail, we need to solve the Boltzmann equation, including initial conditions where the axions are not in thermal equilibrium, to see when they equilibrate, and also including their decay. I've already done one example of approximately solving the Boltzmann equation, so I'll just skip straight to the results. It turns out that the time scale for axion decay into photons is of the order of 10 to the 24 seconds for an eV-scale axion, and as the axion mass gets heavier, the decay gets faster very quickly; there are five powers of the mass. So if we want the axions to still be around today, they have to be lighter than about 20 eV, unless there's something special about your particular model that makes this time scale much longer. So if we want the axions to be dark matter, that already tells us we can't push up into this strongly coupled regime; we need to be below 20 eV. Now, a side note here: suppose you said, okay, I don't care about it being the dark matter, I want it to go away, I don't want it to contribute anything.
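The 20 eV bound follows directly from that m^-5 scaling. A minimal sketch, assuming the benchmark lifetime of ~10^24 s at 1 eV quoted above and an age of the universe of ~4 × 10^17 s:

```python
# Axion -> two photons: lifetime scales like m^-5. Normalize to the
# lecture's benchmark of ~1e24 s at 1 eV, then find where the lifetime
# drops below the age of the universe.
tau_1eV = 1e24        # s, decay time for a 1 eV axion (lecture benchmark)
t_universe = 4e17     # s, rough age of the universe

def lifetime_s(m_eV):
    return tau_1eV * m_eV**-5

# mass at which the lifetime equals the age of the universe:
m_max = (tau_1eV / t_universe) ** 0.2
print(f"axions survive to today for m < {m_max:.0f} eV")
```

The crossover comes out at about 20 eV, as stated; because of the fifth power, the bound is quite insensitive to the exact normalization.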
If the axion mass is between about 20 eV and 300 keV, then its decays would produce so many photons that you would mess up the process of nucleosynthesis, the production of the light elements. I think you'll be hearing more about nucleosynthesis over the next days or week. Okay, so that's one answer: what do we need to do to get the axion to not decay by the present day? We need it to be lighter than about 20 eV. Now, solving the Boltzmann equation, you find that the axions could indeed be in thermal equilibrium if the axion mass is about 10 to the minus 3 or 10 to the minus 2 eV and higher. Below that mass scale, they never attain thermal equilibrium with the Standard Model. But in this range, between about 10 to the minus 3 eV and 20 eV, we find that the density in axions is of order the axion mass divided by about 100 eV. So okay, these are axions as hot dark matter. They're not gonna make cold dark matter for us. What do we need to do to stop them from messing up the universe? It turns out, and we had a constraint like this before from looking at the matter power spectrum, that provided the axion mass is less than about one eV, it's a small enough fraction of the total dark matter and it's okay. Or if it's more than about 300 keV, it just decays so quickly that it doesn't do anything. Okay, but this doesn't solve our problem at all of whether we can ever get axions to be cold dark matter. And what happens here is that it is possible for axions to be cold dark matter, but in this case we're talking about extremely light, non-thermal axions. And here, when the axions are so light, it's actually not appropriate to think of them as individual particles at all. Instead, we should think of them as forming a condensate which behaves like a classical scalar field. And that classical scalar field will evolve within the axion potential that we wrote down earlier.
So we drew this potential earlier. In the stable minimum-energy configuration of this potential, the classical value of the axion field would sit at zero, and there would be small oscillations about that, which are the individual axion particles. But more generally, there's no reason for the axion to start out at this point in the potential. This is the so-called misalignment mechanism: when this potential is initially created, the axion sits at some arbitrary value. In general it's not going to be right at the bottom; it's going to be separated from the minimum by some misalignment angle. Now in this case, just as you would have with a ball part way up a hill, or anything else not at the minimum of its potential, this configuration has stored potential energy. And the way this field will evolve, and you can see it just by the analogy of a ball on a hill, is that the axion will roll towards the minimum and then oscillate back and forth about it. And those oscillations will hold energy, just like a pendulum has its combination of kinetic and potential energy. Okay, so suppose we wanna do this calculation. Yeah? Yes, pretty much. You can write down the Klein-Gordon equation with this potential. You need to be a little bit careful because it's evolving in an expanding universe, so you need to write down the Klein-Gordon equation in curved spacetime. But that is in fact exactly what you do. When you do that, you end up with an equation of motion for the scalar field that looks like this, if you expand around the minimum of the potential. This m_A squared times a piece is just the potential term. Yeah? Yeah, so the question is about where you get this potential from in the first place, how it's formed, and why it isn't just flat.
In the initial models that were written down for the axion, the idea was that you have some Peccei-Quinn symmetry which is broken at some scale, and that gives rise to this potential and these minima. The axion starts out essentially as the Goldstone boson of that broken symmetry. I'm not gonna talk much about the Peccei-Quinn symmetry. Okay, so I'm focusing on the QCD axion here because it's the most well-motivated, because of the strong CP problem. But more generally, string theory models often predict light axions that are produced by some symmetry breaking or other mechanism. The Peccei-Quinn mechanism was specifically about the QCD axion, but many theories have axion-like particles in them. Okay, so if we have something like this, some light scalar field, still called an axion for now, and we wanna know how it evolves in an expanding universe, then we need to solve an equation like this. It's second order; it's just a harmonic oscillator equation but with an additional term which describes the expansion of the universe and behaves like a friction term. This is known as Hubble friction. So when this friction term is large compared to the mass term, there's a solution of this equation which is just a equals constant, okay? So to a first approximation the field doesn't evolve: the friction is large enough to keep our ball sitting at its initial position on the hill. But once H, which remember is an inverse time scale, becomes small relative to the axion mass, the field begins to oscillate inside the potential, damped by this friction term. So at large t we can guess an approximate form for the solution: an oscillatory cos term, multiplied by some slowly varying function of time, which we'll call F of t, multiplied by some initial condition. This theta naught is our initial misalignment angle; it describes how far away we were from the bottom of the potential. Now, you can do this exercise for yourself.
It's not very hard; it's a pretty simple differential equation. But to solve for F of t we need to know how H scales with t. This is different depending on whether we're in a radiation-dominated or matter-dominated epoch, so you need to do the same calculation for both of them. But in both cases you end up finding that F of t scales like one over the scale factor to the three halves. So what I would call a, if I hadn't already used that letter for the axion, is what I mean by the scale factor. Now for the energy density stored in this axion field: if you average over many oscillations, this cos term squared will just give you a half. So the energy density stored in the axion field scales like theta naught squared times F of t squared, which means it's proportional to the initial misalignment angle squared, but then it falls off as one over the scale factor cubed. Yeah, so the question was: what is the action that leads to this equation? It's the action for a scalar field in the FRW metric. It has a standard kinetic term, and it has a potential term that comes from the cos potential I wrote down before. I'm expanding that potential about its minimum, which is why, instead of having a V of a term, I just have an m_a squared a term there. Okay, so the remarkable thing here is that the energy density stored in this field scales like one over the scale factor cubed. So despite the fact that we're not thinking of these as individual particles, we're thinking of these axions as modes in a classical scalar field, the energy density stored in them falls off just the way you would expect for cold non-relativistic matter: energy density scales like one over volume. So that means that for the purposes of cosmology it can act like cold dark matter, a contribution to the matter density of the universe.
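The scaling just described is easy to check numerically. Here is a toy sketch, in units where m_a = 1 and with a radiation-dominated background (H = 1/(2t), scale factor proportional to t^(1/2)); all the numbers are illustrative, not physical. Once the field is oscillating, the energy density times the scale factor cubed should come out roughly constant:

```python
import numpy as np

# Toy integration of the misalignment-mechanism field equation,
#   theta'' + 3 H theta' + theta = 0,  H = 1/(2t),  a ~ t^(1/2),
# in units where m_a = 1.  Illustrative sketch, not a physical calculation.

def evolve(theta0=1.0, t0=0.01, t1=800.0, dt=2e-3):
    n = int((t1 - t0) / dt)
    t = t0 + dt * np.arange(n)
    theta, dtheta = theta0, 0.0
    rho = np.empty(n)
    for i in range(n):
        H = 1.0 / (2.0 * t[i])
        # semi-implicit Euler: update the velocity, then the field
        dtheta += (-3.0 * H * dtheta - theta) * dt
        theta += dtheta * dt
        rho[i] = 0.5 * dtheta**2 + 0.5 * theta**2   # kinetic + potential
    return t, rho

t, rho = evolve()
comoving = rho * t**1.5            # rho * a^3, since a^3 ~ t^(3/2)
early = comoving[(t > 200) & (t < 260)].mean()
late = comoving[(t > 700) & (t < 760)].mean()
print(late / early)                # ~ 1: redshifts like cold matter
```

At early times the Hubble friction pins the field at its initial value; once t >> 1/m_a it oscillates, and the averaged energy density falls off as 1/a^3, which is the point of the lecture's argument.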
Now, to do this calculation more carefully, you again need to include the fact that the axion mass actually has some temperature dependence; it's not just a constant. You need to include what happens when you go through the QCD phase transition, because I've been talking here about pion masses and pion coupling constants, and these only actually make sense after the QCD phase transition. So you have to do a little bit more work. But when you do that work, what you find is that the fraction of these cold axions in the critical density is comparable to the total dark matter density when this quantity is equal to one. So here this theta naught is the initial misalignment angle again, and F_A is the axion coupling; and again, higher F_A means weaker couplings. So if we have an order one misalignment angle, then this axion can make up all the dark matter when F_A is around a few times 10 to the 11 GeV. So this is a very high scale. This gives us a prediction for the axion mass, which is about 0.1 milli-eV, so about 10 to the minus 4 eV: far lighter than any of the particles in the standard model, barring the massless ones. Now, you might say, okay, this is a straight-up prediction. If theta naught squared is of order one, then that just gives you a number: if you want it to be all the dark matter, this is your axion mass, this is your F_A. For a generic axion-like particle, this tight coupling between m_a and F_A can be broken, so it's not quite as predictive, but for the QCD axion this looks like a straight-up prediction. But of course, you could always say that maybe this misalignment angle isn't close to one; maybe it's very small. You can't make it much larger than one, or you'll just move into another minimum of your periodic potential, but we could make it much smaller than one. There's no problem with that.
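As a back-of-envelope version of these numbers, the QCD-axion mass and decay constant are tied together, in a commonly quoted form, by m_a roughly 5.7 micro-eV times (10^12 GeV / F_A). The exact prefactor is an assumption here (it comes from the pion mass and decay constant mentioned above), so treat the outputs as order-of-magnitude only:

```python
# Commonly quoted QCD-axion relation: m_a ~ 5.7e-6 eV * (1e12 GeV / f_a).
# The 5.7e-6 prefactor is an assumed standard value, not derived here.

def axion_mass_eV(f_a_GeV):
    """Approximate QCD axion mass in eV for a given decay constant in GeV."""
    return 5.7e-6 * (1e12 / f_a_GeV)

for f_a in (1e11, 3e11, 1e12):
    print(f"f_a = {f_a:.0e} GeV  ->  m_a ~ {axion_mass_eV(f_a):.1e} eV")
```

For F_A of a few times 10^11 GeV this lands in the few times 10^-5 to 10^-4 eV range, consistent with the roughly 0.1 milli-eV figure quoted in the lecture.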
If that was the case, to get the dark matter abundance right, you would need the axion to be even lighter, because that would make F_A higher, so that you'd have a higher relic abundance before multiplying by theta naught squared. Okay, so cold axion dark matter is a possibility; it works. It requires that the dark matter mass be somewhere in the range of 10 to the minus 4 eV or lighter. This is a completely different end of the scale to the WIMP models that we talked about earlier. Now, I'll just say briefly, when we ask what a sensible value for this misalignment angle is, one thing that we probably care about is when this misalignment angle is determined. I mean, we've treated it as an initial condition, but is that initial condition set before inflation or after inflation? If it's set after inflation, then you'd expect it to have many different values in disconnected causal patches of the cosmos. So when we look at the total cosmological density today, the average cosmological density would be averaging over many samples. If that's the case, you should probably just be getting the RMS value of the misalignment angle, integrated over all possible values. So you'd expect an order one theta naught in that case. But if it's set before inflation, then our whole Hubble volume could have been in a patch with the same misalignment angle before inflation. So in that case, it might be quite natural for us to be in a Hubble volume which happened to have a very small value of the misalignment angle. Because if we were in a Hubble volume with a much larger value of the misalignment angle, there would be way, way, way more dark matter, and that might have led to difficulties: our universe would not have evolved in the same way. So you can possibly make an anthropic argument in that direction. This plot is essentially showing this, on the right-hand side of this yellow line.
So this is plotting the scale of inflation. If the scale of inflation is very high, this is the case where the axion potential is set after inflation, and then just this blue line for F_A gives the value of F_A that you would need. On this side of the yellow region, where the inflation scale is low and the axion potential is set well before inflation, each of these lines could give you the right dark matter density, just depending on the value of theta. The other thing about this plot is that this big yellow region in the middle is where axions are ruled out by isocurvature fluctuations. I haven't talked about those much yet, but I'll talk about them more in the next couple of lectures. There are a number of constraints on axions, yeah? ADMX is an experiment, the Axion Dark Matter eXperiment. The question was: does the misalignment angle depend on the potential? You should view the misalignment angle as an initial condition. It may depend on exactly how that potential gets generated in the early universe, but basically it's just an initial condition. Okay, so I said earlier that searching for WIMPs was great because there are so many ways to do it. Searching for axions is somewhat harder. Their main observable property is that they couple to photons, with an interaction that's proportional to the dot product of the electric field and the magnetic field. That basically means that our way to look for axions is to look at photons in a magnetic field and see if we can see signs of them converting into axions. And we can do this both in astrophysical systems and in systems on Earth. Okay, it's getting on in time; I'm sure it feels like the end of the day. So I'll just summarize there. I've talked about the basic properties and cosmology of these two major categories of dark matter models.
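In equation form, the photon coupling being described is the standard axion-photon interaction; the symbol $g_{a\gamma\gamma}$ for the coupling strength is the conventional notation, not something defined earlier in this lecture:

```latex
\mathcal{L}_{a\gamma\gamma}
  = -\frac{g_{a\gamma\gamma}}{4}\, a\, F_{\mu\nu}\tilde{F}^{\mu\nu}
  = g_{a\gamma\gamma}\, a\, \vec{E}\cdot\vec{B}
```

It's this $\vec{E}\cdot\vec{B}$ structure that makes a strong static magnetic field, whether in a laboratory cavity like ADMX or in an astrophysical environment, the natural place to look for photon-axion conversion.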
The reason I focused on these two is that, while there are other scenarios, these are the two scenarios that will come up over and over again, that many groups are looking at variations on, and they demonstrate very different mass scales, very different ways of getting the correct relic density, and very different detection methods and experimental searches, as I will discuss in more depth tomorrow and Wednesday. So thank you very much.