Welcome to lecture three. In the previous lectures, we laid out a way of categorizing dark matter by its mass scale. We said at the end of last lecture that there are, broadly, two categories of dark matter scenarios that we can think about, or at least two. Dark matter can, in principle, be as light as about 10^-21 electron volts, or as heavy on the particle side as the Planck scale, or maybe heavier, although once you get into macroscopic objects there are eventually constraints from gravitational lensing, which I've only mentioned briefly. On the particle side of dark matter, one broad classification is to separate out, on one hand, light, cold, bosonic dark matter, where you should really think of the dark matter as a coherent field, or a wave, or a condensate, rather than individual particles. Once your mass scale goes below about a keV, this kind of dark matter is all you can have: the dark matter can't be a fermion, and if it's at a temperature comparable to the standard model, it violates constraints on light degrees of freedom in the early universe and is too fast-moving for structure formation to come out right. So you need all of these properties. The classic example of this kind of dark matter is the QCD axion, which I think you're going to hear a bit more about in other lectures today as well. On the other hand, once you go much above the MeV scale, your options open up a lot. For this kind of heavy, MeV-plus dark matter, comparable to the standard model particle masses we know of other than the neutrinos and the photon, the dark matter could be fermionic or bosonic. It could be thermal or non-thermal. It could have a temperature comparable to the standard model bath without being excluded. There are many possibilities in this range, but a class of scenarios that is particularly predictive is where the interactions of the dark matter with the standard model fix its abundance in the present day. That's the classic thermal weakly interacting massive particle scenario, the classic example of this category, what we call WIMPs. So today, I just want to run through, in these two example cases, the mechanisms by which you get the observed abundance of dark matter, this Omega_DM of about 0.25 that we talked about last time. OK, so I want to begin with the thermal WIMP. Here, I'm again just going to do one of these order-of-magnitude estimates, ignoring order-one factors. If you want to get all the order-one factors really correct, there's a paper by Steigman et al., I think from 2012 or 2013, which goes through this relic density calculation in extreme detail. Now, by WIMP here: the very classic WIMP is a weakly interacting massive particle in the sense that it interacts through the standard model weak interaction, through the W and Z bosons. But in recent years there's been a lot of interest in a broader scenario of thermal dark matter: we still have the property that the interactions of the dark matter with the standard model fix its abundance, but the mediators of those interactions could be new particles; they don't need to be the W and Z. So in the outline I'm going to show you, I'm not going to make any assumptions initially about what the mass of the dark matter is or how it's interacting.
I'm not going to specify that the force carrier is the W or the Z. I'm just going to lay out the assumptions. These don't apply to all dark matter candidates, but to a set of them. So our assumptions are about the early universe: the DM can annihilate into standard model particles. For this first pass, I'm going to assume the dominant annihilation channel is of the form: two DM particles collide with each other and form a purely standard model final state. I'm not going to care how many particles are present in that standard model final state. Now, we can do perturbations on this. There are cases where you could say maybe there's both dark matter and anti-dark matter in the early universe, and we need one of each in order to annihilate. If that happens, then in the present day there might or might not still be anti-dark matter left around, and that would change the annihilation signal in the present day. So that's one modification you could make. There are scenarios in which you need three dark matter particles for this annihilation to occur, either due to a symmetry or due to kinematics. That will also change the calculation that I show you; hopefully you'll be able to see how it would change. It will modify the parameter space that you get back. Again, at least as a first pass, I'm going to assume that it's two-body. That's usually a good assumption just because, for this relatively heavy dark matter, the number density is pretty small, so three- or four-body processes are generally quite suppressed. OK. There are also variations on the assumption that it's an annihilation interaction that sets the abundance. The annihilation case is simple to understand intuitively. In this picture, as we'll see, there was a lot of dark matter in the early universe; most of it annihilated away through this interaction; the amount that's left is the dark matter we see today. The strength of this annihilation just sets the late-time abundance. But you can also have somewhat more non-trivial scenarios where it's actually the elastic scattering of dark matter off standard model particles, which is related to the kinetic decoupling calculation that we did yesterday, that sets the abundance. I'm happy to provide references to those ideas if people are interested. But as a first pass, we're going to start from this assumption. We're also going to assume that this interaction was strong enough in the early universe, when densities were high and lots of standard model particles were highly relativistic, that this process and its reverse were sufficient to bring the dark matter into full thermal equilibrium with the standard model. The process I'm going to tell you about is called freeze-out. If that last assumption isn't true, if you've got very weak processes that nonetheless allow the dark matter to be populated from standard model states, you can instead have freeze-in, where there's very little dark matter to start with and a small amount gets produced through interactions like the inverse of this process. In that case, you build up the dark matter abundance rather than depleting it. But we'll do the depletion case first. So let's think about how the abundance of dark matter evolves in an expanding universe. First let's look at the case with no annihilation, given a certain amount of DM. We know how the number density of dark matter needs to evolve with time: we live in an expanding universe.
It's not that the number density doesn't change; it's the total number of dark matter particles that shouldn't change. So we can write down an evolution equation of the form d/dt (n a^3) = 0, where a is the scale factor. This is just saying that the number density n of dark matter dilutes as a^-3, the matter equation of state that we wrote down earlier. We can expand this expression out, taking into account that the scale factor is changing over time as well. If we divide both sides by a^3, we get dn/dt + 3Hn = 0. So with no annihilation, this is the evolution equation: the 3Hn term describes the depletion of the abundance just due to the expansion of the universe. Now if we turn on an annihilation process, there's an extra contribution to dn/dt: instead of the right-hand side being zero, we're going to have a term of the form -2 x (1/2) ⟨σv⟩ n^2 = -⟨σv⟩ n^2. Where does this term come from? It describes the rate of depletion of dark matter particles as determined by the annihilation cross-section (times relative velocity). It has two powers of the dark matter density because we need two dark matter particles to annihilate; if this were three-body or higher, there would be more factors of n. The factor of 1/2 is just combinatoric: there are two identical particles in the initial state, according to this assumption. (If we were doing dark matter and anti-dark matter, it would be n1 times n2.) If you ask how many pairs of identical particles there are in a set of N, that's N(N-1)/2, which is approximately N^2/2 for big enough N. And the factor of 2 is because each annihilation depletes two dark matter particles; if I had a process that only depleted one, this factor wouldn't be there. OK, so that's my evolution equation. At the moment, it's missing a term. Can anyone tell me what term is missing? Yep, exactly. We also need to take into account that, at least in the early universe, this process can run both ways: when the standard model particles have enough kinetic energy, a high enough temperature, they will collide and make dark matter particles. So we need to include the inverse process. Now, you might think at first, oh God, this inverse process is going to be a bit of a pain in the neck: do we have to consider every possible standard model final state and how it might go into dark matter particles? That's a little bit tough. So let me just write this term, for the moment, as ⟨σv⟩, the cross-section of the forward process, times some X. This X is what I want to understand, because it turns out there's actually a pretty simple prescription for what it has to be. The important thing we know is that X can't depend on the number density of the dark matter; it only depends on what particles there are in the standard model bath. So it's just going to depend on the temperature of the bath and the number of degrees of freedom in that bath. So we can write the right-hand side as ⟨σv⟩ (X - n^2), and this actually allows us to see immediately what X has to be, because suppose I make this cross-section really huge, so this process is really fast in both directions.
Dark matter particles can collide and make standard model particles, and the reverse can also happen, super fast, much faster than Hubble. In that case, the dark matter is very tightly coupled to the standard model bath; they should be in complete thermal and chemical equilibrium with each other. So in that case, we know what the solution is: as ⟨σv⟩ goes to infinity, the number density of the dark matter should go to the thermal equilibrium solution, which depends on the temperature of the standard model bath. As we drive ⟨σv⟩ to infinity, in order to satisfy this equation, the thing that ⟨σv⟩ multiplies has to go to zero. So as n is driven to n_eq(T), X must be driven to n_eq(T)^2. That tells you that X has to be just the equilibrium density of dark matter squared, a function of the temperature of the standard model bath to which it is coupled. So under the assumption of thermal equilibrium, it doesn't matter what the details of the bath are: by this detailed-balance argument, we can just write the reverse process as ⟨σv⟩ for the forward process multiplied by the equilibrium density of the dark matter squared. So what does this equilibrium density look like? It follows the Boltzmann distribution. Technically it follows the Fermi-Dirac or Bose-Einstein distribution, but in a sufficiently dilute medium we can approximate either by the Boltzmann distribution. Again, I'm going to throw away order-one factors from the very beginning, so there are prefactors I'm not writing; I'm just going to show you the scaling with the parameters, and only in the non-relativistic and relativistic limits separately. If you have to do a calculation in the middle, where freeze-out happens while the dark matter is semi-relativistic, it gets significantly more complicated, and it's worth actually using the correct distribution for that regime. If you're actually in a condensate, you should use the Bose-Einstein distribution; but as I said, usually if you're light enough that your number density is high enough to be in a condensate, then you're not in this thermal equilibrium regime anyway, at least if it's 100% of the dark matter. So I'm tacitly assuming here that, because the dark matter probably has to be heavier than about an MeV, its number density is pretty small in the epoch we're talking about, and so I can use the Boltzmann distribution. But if you think your dark matter is actually in a condensed phase, you should use the Bose-Einstein distribution. OK, so in the non-relativistic limit, which applies when the temperature is much smaller than the mass of the dark matter, n_eq(T) ~ (m T)^{3/2} e^{-m/T}, up to prefactors. In the relativistic limit, parametrically, when the dark matter is much lighter than the temperature, it's a relativistic species, it's radiation, and its number density is just T^3.
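Since we'll want this n_eq(T) repeatedly, here is a minimal numerical sketch of it, with the order-one prefactors the lecture drops restored for concreteness (the choice of g = 2 internal degrees of freedom is my assumption, not something fixed above):

```python
import numpy as np
from scipy.special import kn  # modified Bessel function of the second kind

def n_eq(T, m, g=2.0):
    """Maxwell-Boltzmann equilibrium number density in natural units (GeV^3).

    Exact MB form: n = g m^2 T K_2(m/T) / (2 pi^2). This interpolates between
    the two limits quoted above: ~ g (m T / 2 pi)^{3/2} e^{-m/T} for T << m,
    and ~ (g/pi^2) T^3 for T >> m (true Bose/Fermi prefactors differ slightly).
    """
    return g * m**2 * T * kn(2, m / T) / (2 * np.pi**2)
```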
Physically, where this exponential suppression comes from: once the temperature of the universe drops well below the mass of the dark matter, the forward process, dark matter annihilating into standard model particles, still continues fine, but the reverse process requires the standard model particles to have a lot of energy, to be on the tail of their energy distribution, in order to collide and make dark matter particles. So you can deplete dark matter efficiently but you can't restore it efficiently, and thus its abundance, while it stays in equilibrium with the standard model, becomes exponentially suppressed. OK, so now we have this equation to solve. The way you do this calculation in detail is that you put in the real equilibrium distribution, you put in the various prefactors, which can also depend mildly on temperature because they include a count of the number of degrees of freedom in the universe, and then you just go ahead and solve this differential equation numerically. What I'm going to do now is walk you through what the solution to this differential equation looks like in different regimes. Let's first look at the equation generally. What's going to matter for its behavior is essentially how large the expansion term is compared to the terms that govern the annihilation rate. Early on, you can have a situation where the terms scaling with the number density squared are extremely large. As we discussed previously, when that happens, when the cross-section is large and the number density is large, this systematically drives n towards n_eq. At late times, once the number density of the dark matter has been dramatically depleted, we expect this annihilation term to eventually become unimportant relative to the expansion term, since annihilations become inefficient as the universe expands. Then we go back to the differential equation where we know the solution is just that n redshifts as a^-3. The transition between these two regimes is called freeze-out. What characterizes this transition is the relative size of the 3Hn term compared to the terms with ⟨σv⟩. Parametrically, this occurs when Hn ~ ⟨σv⟩ n^2. We can cancel a factor of n, so the crossover between early and late times occurs roughly when ⟨σv⟩ n ~ H. OK, so that's what we expect to see. Now let's think about two different possibilities for when this freeze-out occurs. Broadly, there are two: either the transition happens while the dark matter is still very relativistic, so it still has an abundance that looks like T^3, or it happens at late times, once the dark matter has become non-relativistic and is sliding down this Boltzmann tail. Let's first look at the first case. This is the simpler one, where the dark matter is what we call a hot relic: it's still relativistic at freeze-out. We will see momentarily that in this case it can't actually be 100% of the dark matter. So what happens? Roughly speaking, the solution to this differential equation is that n follows its equilibrium value until the freeze-out condition is satisfied, and then the comoving abundance of dark matter is just fixed.
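Here is a sketch of that numerical solution, in the standard dimensionless variables Y = n/s and x = m/T (a simplified Kolb-and-Turner-style treatment; constant g* ~ 90, radiation domination, and a velocity-independent ⟨σv⟩ are my simplifying assumptions, not the full calculation the lecture alludes to):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn

Mpl, gstar, g_dm = 1.22e19, 90.0, 2.0   # GeV; rough dof choices (assumptions)

def solve_freezeout(m, sigmav, x_max=1e3):
    """m in GeV, sigmav in GeV^-2; returns (x, Y) with Y = n/s, x = m/T.

    Boltzmann equation in comoving form: dY/dx = -(lam/x^2)(Y^2 - Yeq^2),
    with lam = <sigma v> s(T=m) / H(T=m), valid in radiation domination.
    """
    s_m = (2 * np.pi**2 / 45) * gstar * m**3       # entropy density at T = m
    H_m = 1.66 * np.sqrt(gstar) * m**2 / Mpl       # Hubble rate at T = m
    lam = sigmav * s_m / H_m
    Yeq = lambda x: 45 * g_dm * x**2 * kn(2, x) / (4 * np.pi**4 * gstar)
    rhs = lambda x, Y: [-(lam / x**2) * (Y[0]**2 - Yeq(x)**2)]
    xs = np.geomspace(1.0, x_max, 300)
    sol = solve_ivp(rhs, (xs[0], xs[-1]), [Yeq(xs[0])], t_eval=xs,
                    method="Radau", rtol=1e-8, atol=1e-20)
    return sol.t, sol.y[0]

x, Y = solve_freezeout(m=100.0, sigmav=2e-9)
# sigmav here is ~20x the 1/(100 TeV)^2 ~ 1e-10 GeV^-2 estimate derived later;
# that factor ~x_f is exactly the order-one factor the lecture drops. Y tracks
# Yeq until x ~ 20-30, then freezes out to a constant; the standard conversion
# Omega h^2 ~ 2.8e8 * (m/GeV) * Y[-1] then lands near the observed ~0.1.
```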
The number density of dark matter is then just redshifting like (1+z)^3, diluting as a^-3. So an approximate solution is: n ~ n_eq(T) while ⟨σv⟩ n > H, and this continues down to what I'll call the freeze-out temperature, T_f, where that condition stops being satisfied. Once that happens, the number density at that point is roughly the equilibrium density at T_f, and after that it redshifts, so n ~ n_eq(T_f) (a_f/a)^3, where a_f is the scale factor at freeze-out. All I'm doing here is saying it's in equilibrium down to the freeze-out point; below the freeze-out point it redshifts as 1/a^3; and to get the coefficient after freeze-out, I'm matching the two solutions at a = a_f, T = T_f. OK, so that's our approximate solution. So let's do case one, where it's a hot relic. In case one, the equilibrium density at freeze-out is just T_f^3. So before freeze-out the number density scales like T^3, and after freeze-out it scales like T_f^3 (a_f/a)^3. But how does the temperature redshift with a, roughly? A bit louder? Right: for a non-relativistic decoupled species the temperature redshifts as (1+z)^2, but for a relativistic species, which is what we have here (I should have written this: DM, relativistic), the temperature just redshifts with (1+z), so T goes as 1/a. So actually, in this case, both before and after freeze-out, the number density is just redshifting like 1/a^3; freeze-out doesn't make a very significant difference to how the number density redshifts. So this makes it easy. In particular, the temperature of the photon bath, the temperature of the photons of the universe, also redshifts as 1/a, so this quantity is approximately just the temperature of the photon bath cubed, up to effects that can heat the photon bath, like electron-positron annihilation, which I'm not taking into account in this estimate. So this tells you that, in this hot relic situation, the number density of the dark matter today would be comparable to the number density of photons today. Does anyone know how many photons there are in the universe for every baryon? Yeah: the ratio is about 10^-10 baryons for every photon, so there are of order 10^10 photons in the universe for every baryon. This would mean the number density of dark matter is of order 10^10 times the number density of protons. But we know that the mass density of dark matter is only about five times larger than the mass density of protons. So in this hot relic case, in order to not totally overclose the universe, the mass of the dark matter has to be less than about 10^-9 times the mass of the proton. So, in this hot dark matter scenario, in order to not have way too much dark matter in the universe: if the dark matter freezes out while relativistic, its mass scale has to be less than about an eV.
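Just to check this order-of-magnitude arithmetic (the specific values of the photon-to-baryon ratio and the DM-to-baryon ratio below are standard numbers, inserted by me):

```python
# Hot-relic mass bound, following the estimate above.
eta     = 6e-10      # baryon-to-photon ratio (the "10^-10" of the lecture)
dm_to_b = 5.0        # mass density of DM / mass density of baryons today
m_p_eV  = 0.94e9     # proton mass in eV

# n_DM ~ n_gamma, so  m_DM * n_gamma <~ 5 * m_p * n_b  =>  m_DM <~ 5 * eta * m_p
print(f"hot-relic mass bound: ~{dm_to_b * eta * m_p_eV:.1f} eV")   # ~ an eV
```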
But this is not consistent with the set of constraints we talked about last time. We already argued that if the dark matter had a comparable temperature to the baryonic matter, it probably had to be heavier than an MeV, and it definitely had to be heavier than a keV, or it would just be too fast-moving. So this kind of dark matter is hot dark matter. The neutrino is an example of hot dark matter, and this is not allowed to be more than about 1% of the DM. [Question from the audience.] So the question was: is there a reason we're not considering the decay of dark matter particles? Basically, the reason is that we observe that there is still dark matter around today, and that the amount of dark matter around today is broadly consistent with the amount we infer from the CMB. I don't know if we could constrain a scenario where the amount of dark matter in galaxies today was 50% different from the amount at the CMB epoch, but if it were orders of magnitude different, it would change how structure formation operates. So that tells us that the lifetime of the dark matter has to be longer than the age of the universe today. If it were significantly shorter than the age of the universe, first we would notice it in structure formation; but also, if the dark matter is decaying to anything visible in the standard model, you can actually say that its lifetime needs to be about eight or nine orders of magnitude longer than the age of the universe, just because otherwise, converting a significant fraction of the energy stored in the dark matter mass into visible photons or electrons would overwhelm every other source of photons and electrons in the galaxy. Now, if the dark matter has a lifetime eight orders of magnitude longer than the age of the universe, then since these freeze-out processes occur well before the CMB epoch, when the universe was less than a few hundred thousand years old, the fraction that has decayed by that point will be very tiny. But you could have a metastable species that isn't the dark matter today but does exist in the early universe, and you could use this same kind of calculation to work out the effects of that species. That's a good question. OK, so this case one, the hot relic: it's a reasonable calculation to do, and it may describe some subdominant species in the universe, but it doesn't describe 100% of the dark matter. So let's look at the opposite case, the cold relic, where the dark matter is non-relativistic when it freezes out. What does our solution look like in that case? The solution is again something along the same lines: n ~ n_eq(T) for temperatures above T_f, and for temperatures below T_f, n ~ n_eq(T_f) (a_f/a)^3, as before. This is going to fix our late-time abundance. So let's look at this late-time abundance and understand how we would impose that this process gives you the correct relic density.
One way to specify the correct relic density is to say that we want the dark matter density to be approximately equal to the radiation density at the time of matter-radiation equality; the time of matter-radiation equality is one measure of how much matter there is in the universe versus radiation. So to impose the correct relic density, we can require this at matter-radiation equality (MRE). I'm going to assume here that the freeze-out process occurs during radiation domination, prior to matter-radiation equality; as we'll see, that is a self-consistent assumption. People occasionally work on modified scenarios where there's a period of matter domination in the early universe, driven by a species that has subsequently decayed, and that can change this calculation. But for the moment, I'll assume we're in radiation domination at the freeze-out time; afterwards, the dark matter abundance redshifts like this until we reach matter-radiation equality. So I can use this late-time solution. That was my dark matter number density; to get the mass density I multiply by m_DM, and since the dark matter is non-relativistic at freeze-out and afterwards, I can use non-relativistic results. So I take m_DM n_eq(T_f) (a_f/a_MRE)^3, and that needs to be comparable to the energy density of radiation at matter-radiation equality, which is T_MRE^4. Now, we also know that T varies inversely with a, so I can approximate a_f/a_MRE as T_MRE/T_f. OK, so now I'm interested in constraining this n_eq at T_f. But by definition, the criterion for freeze-out is that n⟨σv⟩ is comparable to H: n(T_f) ⟨σv⟩ ~ H(T_f). And we know that n at freeze-out is approximately n_eq(T_f), because up until freeze-out we follow the equilibrium solution. Furthermore, in the radiation-dominated epoch, how can we approximate H at freeze-out in terms of the temperature? A bit louder? Yeah, good: H(T_f) ~ T_f^2/M_Pl. So now we can substitute that in. What we find is that the dark matter mass times n at freeze-out is m_DM T_f^2/(M_Pl ⟨σv⟩); that times the temperature ratio (T_MRE/T_f)^3 should be comparable to T_MRE^4. This is the criterion that has to be satisfied by a freeze-out process that gives rise to the correct relic density at late times. Now we can start cancelling factors: removing three factors of T_MRE leaves one on the right, and the T_f^2 upstairs cancels against the T_f^3 downstairs to leave one factor of T_f. So this gives us m_DM/T_f ~ T_MRE M_Pl ⟨σv⟩.
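Since the board isn't visible in a transcript, here is the chain of estimates just described, written out with order-one factors dropped:

```latex
\begin{aligned}
&\text{relic condition: } &
 m_{\rm DM}\, n_{\rm eq}(T_f)\left(\tfrac{a_f}{a_{\rm MRE}}\right)^{3}
 \sim T_{\rm MRE}^{4},
 &\qquad \tfrac{a_f}{a_{\rm MRE}} \simeq \tfrac{T_{\rm MRE}}{T_f},\\[4pt]
&\text{freeze-out: } &
 n_{\rm eq}(T_f)\,\langle\sigma v\rangle \sim H(T_f) \sim \tfrac{T_f^{2}}{M_{\rm Pl}}
 &\;\Rightarrow\;
 n_{\rm eq}(T_f) \sim \tfrac{T_f^{2}}{M_{\rm Pl}\langle\sigma v\rangle},\\[4pt]
&\text{combining: } &
 m_{\rm DM}\,\tfrac{T_f^{2}}{M_{\rm Pl}\langle\sigma v\rangle}\,
 \tfrac{T_{\rm MRE}^{3}}{T_f^{3}} \sim T_{\rm MRE}^{4}
 &\;\Rightarrow\;
 \tfrac{m_{\rm DM}}{T_f} \sim T_{\rm MRE}\,M_{\rm Pl}\,\langle\sigma v\rangle .
\end{aligned}
```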
There's then one other thing we can do here. We know this freeze-out occurs once the dark matter becomes non-relativistic, and once the dark matter is non-relativistic, its abundance starts to drop off exponentially. As a consequence, the n⟨σv⟩ quantity that we're comparing to H starts to drop exponentially fast. The result is that the temperature at freeze-out can't be that much smaller than m_DM, because we have an exponential dependence on m_DM/T in the object we're equating to H/⟨σv⟩. If you actually do the calculation carefully, you can do it iteratively: first assume this ratio is one, plug that back into the equation, solve for the freeze-out temperature, and at the second iteration you find the ratio is about 20, and that's actually about the right number. From a detailed numerical calculation, this ratio tends to be about 20 to 30: freeze-out occurs a factor of 20 to 30 below the dark matter mass. That's basically just because e^-20 is of order 10^-9, which is about how much you need to deplete the dark matter density: if the dark matter is in roughly the GeV ballpark, a similar mass range to the ordinary matter, you need to deplete its number density by about 10 orders of magnitude, relative to when it was behaving like radiation, like a hot relic, to get the right relic abundance. But for our purposes, for order-of-magnitude estimates, we can say freeze-out has to occur right around the time when the temperature of the universe drops below the dark matter mass; we're going to approximate this ratio as one. That's about a factor-of-20 error, but parametrically it's a numerical factor that is not super large. So then this tells us that the only thing you require to get the right relic density through this freeze-out process is that the annihilation cross-section of this process be parametrically ⟨σv⟩ ~ 1/(M_Pl T_MRE). It ends up not really depending on the dark matter mass at all, because the effects of the dark matter mass largely cancel out in this ratio. It's just a geometric-mean statement: the mass scale 1/sqrt(⟨σv⟩) is the geometric mean of the two quantities M_Pl and T_MRE. The reason m_DM/T_f is a log-like quantity is that the criterion that determines it is n ⟨σv⟩ ~ H, with n ~ (m_DM T_f)^{3/2} e^{-m_DM/T_f} and H ~ T_f^2/M_Pl. So you're solving a transcendental equation for T_f to figure out what this ratio is, and the ratio appears in an exponential. Suppose I did this iteratively: suppose I fixed T_f to be m_DM everywhere except in the exponential, where this ratio mostly matters; then I would take a log of both sides of the equation to get m_DM/T_f. So m_DM/T_f is a quantity that depends only logarithmically on everything else in the problem, and it's going to be a roughly order-one quantity. In practice, because the amount you need to deplete the dark matter density from its radiation-like abundance is pretty large, about 10 orders of magnitude, the log turns out to be of order 10 as well, and you end up getting m_DM/T_f of about 20 or 30. For an order-of-magnitude estimate, I'm OK saying 20 is the same as one. But to spell out what I just described: taking the log of both sides of the freeze-out condition gives (3/2) log(m_DM T_f) - m_DM/T_f + log⟨σv⟩ ~ 2 log T_f - log M_Pl.
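A minimal numerical version of that iteration (my own sketch; the GeV-ballpark mass and the 1/(100 TeV)^2 cross-section derived below are used as illustrative inputs):

```python
import numpy as np

Mpl = 1.22e19   # GeV, Planck mass

def x_f(m, sigmav, iters=10):
    """Iteratively solve (m T_f)^{3/2} e^{-m/T_f} <sigma v> ~ T_f^2 / Mpl,
    rearranged as x = log(m * Mpl * sigmav / sqrt(x)), with x = m/T_f."""
    x = 1.0                        # first pass: assume T_f ~ m
    for _ in range(iters):
        x = np.log(m * Mpl * sigmav / np.sqrt(x))
    return x

print(x_f(m=100.0, sigmav=1e-10))  # ~24: the "20 to 30" quoted in the lecture
```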
So if I were to solve this, everything here depends only logarithmically on the other parameters of the problem except this one term, so m_DM/T_f is going to be approximately a log of those parameters. Was there another question up there? [Inaudible.] So it's actually 20 to 30, but it's pretty hard to get a log quantity that is 10^6, too. OK, so what does this cross-section correspond to? From yesterday's problem set, we know the temperature of matter-radiation equality. Does anyone happen to remember roughly what it is? About an eV, good, great. So the cross-section that we need is approximately one over (the temperature of matter-radiation equality, about 1 eV) times (the Planck mass, 10^19 GeV). Let's put things in eV units to make it easy: the Planck mass is about 10^28 eV, so ⟨σv⟩ ~ 1/(10^28 eV x 1 eV) = 1/(10^14 eV)^2. So this is about 1/(100 TeV)^2. If I were to say, OK, suppose this is coming from some tree-level Feynman diagram, I might expect the cross-section to scale like some coupling squared divided by some mass scale squared. Note that this statement about the cross-section size has nothing to do with particle physics: it's just a statement that 100 TeV is the scale halfway between the observed temperature of matter-radiation equality and the Planck scale, on a logarithmic scale. But suppose this comes from a cross-section of the form some dark coupling squared divided by some mass scale squared: then the mass scale is of order this dark coupling times about 100 TeV. Now, I've dropped factors of 10 everywhere, so this may be 10 TeV, not 100 TeV. This is what's sometimes known as the WIMP miracle, which can also be stated as: the weak scale is roughly in the ballpark of halfway between the temperature of matter-radiation equality and the Planck scale, on a logarithmic scale. So if we take a coupling alpha of about 10^-2, similar to the fine structure constant of the standard model, then we get a mass scale around a TeV, and that automatically gives us the right abundance. But it doesn't have to work that way; we can have a smaller coupling and a lower mass scale. Another thing you can guess from this: this strategy is only really going to work up to mass scales of about 100 TeV, at least for point-like particles, because at that point your extrapolated coupling becomes a strong coupling. More formally, if you do this carefully, what you find is that getting a cross-section this large for a point particle much above 100 TeV would violate the unitarity bound on the cross-section. Of course, if your heavy dark matter is a composite structure of lighter constituents rather than a point particle, that's a different situation. And if we were to change the cross-section by a factor of two, that corresponds to changing the temperature of matter-radiation equality that you would need by a factor of two, which corresponds to changing the abundance of dark matter in the universe by a factor of two, with higher cross-sections giving rise to lower relic densities.
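For comparison with the annihilation searches mentioned below, we can convert this estimate into lab units (the conversion constant is standard; the factor-of-x_f deficit is exactly the order-one factor dropped above):

```python
# <sigma v> ~ 1/(100 TeV)^2 in natural units, converted to cm^3/s.
hbarc2_c = 1.17e-17                      # (hbar c)^2 * c in GeV^2 cm^3 / s
sigmav = (1.0 / 1e5**2) * hbarc2_c       # 100 TeV = 1e5 GeV
print(f"{sigmav:.1e} cm^3/s")            # ~1e-27 cm^3/s; multiplying by the
# dropped x_f ~ 20 gives ~2e-26 cm^3/s, the canonical thermal relic value.
```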
Okay, so this approach has a couple of nice features. It has a very predictive result for the cross-section that you need: it uniquely predicts a thermal relic cross-section, and you can look for annihilation with that cross-section in the present day using telescopes. The cross-section in this case doesn't really depend on the mass of the dark matter at all, and it doesn't really depend on the particle physics model you use to construct it; it's just a statement about what you need to sufficiently deplete the dark matter. And the cross-section that you need is not a completely crazy number: it's the kind of cross-section you would expect to get from ordinary weak-scale physics. So this is the WIMP miracle. What I just did tells you that if I were to take a generic dark matter particle interacting with the standard model through the W or the Z, so the interaction strength is weak-scale, and the relevant mass, well, depending on the diagram it may be the force carrier mass that matters or it may be the dark matter mass that enters this calculation, then that scenario would work pretty well. I didn't need to assume it for this, but that scenario, heavy dark matter at the TeV scale annihilating through the weak gauge bosons of the standard model, has about the right cross-section to give the dark matter abundance we observe today. So yeah, that's why it's called the WIMP miracle: this cross-section, whose mass scale, as I said, is really just the geometric mean of the temperature of matter-radiation equality and the Planck scale, is within a coupling factor of the weak scale. But people often use the term WIMP today to mean, more generally, dark matter that is in this massive regime, above an MeV, that can be in thermal equilibrium with the standard model at early times, and that gets its abundance through this mechanism, through this channel. So any dark matter that has this kind of annihilation cross-section can be called a WIMP, even if it's going through a mediator that is not the W or Z boson. This is sort of a difference in jargon: some people use WIMP very specifically to mean "interacts through the W and Z", and some people use it to just mean "has interactions of roughly this scale". So that's one way to get the right dark matter abundance. Next I want to talk about the other classic example, which is axions and axion-like particles. But first, are there any more questions about this freeze-out estimate, this picture where annihilation controls the relic density, before I go on and talk about something completely different? [Question from the audience.] Yeah, the question was: can I say a little bit about how this violates the unitarity bound above 100 TeV? So it's not exactly 100 TeV; it's more like 200 TeV, and you need to do a careful calculation to get it right. But basically, this calculation tells you that in order to get the right relic abundance through this process, you need a cross-section of this size. If your cross-section is smaller than this, then you will have too much dark matter left over and you will overclose the universe. If you have some other way to deplete the dark matter, then it can be possible to have dark matter with a smaller cross-section that still gives the right abundance.
If you set that aside for the moment, and assume this is the only way you have to deplete the dark matter, then you must be able to get a cross-section of this size or larger. Now, you can write down a constraint from partial-wave unitarity that says the upper limit on the cross-section as a function of velocity scales basically like 1/k^2, where k is the momentum of the dark matter. So it depends on the dark matter mass: it's like 1/(m^2 v^2). And as we said, freeze-out occurs not very far below the mass of the dark matter, so v is relatively large. So basically, you end up saying that the cross-section cannot be dramatically larger than 1/m^2, where m is the mass of the dark matter. Now, at low velocities that's not true anymore: at sufficiently low velocities, the unitarity bound goes like 1/k^2 = 1/(m^2 v^2), so if I go to sufficiently low v, I can get very large cross-sections. So the cross-section could be much larger than this in the present day; but for this freeze-out calculation, you need it to be pretty large already at the freeze-out epoch. So it's just a statement that once you get masses much above 100 TeV, the cross-section you need is higher than the partial-wave unitarity limit. [Question: is it just a limit on the scaling?] No, there's a limit on the coefficient too. It just comes from the optical theorem; it's basically a requirement that probability is conserved. So it does constrain the scaling, but it also constrains the coefficient: it's like 4 pi (2L+1)/k^2, and whether there's an extra factor of two or not depends on whether it's a fermion. Good question. Yeah, the unitarity limit is kind of interesting, because the estimate I did here uses the cross-section at freeze-out, and you might say: oh, if I make this cross-section extremely velocity-dependent, so it's getting higher and higher at lower temperatures, can I delay the freeze-out for a long time? My number of 20 to 30 here mostly applies to the case where the dark matter has a velocity-independent cross-section. But maybe I can play with that ratio. Again, because that ratio really appears in the log, it's hard to make it dramatically different. But taking into account the velocity dependence of the cross-section up to the unitarity limit, and the possible formation of bound states, does change the unitarity bound on the mass by a factor of two or three; there are some recent calculations of this in the literature. At a first approximation, you can just take the ⟨σv⟩ I'm writing down here to be the ⟨σv⟩ evaluated at the freeze-out temperature, which is usually a factor of 20 or 30 below the mass. So the velocity dependence will come in, but only in the sense that if there's a low-velocity enhancement, you should evaluate it at the velocity relevant to a temperature a factor of 20 or 30 below the dark matter mass.
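A rough version of that unitarity ceiling, under my own assumptions of an s-wave bound sigma*v <= 4 pi/(m^2 v) and a typical freeze-out velocity v ~ sqrt(3/x_f):

```python
import numpy as np

hbarc2_c = 1.17e-17                    # (hbar c)^2 * c in GeV^2 cm^3/s
sigmav_req = 2e-26 / hbarc2_c          # canonical thermal cross-section, GeV^-2
v = np.sqrt(3.0 / 20.0)                # typical velocity at freeze-out, x_f ~ 20

# sigma*v <= 4 pi / (m^2 v)  =>  m_max = sqrt(4 pi / (v * sigmav_req))
m_max = np.sqrt(4 * np.pi / (v * sigmav_req))
print(f"m_max ~ {m_max/1e3:.0f} TeV")  # ~140 TeV: the 100-200 TeV ballpark above
```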
In terms of how the velocity dependence quantitatively changes this calculation, as opposed to just rescaling the cross-section: it can make this ratio a bit larger than 20 to 30, and it can cause freeze-out to be delayed for longer, because if your cross-section is increasing as you go down in temperature, you can stay coupled for longer. If you have a very strong velocity dependence turning on, you can sometimes get situations where the annihilations recouple at late times, or in regions of high density, and then you get extra depletion of the dark matter at late times, which can make a non-negligible difference to the required cross-section. But you need a scaling, over the velocity range you're looking at, that at least briefly, well, "violates the unitarity bound" is the wrong phrase, because it's never actually larger than the unitarity bound, but it scales with v in a way that, if that scaling continued down to arbitrarily low velocities, would violate the unitarity bound. You usually see this where you're sitting right on top of a resonance or something like that: if you're sitting right on top of a Breit-Wigner resonance at some velocity, you can get a spike in the cross-section with velocity, and if you're sufficiently close to the center of the resonance, that spike can sometimes be large enough that you get late-time annihilations that deplete the dark matter density by an order-one factor. So that's a late-time perturbation to this story. Yeah? [Question from the audience.] OK, so the question is: how would this dark matter couple to the Higgs boson? Given that its mass could be rather high, how would it couple to the Higgs? So it's not known. It could, in principle, get its mass through the Higgs mechanism, but it could also get its mass through some other channel. And in the example I did, where dark matter is its own antiparticle, you're probably talking about it being a real scalar or a Majorana fermion, and you can write down masses for these objects that don't go through the Higgs boson. If the dark matter has a direct tree-level coupling to the Higgs, then you would worry about constraints from direct detection experiments, where the dark matter can scatter off visible particles by exchanging a Higgs boson. The limits from that are sufficiently good now, at least above a GeV, that I think it's hard for the dark matter to have a big direct coupling to the Higgs. But it's not known; that's a model-dependent question. OK, sorry. [Another question.] So the question is: would we expect corrections to processes involving the W and Z from their possible coupling to the dark matter? Yeah, there are precision constraints on new electroweak physics. My understanding is that if you're really talking about a really classic WIMP that couples directly to the W and Z bosons, there are fairly stringent constraints on that parameter space from direct detection and from the LHC. But there are examples of survivors: if you have pure wino or pure higgsino dark matter, these are the superpartners of the W and Z bosons; they are in low-lying representations of SU(2), and they couple to those gauge bosons with full strength. The masses you need to get the right abundance for those particles are about 1 TeV for the higgsino and about 3 TeV for the wino.
And as far as I know, they're not ruled out by any direct, collider, or precision searches; they're definitely something you can look for, and something that we do look for, but they're at sufficiently high mass scales that they're not fully testable at present. But that's the TeV scale. OK, so now let's talk about something completely different, and talk about axions. I will briefly talk about the strong CP problem, because I think other people are going to talk about it in more detail, and then I'll talk a bit about the cosmology. The strong CP problem, at a basic level, is that the standard model Lagrangian should in principle have a CP-violating term that looks something like theta G_{mu nu} G-tilde^{mu nu}, where G is the gluon field strength. We know there is CP violation in the quark sector of the standard model, so there's no obvious reason for this parameter theta to be zero. But if this term is present, it induces a neutron EDM, a neutron electric dipole moment, of order 5.2 x 10^-16 theta e cm. So this seems fine: we should measure the neutron electric dipole moment, and then we can measure this parameter theta. As it happens, what we have at the moment is an upper limit on the neutron electric dipole moment. There is probably a more up-to-date upper limit than this one, which I got a few years ago, but at the time the upper limit was 3 x 10^-26 e cm, which tells you that this theta parameter is less than about 10^-10. We don't have an obvious reason within the standard model for why this parameter should be less than 10^-10. So why is it so small? There are a number of possible candidate solutions to this problem, but the one that axion physicists like, and it is a nice, elegant solution, is to say: maybe this parameter is so small because it's not just a fixed parameter of the standard model; it actually represents some dynamical field, and that field is driven to a small value. Strictly speaking, this is the QCD axion solution; people also talk about axion-like particles, for which this is not the motivation, but which have similar properties. The QCD axion solution is to replace the parameter theta by a/f_a, where a is a field with mass dimension one and f_a is some scale; the coupling is 1/f_a. (My notation is kind of terrible here, because a also means my cosmological scale factor; on the board I'll put a tilde over the field to distinguish it from the scale factor.) There is an effective potential for this axion, which may be derived in later lectures today, I'm not sure how much they're going to do, so I'm just going to quote it; there's an old but decent review on the strong CP problem by Michael Dine, in TASI lecture notes, hep-ph/0011376. From those notes, and you may also see it derived later today, one finds an effective potential for this field which looks like a cosine: it has coefficients set by the parameters of QCD, times a cosine of a/f_a. So the basic idea of the axion solution is that I can start this field a out anywhere on this potential; maybe I start it up here, maybe up there, where the horizontal axis is the value of the field a/f_a.
So I can start the field here, or here, or here, but then it will roll down this potential in the early universe and wind up at a value of zero. There are, in principle, multiple minima, since the cosine repeats, but we're only going to focus on the case where the closest minimum is the one at zero. So the axion solution is that a can evolve toward a = 0, which is a local minimum, and thus you can drive this parameter that controls the level of strong CP violation towards zero. That's the idea of the solution. So let's look at this minimum at a = 0. We can expand the potential around it, and the leading-order term is going to have an a^2 piece in it, which is an effective mass term. Doing that, we can pull out an expression for the mass of the axion, which just depends on this coupling scale f_a and various parameters of QCD. What we get through this estimate is m_a ~ [sqrt(m_u m_d)/(m_u + m_d)] m_pi f_pi / f_a, where m_u and m_d are the masses of the up and down quarks, m_pi is the mass of the pion, and f_pi is the pion decay constant. The actual numbers: f_pi is around 93 MeV, and m_pi is around 135 MeV. So these are all known parameters except f_a. The main thing for you to take away is that there's an inverse relationship between the mass of this object and how strongly it couples to the standard model. We replaced theta with a/f_a, so 1/f_a controls the coupling strength: high f_a means a weak coupling, and high f_a also means a small mass. So as you push the mass smaller and smaller, that corresponds to a more and more weakly coupled object. This is great for dark matter, because we know that very light dark matter had better not be in thermal contact with the standard model: it should be cold, it should be isolated, it should couple very weakly. So this is attractive. We can put in the numbers explicitly (why have I chosen this particular number to calibrate to? We'll see in a moment): for an order milli-eV axion, you're talking about an effective scale for the coupling of 10^10 GeV. So this is a very light particle connected, potentially, to very high-scale physics. Bottom line: a light axion is a weakly coupled axion.
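Plugging in the numbers just quoted (the up and down quark masses, roughly 2.2 and 4.7 MeV, are values I've inserted; they aren't given in the lecture):

```python
import numpy as np

# m_a ~ [sqrt(m_u m_d)/(m_u + m_d)] * m_pi * f_pi / f_a, everything in GeV
m_u, m_d   = 2.2e-3, 4.7e-3    # up/down quark masses (inserted rough values)
m_pi, f_pi = 0.135, 0.093      # pion mass and decay constant
f_a        = 1e10              # the coupling scale quoted for a ~meV axion

m_a = np.sqrt(m_u * m_d) / (m_u + m_d) * m_pi * f_pi / f_a
print(f"m_a ~ {m_a * 1e12:.2f} meV")   # ~0.6 meV for f_a = 1e10 GeV
```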
So let's look at this as a possible dark matter candidate and think about what mass scales are allowed for it. The first question you might ask is: can this field decay? As we said earlier, if we want it to be dark matter, it had better not decay within the lifetime of the universe. So is this field stable? In general, as you start with any dark matter candidate, one of the first questions you want to ask is whether it's stable. Well, the axion, we know, has a coupling in the Lagrangian that looks like (a/f_a) G_{mu nu} G-tilde^{mu nu}. So this axion can decay into standard model stuff, depending on how heavy it is; in general, axions are not absolutely stable, and in particular they can decay into photons. So now it's a quantitative question: what is the lifetime of this decay? You can compute this, you can draw the diagram for axion to two photons, and you find the lifetime. It's controlled by this coupling, this 1/f_a that controls all the couplings to the standard model, and it turns out to be about 10^24 seconds for eV-mass axions. But it's not controlled just by the coupling: if my notes are correct, which I will double-check, there's an m^-5 scaling in there, so the lifetime depends on both the coupling and, separately, on the mass. Now, the lifetime of the universe is a few times 10^17 seconds. So 10^24 seconds is fine for eV axions, but there's this very strong scaling with mass: if I take the mass to be a keV instead, the factor of 1000 gets raised to the fifth power, that's 10^-15, and suddenly this is no longer long-lived compared to the age of the universe. What this tells us is that in order to be stable on timescales of the lifetime of the universe, we require m_a to be lighter than about 20 eV. So this has to be pretty light; we're definitely in this sub-20-eV mass range.
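A quick check of that stability bound using the numbers just quoted (the m^-5 scaling comes from two powers of the coupling 1/f_a, which is proportional to m_a, and three powers of m_a from phase space):

```python
# tau(m) ~ 1e24 s * (1 eV / m)^5; demand tau > age of the universe.
tau_at_eV  = 1e24      # s, lifetime quoted for a 1 eV axion
t_universe = 4e17      # s, a few times 10^17 s

m_max = (tau_at_eV / t_universe) ** (1.0 / 5.0)
print(f"m_a < ~{m_max:.0f} eV")   # ~19 eV, matching the ~20 eV bound above
```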
So then the next question we might ask: does it come into thermal equilibrium with the standard model? It had better not, based on this: we know it has to be lighter than 20 eV, so it can't be a thermal relic; it would be too fast-moving. So that was question one; question two is whether it equilibrates with the standard model. Again, this process is controlled by the coupling 1/f_a: as we push the axion mass down, we push f_a higher and the coupling smaller. So the question is over what mass range it equilibrates with the standard model. It turns out the answer is: if the mass is greater than about 10^-3 to 10^-2 eV, then it does equilibrate with the standard model, and in that case the axion is hot DM. Now, we said earlier that about an eV is what you need for a hot relic to be 100% of the dark matter without overclosing the universe. So if this is at 10^-3 or 10^-2 eV, it can be a roughly 1% component of the dark matter of the universe; it can be a hot relic, that's fine, it's not excluded, but it's not all the dark matter. So if we want the axion to be all the dark matter, we need the mass to be lighter than about 10^-3 eV, one milli-eV, which is probably why I used a milli-eV as the calibration for the mass over there. OK, so for axions we're now looking at sub-milli-eV particles which have never equilibrated with the standard model. So how does their abundance work? The first time I heard about axions, I thought: wait, hang on, how is this even called dark matter? We're talking about something lighter than a milli-eV; the temperature of the universe today is 2 x 10^-4 eV; it seems like this stuff should still be relativistic. It's lighter than at least one of the neutrinos! But no: for it to be called dark matter, we want it to be highly non-relativistic today, and the point is that while this stuff is very light, it's also very cold, and that's fine if it's never been in thermal contact with the standard model. So for this case, for it to be 100% of the dark matter, we need it to be light and cold and non-thermal. Now let's think about how this axion field, which is rolling in this potential, would evolve in the early universe. If the axions are this light, it is actually reasonable to think of them as fluctuations on a classical field: we can look at the evolution of the vacuum expectation value of the axion field, and that will tell us most of what we need to know, because we are talking about very light, very low-momentum, very cold particles. So let's consider the axion as a classical scalar field evolving in this potential. Technically, what we're looking at is a field theta(t), a function of time, which is the expectation value of my axion field. The equations of motion for this field need to take into account that it's evolving in the potential and, simultaneously, that the universe is expanding. We can get them just by writing down the Euler-Lagrange equations from the action; again, this is kind of a sketch, so I'm just going to write down what they look like: theta-double-dot + 3H theta-dot + V'(theta) = 0. This classical field in the expanding universe has its kinetic term; it has a term corresponding to the Hubble expansion, like the 3Hn term we saw earlier, which describes the dilution of the field; and it has a term that comes from the derivative of the potential. Close to the minimum, we can approximate the potential by its quadratic piece, the a^2 term in this object, which is exactly what gave us the axion mass term, so V'(theta) ~ m_a^2 theta near the base of the potential. Now, there's a subtlety here for the QCD axion: this mass is not constant with temperature. You see that the mass we wrote down depends on a bunch of parameters: the pion mass, the pion decay constant, the masses of the quarks. Above and below the QCD scale these are different, so the axion mass has a pretty non-trivial temperature dependence at temperatures around the QCD phase transition: it's effectively massless above the transition and picks up a mass as you go through it. I'm going to write it, at least temporarily, as if it were constant, but keep in mind that around the QCD phase transition that's not true; I'm doing a simplified version of this calculation. OK, so let's again look at this equation of motion and think about how it behaves in different limits. Where previously we compared the Hn term against the ⟨σv⟩n^2 term, now the comparison of the 3H theta-dot term against the m_a^2 theta term determines what kind of behavior dominates the dynamics. And again, there are broadly two regimes. When the mass of the axion is much less than the Hubble parameter, the mass term can be effectively neglected; all the remaining terms are derivative terms, so a valid solution is just that the field is constant.
So we say that this field is frozen at whatever its initial value was. And for the QCD axion, this is always true before the QCD phase transition, because the field doesn't yet have a mass. Then, once the Hubble parameter drops below the axion mass, the field begins to oscillate in its potential. If we could completely ignore the Hubble term, we would know how to solve this differential equation: the second derivative of an object equal to minus m_a² times that object is just an oscillatory solution. So if we were to drop the Hubble term, the solution would just be the field oscillating with frequency m_a; at zeroth order, it looks like cos(m_a t) or sin(m_a t), depending on what our initial conditions are.

But we can do a little better than that, because we can't drop the cosmological expansion term altogether; it's still present. The approximate solution in this oscillatory regime is an overall prefactor θ₀, times a slowly varying function of t that describes the Hubble dilution, times an oscillatory piece whose period is governed by the axion mass. And you can show (again, I'm just sketching the calculation, so I'm not going to do the details for you) that this slowly varying envelope, call it f(t), scales as a^(-3/2), where a is the scale factor. The prefactor θ₀ is just describing the initial condition, the frozen-in value. So the evolution of this field in the early universe is: it starts out as a constant, and once the Hubble parameter drops below the mass scale, the field begins to oscillate, with its amplitude falling off with this a^(-3/2) scaling.

Okay, so when thinking of this as a dark matter candidate, we're interested in understanding how its energy density behaves, right? If we want it to behave like matter, its energy density had better redshift like a^(-3). So let's check. The energy density is the potential energy stored in the field plus the kinetic energy of the field. Again, close to the base of the potential, we can approximate the potential energy by the mass term, and the kinetic energy comes from the time derivative of this field. When you take that derivative, you get some extra terms from differentiating the envelope f(t), but f is slowly varying, so we can drop those to a first approximation. The main part of the derivative comes from differentiating the cosine: for θ̇ you get something that looks very similar, except that instead of cos(m_a t) it's m_a times sin(m_a t). So the oscillatory pieces cancel when you add the two contributions: the oscillation is just the energy sloshing between being stored in the potential and being stored in the velocity of the field, and in the end you're left with just ½ m_a² θ₀² f(t)².
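As a quick numerical sanity check of this sketch (my own illustration, not part of the lecture): the snippet below integrates the equation of motion with a constant mass in a radiation-dominated background, where H = 1/(2t) and a ∝ t^(1/2), all in arbitrary units, and confirms the frozen-then-oscillating behavior and the a^(-3) dilution of the energy density.

```python
# Minimal sketch (assumed setup, arbitrary units): integrate
#   theta'' + 3 H theta' + m^2 theta = 0,  with H = 1/(2t),  a ~ t^(1/2),
# and check that rho = (theta'^2 + m^2 theta^2)/2 scales as a^-3
# once the field oscillates (H < m). The constant-mass assumption
# ignores the QCD-axion mass turn-on discussed above.
import numpy as np
from scipy.integrate import solve_ivp

m = 1.0        # axion mass (held constant here)
theta0 = 1.0   # initial misalignment angle, frozen while H >> m

def eom(t, y):
    theta, dtheta = y
    H = 1.0 / (2.0 * t)  # radiation-dominated Hubble rate
    return [dtheta, -3.0 * H * dtheta - m**2 * theta]

t = np.logspace(-2, 3, 4000)  # from H >> m through H << m
sol = solve_ivp(eom, (t[0], t[-1]), [theta0, 0.0],
                t_eval=t, rtol=1e-10, atol=1e-12)

theta, dtheta = sol.y
rho = 0.5 * dtheta**2 + 0.5 * m**2 * theta**2
a = np.sqrt(t)  # scale factor in radiation domination

late = t > 100.0  # well inside the oscillatory regime
ratio = rho[late] * a[late]**3
print("fractional spread of rho*a^3 at late times:",
      ratio.std() / ratio.mean())  # small => rho indeed ~ a^-3
```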
So the time dependence in the total energy density is partly controlled by the mass, but at late times that mass is almost temperature-independent: once you get down through the QCD phase transition, the pion mass doesn't change very much, so m_a is just a constant. The θ₀² is just an initial condition, and so the scaling is entirely set by f(t)², which goes as a^(-3). So the energy density stored in the field, once it begins oscillating, indeed redshifts at late times like you would expect for matter. A classical field like this, evolving in a potential that is quadratic around its base, behaves like matter from the perspective of the expansion of the universe. It's not coupled to the standard model, so it doesn't experience radiation pressure, and it can be very cold precisely because it was never coupled to the standard model: we can choose its momentum to be very small, and we can choose the classical field approximation to be good. So despite the fact that this is very different from our earlier picture of heavy WIMPs, heavy particles floating around in space, this kind of scenario actually works very well as a candidate for cold dark matter; this can act as CDM.

Well, modulo one thing that I haven't said yet, which is: what is the actual abundance? You can see from here that this late-time energy density is partly determined by the axion mass, but it's also determined by this initial condition. And the field itself is dimensionful: the initial value of the A field enters through the potential, which went as cos(A/f_a), where A/f_a varies between -π and π. So let's define a new, dimensionless angle θ = A/f_a, which varies between -π and π. Then we can write this energy density as ρ ∝ m_a² f_a² θ₀² times something that scales like a^(-3).

So in the axion case, there are two scenarios for the abundance. One is where the value of this initial condition, this angle, is determined prior to inflation. If this misalignment angle is set before inflation, then inflation spreads that single value of the angle over all of our observable universe: everywhere we look in the universe, we would expect this initial angle to be the same. In that case, it can really take any value; we don't know what it is. You might say that it's more natural for it to be an order-one value, and by definition it can't be larger than π, but in principle I can choose this angle extremely small, and there's nothing wrong with that. People sometimes talk about an anthropic selection of this value: if there are patches of the universe with a very low value of θ and patches with a very high value, and the other axion parameters are such that we need it to be a very low value, then maybe humans only exist in the inflationary patches that had a very low value to begin with. So in this case, it is possible for the abundance to be completely determined by an initial condition. This θ₀ is called the misalignment angle, and this is sometimes called the misalignment mechanism. I realize that it's lunchtime, so I'm just going to take a couple more minutes to finish up this story.
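Collecting the pieces of this misalignment story into one formula (order-one factors dropped throughout, and a_osc denoting the scale factor when H has fallen to roughly m_a and the oscillations start):

```latex
\rho_a(t) \;\sim\; \tfrac{1}{2}\, m_a^2\, f_a^2\, \theta_0^2
  \left(\frac{a_{\rm osc}}{a(t)}\right)^{3},
\qquad \theta_0 \in [-\pi,\,\pi].
```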
The other possible case is that this θ value was set subsequent to inflation. In that case, the universe that we see today is comprised of many patches that were causally disconnected in the earlier universe; those patches could each have developed their own value of θ, and the universe we see today would reflect some kind of spatial averaging over the possible values of that initial angle. If that's the case, then we would expect the effective θ to basically be the RMS value of θ over all the possibilities, which is an order-one number. However, in this case the calculation gets more complicated, because this classical field now has different values in different parts of the universe. When those causal patches come into contact with each other, you can get strings and domain walls, topological defects, corresponding to the variations in this classical field. I'll say that our host Giovanni Villadoro has done a lot more work on scenarios like this than I have, so if you want to know more detail about it, you should ask him.

In each of these cases, we can do a more careful calculation than I've done here, where you carefully take into account the temperature dependence of this mass and work out the late-time abundance of the axion. It more or less follows the sketch above; you just have to be careful about how the mass of the axion varies through the QCD phase transition. In the case where the θ angle is set before inflation, there are no complicated topological defects, so that's really all you have to follow. For that case, the PDG quotes a relic density of axions, relative to the relic density of dark matter, scaling as θ₀² times (6 micro-eV over m_a) to the power 1.16, so that you get the right abundance with an order-one θ angle for a mass of about six micro-eV. If you're willing to go to a smaller θ angle, then much smaller axion masses can also be viable. So that's the case where θ is set before inflation.

For the other case, where θ varies over the different patches, there's a recent paper trying to simulate this, following up on previous simulations; the most recent one I know of is this Buschmann et al. paper, which came out earlier this month. The dependencies are similar, because recall that the mass of the axion is inversely proportional to f_a, the axion decay constant. They find that you get the right axion abundance, with this RMS θ value, for an axion mass of about 25 micro-eV. I should note, though, that this simulation makes an assumption: I said that you form strings and domain walls, and when those topological defects decay away, they can produce more axions, which can potentially be a significant contribution to the abundance. This simulation finds that there's not a big contribution from those kinds of effects, but one of our hosts here has written a paper, 1806.04677, which suggests that this effect can potentially be larger, in which case this should be read as sort of a minimum axion abundance; you could potentially have a larger number as well.
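As a rough numerical illustration of the pre-inflationary case (my sketch, not the lecturer's calculation: it hard-codes the approximate PDG-style scaling quoted above, whose prefactor and exponent carry order-one uncertainties from the temperature-dependent mass):

```python
# Hedged sketch of the pre-inflationary misalignment estimate,
# Omega_a h^2 ~ 0.12 * theta0^2 * (6 micro-eV / m_a)^1.16;
# prefactor and exponent are only good to order-one factors.
def omega_a_h2(m_a_microeV, theta0):
    """Approximate axion relic density, Omega_a * h^2, from misalignment."""
    return 0.12 * theta0**2 * (6.0 / m_a_microeV)**1.16

# An order-one angle gives the observed Omega_dm h^2 ~ 0.12 near 6 micro-eV:
print(omega_a_h2(6.0, 1.0))   # ~0.12

# A lighter axion (larger f_a) overproduces unless theta_0 is tuned small,
# which is where the anthropic-selection discussion comes in:
target, m_a = 0.12, 0.06      # in micro-eV
theta0_needed = (target / omega_a_h2(m_a, 1.0)) ** 0.5
print(theta0_needed)          # ~0.07
```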
So, we've talked today about two very different ways of getting the dark matter. One is: you have a large bath of dark matter in thermal contact with the standard model in the early universe; as it becomes non-relativistic, its abundance is depleted and eventually hits this thermal relic density floor, and that predicts the annihilation cross-section we should see today. And then we've talked about this axion scenario, where a classical field sloshing around the bottom of its potential also has the correct cosmological energy-density scaling to behave as cold dark matter, and for masses of tens of micro-eV and smaller it can potentially be a good dark matter candidate. Questions? We have time for one or two short questions. Thank you, by the way. There's one question there, I think. Thank you.