Is that better? Okay, cool. So today, following on from Tracy's excellent lectures, I'm going to talk about looking for dark matter. In the previous lecture we were talking about looking for new physics at much higher scales, not necessarily anything to do with dark matter at all; here we're going to talk about how we might actually look for dark matter in the lab. Now, you've heard in previous lectures about all the evidence for dark matter, but so far that evidence is purely gravitational: it all comes from looking at the motion, presumably under gravity, of Standard Model matter on large scales in the universe. For all we know, dark matter may interact purely gravitationally, and in that case it's going to be extremely difficult to get any more information about it, apart from these astrophysical probes. But as you learned in the previous lecture, there are lots of well-motivated models (for example, electroweak-scale particles of various kinds, or the QCD axion) and production mechanisms for dark matter, such as the thermal freeze-out mechanism you were taken through, where you naturally do have a coupling to Standard Model particles beyond the bare gravitational coupling. And if you have that, you can try to look for its effects in the lab. We presumably have some local dark matter density: we've certainly measured the dark matter density in the local environment of the galaxy, meaning the nearest few kiloparsecs or so. And unless something extremely strange is going on, there's dark matter floating around in the neighbourhood of our solar system, and passing through us at this very moment. So if it interacts with Standard Model matter through non-gravitational means, we can try to probe this new interaction with things in laboratories.
Okay, so the usual way in which you hear about this, and the best-developed experimental program so far, is the program looking for WIMPs: heavy particles which will very occasionally slam into a Standard Model nucleus or atom and deposit a lot of energy, which you can then detect. These are massive underground detectors, now reaching multi-ton scales, looking for these very rare events. That program has been an extremely impressive experimental achievement and has ruled out many models, but there is still parameter space in which it can potentially find something, and I think you'll hear a bit about that in tomorrow's lectures. What I'm going to talk about is a somewhat different experimental program, looking for much lighter dark matter, in particular the kind of light bosonic dark matter that you heard about, exemplified by things like the QCD axion. There, instead of thinking of dark matter as discrete particles, it's more like a radio wave: some coherent oscillation at high occupation number. And for those kinds of things, there are very different ways of trying to look for the dark matter. So, as a general introduction to the parametrics we're thinking about, and to follow on from the calculations Tracy was doing earlier today: like she was saying, the easiest way to get a population of light bosonic dark matter is through what's called the misalignment mechanism. In the early universe, set up by inflation or whatever, we have a scalar field phi which is approximately equal to some constant value phi_0 at early times, throughout the whole universe. Then, once the Hubble scale drops below the order of the mass of this particle, it starts oscillating in its potential, and at later times the amplitude is suppressed as (a_osc / a)^(3/2).
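To put a rough number on the "radio wave" picture, here is an order-of-magnitude sketch (my own illustrative numbers, not from the lecture) of the occupation number for a 10^-4 eV axion-like field making up the local dark matter density:

```python
# Rough occupation-number estimate for light bosonic dark matter.
# Illustrative values; natural units (hbar = c = 1), everything in eV.
m = 1e-4                       # axion-like mass in eV
rho = 0.4e9 * (1.973e-5)**3    # 0.4 GeV/cm^3 in eV^4 (1 cm^-1 = 1.973e-5 eV)
n = rho / m                    # number density in eV^3
lam_dB = 1.0 / (m * 1e-3)      # de Broglie wavelength for v ~ 1e-3, in eV^-1
N_occ = n * lam_dB**3          # quanta per de Broglie volume
# N_occ comes out around 1e19: enormously occupied, hence a classical wave.
```

With so many quanta per coherence volume, treating the field as a classical oscillating background, rather than as individual particle collisions, is the right picture.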
So, you get some dilution of this thing as the universe expands, and some fast oscillation as well. Tracy took you through the case of the QCD axion, where you have a very definite prediction for the mass of this thing and some kind of prediction for the initial abundance; but more generally, if we allow these parameters to be whatever, what kind of values do we get? Let's look at the time of matter-radiation equality, because that's an easy time to compare things and see what abundance we've got. At equality, the total energy density in our phi field is parametrically set by m_phi^2 phi^2 (the kinetic term just carries the oscillating part of that), with the amplitude diluted from its initial value phi_0. So rho_phi(eq) ~ m_phi^2 phi_0^2 (T_eq / T_osc)^3, since in radiation domination the scale factor goes like the inverse temperature, up to numerical factors involving the number of species. So that's parametrically the energy density we're going to get. The time at which it starts oscillating is when Hubble is of order m_phi, so H_osc^2 ~ m_phi^2 ~ T_osc^4 / M_Planck^2. Just to explain this: it comes from the Friedmann equation for an FRW universe, H^2 = (8 pi G / 3) rho, with G = 1 / M_Planck^2 up to factors, and the energy density in radiation domination going as T^4. That's where this is coming from; we're just repeating the same kind of calculation. Yes, oh, yes, absolutely: we want m_phi^2 to be of order H_osc^2, which, by the Friedmann equation, is proportional to T_osc^4 / M_Planck^2. OK.
So, plugging this all back in: we've got m_phi^2 phi_0^2 times T_eq^3 divided by T_osc^3, and T_osc ~ (m_phi M_Planck)^(1/2), so T_osc^3 is (m_phi M_Planck)^(3/4) to the appropriate power, namely (m_phi M_Planck)^(3/2) once squared out. Putting this all together, rho_phi(eq) ~ m_phi^(1/2) phi_0^2 T_eq^3 / M_Planck^(3/2), and we want this to be of order T_eq^4, the characteristic energy density of both matter and radiation around matter-radiation equality. Again, we're ignoring numerical constants; this is just to give you the parametrics. Plugging in numbers, T_eq is around an eV. If we take the mass to be as low as it can be, around 10^-19 eV (I think Tracy talked a bit about how structure formation means light bosonic dark matter can't be too light, otherwise things start going wrong), then the phi_0 we get out should be about 10^16 GeV in order to make this work. The point of writing this out is to illustrate that fairly generally, not just in the QCD axion case (which is a specific example with slightly funny behaviour), if you want your dark matter particle to be light, then the scales associated with at least its initial value have to be rather large. And this points you towards the kinds of models where you naturally have some light particle with couplings suppressed by some large scale, and the same kind of thing occurs in a number of other models.
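The parametric estimate above can be checked numerically. This is a sketch with rough round-number inputs (T_eq ~ 1 eV, M_Planck ~ 10^19 GeV), not precise cosmology:

```python
# Misalignment parametrics: find T_osc from H(T_osc) ~ m_phi, then the
# initial amplitude phi0 needed so that rho_phi(T_eq) ~ T_eq^4.
M_Pl = 1.2e19    # Planck mass in GeV
T_eq = 1e-9      # matter-radiation equality temperature, ~1 eV, in GeV
m_phi = 1e-28    # 10^-19 eV in GeV, roughly the lightest viable mass

T_osc = (m_phi * M_Pl) ** 0.5                  # from m_phi^2 ~ T_osc^4 / M_Pl^2
phi0 = (T_eq * M_Pl**1.5 / m_phi**0.5) ** 0.5  # from rho_phi(T_eq) ~ T_eq^4
# phi0 comes out around 10^16 to 10^17 GeV, a GUT-ish scale.
```

So the lightest allowed masses indeed point to initial field values near 10^16 GeV, which is the observation that motivates large-scale-suppressed couplings.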
So, we're looking at those kinds of models, and, like I said, that leads you towards things that can be good dark matter candidates: they are light and weakly coupled, so stable, and they can be cold dark matter through this kind of misalignment mechanism. Okay, so now we want to talk about how we're going to try to detect these things in the lab. We have an oscillating background field which is all around us; how do we detect it? In the QCD axion case, what we did was take the theta_CP term in the Standard Model Lagrangian, the one multiplying G G-tilde, and promote it to a field: it becomes theta_CP plus some axion field which depends on time and space. The simplest generalisation of this is to do the same with other terms in the Standard Model Lagrangian. For example, we have the kinetic term for the photon, with its usual coefficient of a quarter; if we promote that coefficient to include a field, some factor like (1 + phi(t, x) / Lambda) for some scale Lambda, then we have an interaction of this field phi with the Standard Model in a different way. Similarly, phi can multiply the fermion terms, giving varying fermion masses, et cetera. This general idea occurs in many kinds of theories: the Standard Model couplings, which look constant, can have contributions from fields which vary in space and time. In particular, a dark matter field which is oscillating will look, in a lab, like an oscillation of what we think of as fundamental Standard Model constants. So how do we look for this? Basically, we take the kinds of techniques we use to perform extremely precise measurements within the Standard Model and modify them to look for things like oscillations.
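As a toy illustration of the "oscillating constants" idea (my own example, with a made-up oscillation amplitude): if two observable frequencies scale as different powers of alpha, their ratio inherits a fractional oscillation set by the difference of the powers.

```python
import numpy as np

# Toy model: transition 1 scales as alpha^4, transition 2 as alpha^2.
eps = 1e-6                      # hypothetical fractional oscillation of alpha
t = np.linspace(0.0, 2.0, 4001)
alpha = 1.0 + eps * np.cos(2 * np.pi * t)   # alpha(t) / alpha_0
ratio = alpha**4 / alpha**2                 # frequency ratio, up to a constant
frac_amp = (ratio.max() - ratio.min()) / 2
# frac_amp is approximately (4 - 2) * eps = 2e-6
```

A common-mode variation (both transitions scaling the same way) would cancel in the ratio, which is exactly the point made below about needing transitions with different sensitivities.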
One of the most precise kinds of instruments we have are atomic clocks, so I'm going to talk a bit about how we can use those to look for this kind of time variation of fundamental constants. An atomic clock is basically a way to measure time very precisely using the fact that the transition energy of some atomic system is very stable: we have our atom, with electrons going around, and some excited state they can go up to, with some energy splitting. And this energy splitting is pretty robust to external effects, or can easily be made so by making a very clean environment in your laboratory. The way we use this to measure time is, like we were talking about with the EDM experiment yesterday, an interference-type experiment. Say we have our two states, which we'll call 0 and 1; state 0 has energy E_0, and state 1 has energy E_1 = E_0 + Delta E. We start with our atom in the ground state 0. We then kick it in some way, applying a laser field or whatever, in what is called a pi-over-2 pulse, which is exactly set up so that it takes the 0 state into an equal superposition of the ground state and the excited state. That's the first step: we set up our superposition. Once we've done this, we leave it there for some amount of time. After time T, the state picks up a factor of 1 over root 2 times the usual phase evolution e^(i E_0 T), but the excited state has accumulated some extra phase e^(i Delta E T) relative to the ground state.
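The pulse sequence just described, together with the second, inverse pulse that comes next, can be sketched with 2x2 matrices. This is a minimal sketch assuming an ideal pi-over-2 pulse and dropping the overall e^(i E_0 T) phase:

```python
import numpy as np

# Ramsey sequence: pi/2 pulse, free evolution, inverse pi/2 pulse.
U = np.array([[1.0, -1.0], [1.0, 1.0]]) / np.sqrt(2)  # ideal pi/2 pulse
delta = 0.3                            # accumulated phase Delta_E * T (hbar = 1)
F = np.diag([1.0, np.exp(1j * delta)])  # free evolution, global phase dropped
psi = np.linalg.inv(U) @ (F @ (U @ np.array([1.0, 0.0])))
P_transition = abs(psi[1]) ** 2
# equals sin^2(delta/2), which is ~(delta/2)^2 for a small phase
```

Varying `delta` here reproduces the dependence of the transition probability on the energy splitting and the wait time that the lecture derives in the next steps.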
Now we bring it back: we apply the inverse operation, the inverse of the pi-over-2 pulse (1/sqrt(2)) [[1, -1], [1, 1]], to our state, which up to an overall phase is (|0> + e^(i Delta E T) |1>)/sqrt(2). Expanding it all out, we get (1 + e^(i Delta E T))/2 in the original state and (e^(i Delta E T) - 1)/2 in the excited state, which we can rewrite in sines and cosines. If the phase Delta E T is small, the state is approximately |0> plus i (Delta E T / 2) |1>. So we see that, over this whole sequence of pulse, wait, and pulse again, there is a transition probability from the initial ground state into the excited state which depends on the energy splitting and the time we waited. This kind of thing is the basis of atomic clocks being able to tell very precise time: for an energy splitting typical of an optical transition, Delta E of order electron volts, Delta E times T is of order 10^15 if T is a second. So even a tiny difference in time leads to a large difference in phase; that's effectively the large number that's coming in. Okay, but we're not actually going to use it to tell time; we're going to see if there are any extra contributions to this splitting that we didn't expect from the Standard Model alone. The way you'd implement the pulse is to shine a laser at the atom, tuned to the frequency of the transition. In all of this I should say that you want the excited state to be rather stable: if it decayed during the waiting time, that screws the whole thing up, so the transition from the ground state to the excited state should be some forbidden transition, which
means it doesn't happen spontaneously; but if you hit it with a large enough number of photons, i.e. a laser at exactly the right frequency, then through Rabi oscillations you will transfer some of the amplitude from the initial state into the excited state. So you hit it with a laser pulse at the correct amplitude and duration to give you exactly this rotation. Putting it all together, an extra phase accumulated during the wait comes out as an extra angle between the two pulses. In these sequences you generally want to be very careful that you're implementing things in a way that is robust to the lasers having some noise or some offset, et cetera; there are many experimental details to make all of that cancel out in the final measurement, but this is the extremely schematic overview of what happens. Okay, so that's the kind of interferometry experiment that an atomic clock represents. How can we use this to look for variation of our constants? The overall point is that the energy splitting of the levels of our atom depends on the Standard Model parameters: Delta E depends on alpha_EM, on the mass of the electron, on the mass of the proton, and so on. So if those vary, our splitting becomes Delta E = Delta E_0 (1 + g (phi_0 / Lambda) cos(m_phi t) + ...), with some constants out front, where g parameterises how sensitive this particular energy level is to the variation. Now, of course, if all of your energy levels changed in the same way, that would be rather hard to tell: one clock would be going a bit faster, but all the other clocks would be going faster as well, so it would be rather
hard to tell the difference; you'd need one region with a different dark matter density compared to another, or something like that, which would be difficult. But helpfully, different transitions depend on the constants, alpha_EM, m_e, m_p, et cetera, in different ways. For example, if we have a transition between different radial levels of an atom, then the energy splitting between radial numbers n and n' goes approximately as e^4 m_e (over some factors of pi) times (1/n^2 - 1/n'^2), in the usual Bohr-model way. Whereas a hyperfine splitting, Delta E_hyperfine, depends on the spin-spin interaction, so it involves 1/m_e and 1/m_p (the latter giving you the nuclear magneton) times the appropriate dimensional factors, the Bohr radius cubed or whatever. When you work that out, it depends on the mass of the electron and the mass of the proton in different ways, and on the electron charge to a different power as well. So if we have different splittings which depend on the constants in different ways, then consider the ratio Delta E_1 / Delta E_2: say this goes as alpha to some power, like alpha^4. If alpha has some variation, alpha = alpha_0 (1 + g_alpha phi cos(...)), then the ratio has a variation in time which is 4 times this. So we get some oscillation of the frequency ratio of these two transitions, and if we have two different clocks based on two different transitions, we can compare them. There's technology for doing this; it's a rather tricky thing, but frequency comparisons can be done extremely precisely, and are one of the easiest things to do in a lab, for reasons that are somewhat
technical, but the technology is extremely good these days. So we can do that and try to look for this signal. Then the question is what kind of precision we can actually expect. From all of this, the phase accumulated in one of these interferometry sequences goes as Delta E times T. So in a setup where in the usual case we wouldn't have got any transition, we have a sensitivity to extraneous influences of about one part in 10^15 if we're running for a second or so, which is about what you do. So we can tell a phi_0 over Lambda of about 10^-15 from a single such sequence, and if we do it N times, in the usual way, we can improve the sensitivity as root N. If we do this every second for a year, that gives us a few times 10^-19: a potential for sensitivity to extremely small fractional changes in these constants. So what kind of value will this parameter actually have? We've got that the dark matter density goes as m_phi^2 phi_0^2. Locally, as I assume Tracy has talked about a bit, this is around a GeV per centimetre cubed in the few kiloparsecs around us in the galaxy, and if we assume, as in these models is very much expected, that the dark matter is pretty smooth, then the value around the Earth should be this as well. That means our phi_0 value should be the square root of this density over the mass. So if we evaluate phi_0 / Lambda, putting in some numbers, this is sqrt(rho) / (m_phi Lambda). If we take Lambda to be the Planck scale, just for comparison, and take m_phi to correspond to an oscillation on the scale of minutes, which is the kind of thing we could easily see in a clock comparison, then the number we get is about 10^-14. So
that is very encouraging. For very low mass particles, as we'll talk about more in the next lecture, interactions of more than gravitational strength are very constrained: there are very sensitive tests that will show up the effects of those particles whether they're dark matter or not. But this estimate is telling us that even if these things are significantly more weakly coupled to us through these kinds of operators than gravity is, so that they stand a chance of evading the non-dark-matter bounds, these tests could still have orders of magnitude of sensitivity reach to see them. Comparisons like this are being done between extremely precise clocks of different kinds, and are starting to put constraints on these kinds of models. Now, of course, there are many experimental issues which this zeroth-order picture completely glosses over: you need to be extremely careful that no other external influences come in and mess up the evolution of your superposition, and you also need to be sure you know exactly what initial state you're starting in. If your atom has some additional motion, that leads to shifts which can swamp this; if there are additional magnetic fields, that can screw things up; so it's extremely difficult. As for what Lambda is: in an axion model it would be of order the decay constant, with some constants out front; in other models it will be the scale of new physics of some other form, and you might not call it a decay constant, but generically it's going to be at least the new-physics scale, maybe with some extra suppressions as well. That's for axion-like couplings. This is leaping ahead, but as we'll talk about tomorrow, couplings of the other kind, couplings to fermion masses or whatever, are actually subject to much, much stronger constraints for low-mass particles. Unlike an axion, they will show up in things like equivalence-principle tests, which
we'll talk about, and so they can't actually exist in the spectrum at low masses, even if they're not dark matter, unless their couplings are extremely sub-gravitational. That's the difference between axion-like couplings and others which aren't, for example, shift-symmetric. Yes; in terms of the UV theory, say the feed-through into the electron mass would have the Yukawas in there, and there will be additional parameters which can very easily make these small enough to be okay. So, that's an extremely high-level tour of one of the ways in which extremely precise experiments can be used to look for low-mass dark matter. Another example, which points you towards somewhat different techniques, is the one Tracy gave you a very nice introduction to earlier: the QCD axion. There you have a somewhat different scenario, in that you have basically only one free parameter. Your axion couples to the gluon field strength, and that implies, due to the anomaly, that the axion gets a mass which is of order (being even rougher than the previous lecture) Lambda_QCD^2 over the decay constant f_a. With misalignment production we no longer have two free parameters, only the one, and requiring that we get the correct dark matter abundance, if the initial theta angle was one, picks out an f_a of about 10^11 GeV, which corresponds to an axion mass somewhere around 10^-4 eV. Now, of course, like Tracy was saying, if we allow somewhat different values for the initial angle we can go to higher f_a and smaller mass and be okay, and if you look at production through post-inflationary symmetry breaking you can go to higher masses and things will still be okay. But we're still in the regime where we only have one parameter, the mass or the coupling depending on how you want to put it, and there is a natural target region in that
mass range, which in particular corresponds to frequencies of around 10 GHz. So we're looking at frequencies much, much higher than in the clock experiments, or measurements of fundamental constants generally: those are done over long timescales, like seconds, to get the highest possible precision, and here the signal is oscillating at gigahertz, so in those kinds of experiments the effect would completely average out. You need very different techniques to see this. Now, ideally we'd just use this gluon coupling, since that's the thing that defines the QCD axion (sorry, I should say the QCD axion specifically) and is guaranteed to be there, but looking at nuclei at gigahertz frequencies is somewhat awkward. The natural frequency corresponding to nuclear spins is set by the nuclear magnetic moment, of order e / (2 m_N), and even in the biggest B fields we can produce, of order a tesla, this is about 3 x 10^-8 eV: much, much lower frequencies than this. And nuclear excitations themselves are at much higher energies. So there's a bit of a frequency mismatch, and doing things with nuclei at gigahertz frequencies is awkward. Instead, it makes sense to look at the other couplings the QCD axion might have. We've got the gluon term, writing it out properly with its factors of alpha_s over 4 pi in the usual definition, times G G-tilde. The other thing you naturally get, coming along for the ride, is a coupling to the photon through loops: some coefficient c times (a / f_a) times F F-tilde, where F is the Maxwell field strength tensor. In the low-energy Standard Model this coefficient is naturally around 10^-3, effectively from integrating out the pions: the fact that QCD matter is also charged under EM means you naturally get a value somewhere around there. It can be tuned to be
smaller if you add in additional contributions from high scales, but naturally it comes out around that. Somewhat more model-dependently, you also have couplings to the various Standard Model fermions through pseudoscalar operators. So the most natural and most generic thing at these gigahertz frequencies is to look for the QCD axion through its coupling to photons. What does this coupling look like, and what are its effects? If we expand it out in terms of electric and magnetic fields, F_mu-nu F-tilde^mu-nu is equal to minus 4 E dot B, where these are just the usual E and B fields. Now, an aside: this form is perhaps somewhat worrying, because it looks as though you've got a term that depends on the axion value itself, not just its derivative, and wouldn't that then contribute to the axion mass? The answer is no, because, as for the QCD term, this is a total derivative: F F-tilde, expanded out, is d_mu (A_nu F-tilde^mu-nu) up to constants, so we can integrate by parts in the Lagrangian and transfer the derivative to the axion field; it is still a shift-symmetric coupling. The same is true of the QCD term, but there the topological structure means that doing that integration by parts runs into issues at infinity; for Standard Model electromagnetism there are no such issues, and a constant value of the axion field has no effect. So everything is okay; this doesn't give any extra weirdness. But what does it do experimentally? It means that a low-velocity axion field looks like an effective current density, J_axion, set by g times d a / dt times the magnetic field, where g is c / f_a as defined before. So if we have a big background magnetic field, the time derivative of the axion will correspond
to an effective current density, and this oscillating current can source photons, which we can then try to look for. How might we do that? The most naive thing you might try turns out to be pretty much the best thing to do, and is currently the most sensitive way of looking for QCD axions in the gigahertz mass range. We make a cavity whose length scale is of order the inverse mass of our axion, so for the gigahertz range something of order tens of centimetres, and we put a big background magnetic field inside it. In diagram terms, this coupling is an axion coupling to two photons: one of the photons is taken from the background magnetic field, and then the axion current can drive the creation of photons inside the cavity. So what's the rate of this, and can we hope to see it? The interaction Hamiltonian for our system is the integral over the volume of g times a times B dot E, and we take B to be our big background field B_0. In the presence of an oscillating axion field, the power delivered from the axion to the cavity is set by the time derivative of the forcing: schematically g (da/dt) B_0 dotted into whatever the fields in the cavity are. The power that we lose from the cavity is set by the quality factor; that's what we mean by the quality factor, a high-quality-factor cavity is one with low losses. The power lost is the energy stored in the cavity times the frequency divided by the quality factor, so schematically of order the E field in the cavity squared times the volume times the frequency over Q. Okay, so if we've reached some equilibrium, if we've left it long enough for the axion to ring up the oscillation in the cavity,
then we can equate these two things. Parametrically, the input power was of order the volume times g times the frequency times the axion amplitude times the B field times the E field, so P_in ~ P_loss implies that the E field in our cavity is of order Q times g times the axion amplitude times B_0. After all of this, the signal power we get from our dark matter is set by the combination (g a_0 B_0)^2 times the volume of the cavity times the frequency, which is just the axion mass here, times the quality factor. So we've got the parametrics; what does this actually come out to for the QCD axion numbers we care about? Plugging things in: from before, our g was approximately 10^-3 / f_a for untuned models (10^-3 to 10^-4 depending); our rho_axion, which is parametrically m_a^2 a_0^2, is the local dark matter density of around a GeV per centimetre cubed; and the axion mass is tied to the coupling. Putting that together, the power we get out is about 5 x 10^-21 watts if we have a B field of tesla-ish strength (big, but entirely doable), a cavity quality factor of a million or so (again big but doable), a volume of around a metre cubed, and an f_a of 10^11 GeV-ish, which is around where we're interested. That sounds small, but how big is it really? If we divide by the mass of the axion, which is also the energy of the photons we're getting out, because we only have one frequency in the problem, this comes out to 500 per second or so with this list of parameters. So we have 500 signal photons per second, which is not a crazy number. Now, of course, it means you need to cool your cavity down a lot, such that black-body photons are not coming in at this
rate, and you need to be very careful that your amplifiers are not injecting more noise than this, et cetera. But if you have a setup without extra limitations, which you can almost realise now at the microwave level (almost quantum-limited amplifiers, and cooling such that black-body photons are almost entirely absent), this is doable. And indeed the ADMX experiment does pretty much exactly this: it searches at frequencies m_a / 2 pi of around a gigahertz, corresponding to f_a of around 10^12 GeV, and it is sensitive to powers of around 10^-20 to 10^-22 watts, depending on where exactly you are. So it is able to take out a decent chunk of the QCD axion parameter space. Now, one thing I've skimmed over: you need to scan; you need to change the properties of your cavity so that you're resonant at the right frequency, stepping over all the different configurations. But in this frequency range, getting to QCD axion sensitivity is doable and is being done. If we plot the mass of the axion against the coupling, the QCD axion band is the untuned region: we have the usual m_a proportional to 1 / f_a relationship, so g proportional to 1 / f_a, with some band in which the coupling to photons might plausibly lie for a given f_a. Then, as I think Matt will talk about more in his lectures, you have bounds which don't depend on the axion being dark matter: if the coupling to photons were too big, you'd be ruled out by observations of stars and various other things. At frequencies of a gigahertz or so, ADMX has taken out a chunk of the parameter space in which QCD axions could be dark matter with this photon coupling, so the exploration has actually reached a very interesting regime. Now, of course, the mass we're looking for wasn't set in stone; there is some uncertainty, even
with the misalignment mechanism: there's uncertainty as to what the initial theta value would be, and, like Tracy was saying, there's very definitely uncertainty if you go beyond that. If you're looking at production after inflation, then everything becomes more complicated and you could live over a wider mass range. So at lower masses this would be tuned misalignment, though the tuning is not that severe, and there are various anthropic arguments you might make as to why this tuning shouldn't worry you; and up in the higher-mass regime, post-inflation production may well be able to take you up to higher masses. Okay, so ADMX etc. is extremely interesting, but it would be very nice to explore the rest of the parameter space and see whether we can either rule out or find QCD axion dark matter there. There are a number of experiments and concepts down in the lower mass region, some of which even attempt to look for this coupling which is guaranteed to be there, but those are of a rather different nature and I won't have time to talk about them here. So one thing that I will talk about a bit is how we might try and extend this reach upwards in mass: how might we look for QCD axions at masses above the few-gigahertz range? There we'll try and look at the photon coupling again, because that's the most convenient thing to look for at high frequencies. The parametric power calculations that we did before still hold, and because we have f_a proportional to 1/m_a, if we divide the power by m_a this cancels, and it turns out that for parameters like before (a B-field of some tesla, a volume of a metre cubed, etc.) you get around 500 photons per second, for whatever axion mass you're looking at. So as you go up in mass you're still getting the same number of photons per second out of your hypothetical apparatus, so the scaling is such that you could
certainly hope to see decent signals here if you're seeing signals at ADMX. However, there is a difficulty, and that difficulty is effectively momentum conservation. In the case of ADMX, we only worry about matching the frequency of the axion to the frequency of the cavity mode; the mode just has that frequency because it's a low-lying mode of the cavity and the cavity is the right geometry. But if we have a volume of a metre or so, and a frequency much larger than a gigahertz, then the wavelength of the photons that we'd convert axions into will be much smaller than a metre: lambda << 1 m, and that's an issue. If you look at the dispersion relations, then in a big volume photons basically live on the light cone, omega approximately equal to k. But axions are non-relativistic: their energy is approximately set by their mass, and their velocity is of order 10^-3, so the range of possible axion momenta is rather restricted; they're all clustered near k = 0. If we want to convert to a photon at the same energy, that photon has momentum of order m_a. Drawn out properly in 3D, we'd have our light cone (which is actually a cone now) and our axion mass shell, and we want to convert from one to the other: p_gamma versus p_axion. So we've got a momentum mismatch of order the axion mass, and we need to make it up somehow. The easiest way to do that is to have some structure in your target at a scale lambda of order 1/m_a. The most naive way to do that would be to just say: okay, I can just make tons of little cavities,
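The numbers quoted above (powers around 10^-22 W, a photon rate around 500 per second, and that rate being mass-independent once f_a scales as 1/m_a) can be checked with a short script. This is a rough sketch in natural units, assuming the standard haloscope power scaling P ~ g^2 (rho/m_a) B^2 V C Q and illustrative ADMX-like benchmark values (B = 8 T, V = 0.1 m^3, Q = 10^5, C = 0.5, rho = 0.45 GeV/cm^3); none of these are any experiment's exact parameters.

```python
# Toy check of the haloscope numbers, in natural units (hbar = c = 1,
# powers carried in GeV^2). All benchmark values are illustrative
# assumptions, not any experiment's exact parameters.

GEV2_PER_TESLA = 1.95e-16   # 1 tesla expressed in GeV^2
INVGEV_PER_CM  = 5.07e13    # 1 cm expressed in GeV^-1
WATT_PER_GEV2  = 2.43e14    # 1 GeV^2 of power expressed in watts
JOULE_PER_GEV  = 1.602e-10  # 1 GeV expressed in joules

def signal_power_watts(g_invgev, m_a_gev, B_tesla, V_m3, Q, C=0.5,
                       rho_gev_per_cm3=0.45):
    """Standard haloscope scaling: P ~ g^2 (rho/m_a) B^2 V C Q."""
    B   = B_tesla * GEV2_PER_TESLA             # magnetic field, GeV^2
    V   = V_m3 * 1e6 * INVGEV_PER_CM**3        # volume, GeV^-3
    rho = rho_gev_per_cm3 / INVGEV_PER_CM**3   # local DM density, GeV^4
    return g_invgev**2 * (rho / m_a_gev) * B**2 * V * C * Q * WATT_PER_GEV2

m_a = 4e-15   # axion mass of 4 micro-eV, i.e. a frequency around 1 GHz
g   = 1e-15   # photon coupling in GeV^-1, roughly the band at f_a ~ 1e12 GeV
P    = signal_power_watts(g, m_a, B_tesla=8.0, V_m3=0.1, Q=1e5)
rate = P / (m_a * JOULE_PER_GEV)   # photons/s; each photon carries energy m_a

# For the QCD axion g grows as m_a (both go as 1/f_a), so P grows as m_a
# and the photon *rate* P/m_a comes out the same at any mass:
P10    = signal_power_watts(10 * g, 10 * m_a, B_tesla=8.0, V_m3=0.1, Q=1e5)
rate10 = P10 / (10 * m_a * JOULE_PER_GEV)
print(f"P ~ {P:.1e} W, rate ~ {rate:.0f}/s, rate at 10x mass ~ {rate10:.0f}/s")
```

With these assumed inputs the power lands around 3 x 10^-22 W and the rate around 500 photons per second, in the ballpark quoted in the lecture, and the rate is identical at ten times the mass.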
I can make a whole load of them, each about wavelength size, and run the leads from all of them to the same amplifier; we just take it all out, running a load of wires back and forth. Theoretically that would work, but it would be extremely awkward experimentally: if you're trying to fill a metre with cavities of millimetre size or even smaller, then all the wiring and construction becomes an absolute mess. So what can you do instead? One answer is, instead of using cavities, to use some kind of structured material, what in the optical range are called photonic materials. In their very simplest form these are just periodic structures: say we have a load of discs with alternating refractive indices n1, n2, n1, n2, and so on. Now, in a medium which is effectively periodic, you don't get the usual free-space behaviour, where EM waves are just harmonic functions, E(x) ~ e^{ik.x}, just sines and cosines. In a periodic medium you instead have Bloch waves: you've got your harmonic part, but multiplied by some function which is periodic with the period of your material. Let's look at an example, an effectively one-dimensional medium: one region of space with refractive index n1, then one with refractive index n2. In the first region this is basically just like free space, so the E-field has sine-wave-type behaviour; say it's zero at the interface. The field is continuous there, and the derivative has also got to be continuous if the material isn't magnetically active, so the wave goes through. In the second region it's also part of a sine wave, and it has to get back to zero at the next interface; to match up, it must be a scaled version of the first, where the ratio of the amplitudes is set by n1 to n2. So we have a
periodic function, instead of looking like a pure sine wave as it would in free space. Okay, how does that help us? The problem of momentum mismatch can be rephrased in terms of our interaction Hamiltonian, which was the volume integral of g a E.B. If we have an effectively constant B-field, say in the z-direction, then in the free-space case the E-field points up in one half-period and down in the next, so each period cancels in its contribution to the overall interaction Hamiltonian; the free-space integral is approximately zero because you get cancellation across all the periods. However, in the periodic material, one half-period has a higher amplitude than the other, so integrating across a single period leaves some non-cancelling component. It turns out that what you get goes as 1/n1 - 1/n2, so if we have an order-one refractive index contrast between the two regions, we get a non-cancelling part, and the overall interaction again just scales with volume: we get order-one overlap. We can then plug everything back in; the power again goes as the volume times the average field times B0, and we get back to the kind of 500 photons per second that we were getting from a cavity. This kind of thing is being pursued across multiple different mass ranges. In the regime just above ADMX, at tens to hundreds of gigahertz, the MADMAX experiment is a proposal to do exactly this using a series of dielectric disks: you take a shielded volume, put in a load of disks, put a mirror at one end, and then in the presence of a magnetic field which is
perpendicular to the disks, B0 in that direction, a background axion oscillation will cause photons to be emitted, which you then collect onto a receiver and try to see. So this is one way of trying to get around this limitation of momentum conservation; at higher frequencies there are various materials which can be fabricated. So yes, the deviation from perfect periodicity that you allow will effectively set the quality factor of the system that you're making. And yes, trying to make a narrower frequency response than the dispersion due to the axion velocity is difficult in these cases, and not always helpful, because you're missing out on some of the power. There are circumstances where you want to do it, but that sets a natural limit to how sharp you want to make your frequency response: otherwise the axion will have frequency components outside where your experiment responds, and you'll be missing the power coming from there. So yes, that sets a natural cut-off in many cases. Yes, exactly: for ADMX, the lowest-lying cavity modes have an E-field profile that varies by about half a period across the cavity (mixing up spatial dimensions and E-field dimensions somewhat in the sketch). In that case the overlap, meaning the integral of E.B across the whole volume, is order one times the average E-field dotted into the average B-field. If we take a higher-order mode of that cavity, the E-field oscillates in sign, so its integral against the constant B-field is very much smaller than average E-field dot average B-field; the contributions largely cancel. So that's where that suppression comes from. Okay, so that is just a demonstration of one technique which you can try to use to push the search up to higher frequencies and
probe some of the range which is potentially also allowed by early-universe production mechanisms. Like I said, there's also another story at lower frequencies, which is even more varied, because there you can more easily go after the couplings to nuclei and so on, but that would be somewhat of a separate discussion. So, any questions on the axion detection side of things at this point? Okay, if not, then we'll move on a bit to talking about looking for dark matter scattering. Oh yes, sorry, I realise I didn't label the y-axis; that is unhelpful. Okay, so like I said, I think this will be covered a bit in the BSM lectures, but just to give a very brief preview: the f_a that we're talking about for ADMX is of order 10^12 GeV, so we're looking at couplings to photons down here of around 1/(10^15 GeV). The constraints from astrophysics, from supernovae, horizontal branch stars, things like that, come in at about 1/(10^10 GeV), and they tend to be independent of the mass. The basic point is that they come from processes like this: you have high-energy photons and high-energy electrons inside stars, and they can interact through a diagram like gamma + electron -> electron + axion; this is called Primakoff production. The point is that this axion, because it interacts so weakly, will just escape from the star entirely. It's like neutrinos in the sun: if a neutrino is produced in the hot solar core, it just makes its way out; with very high probability it won't interact with anything else in the star, and that energy goes off out into the universe. In the same way, if we produce an axion through this kind of interaction, it will just make its way through the
rest of the star, with very high probability not interacting with anything, and that energy is lost to the rest of the universe. The fact that you have this mechanism for energy to be lost from the core means a change in the stellar structure: you can solve for what the pressure and temperature etc. throughout the star should be, and the fact that it's losing energy from the core means it should look a bit different from what our models predict. Because our models match very well to the structure of the sun, the structure of other stars, and, to a lesser extent, our understanding of supernovae, we can put bounds on how much energy you can lose through axions; basically it's got to be less than the energy lost to neutrinos, because otherwise everything screws up, so the comparison is usually to neutrino losses. Coming back to the plot: as long as the mass of the axion is much, much less than the temperature of the core, the particles in this process come in with energy approximately set by the temperature, so the axion mass is unimportant. The core temperature of the massive stars that provide the best constraints is of order 10 keV, whereas down here we're looking at 10^-4 to 10^-5 eV or so; all of these masses are almost negligible compared to the temperature inside the core of a star. So these constraints don't care about the mass at all: they hold down to small masses and way up to much higher masses, which is why they take the form of a flat line here. This sets the point below which you want to look if you're doing a dark matter experiment to have any chance of seeing something, because these constraints apply whether or not the axion is the dark matter. But for the QCD axion it also means that you're safe: the astrophysical bounds only overlap with the actual QCD line somewhere around a mass of an eV or so, so a QCD axion of mass significantly less than an eV is not significantly constrained
through the photon coupling. Generically it'll also have couplings to nuclei, which means that at a mass less than around 50 meV it's safe, but at higher masses supernovae would produce it too copiously. So this part of parameter space is safe from production bounds alone, independent of the dark matter abundance; you have to look for the dark matter abundance itself, in the lab or in other astrophysical systems, to stand a chance of seeing it. So: ADMX sits around a gigahertz. There are other cavity experiments, basically ADMX High Frequency, the same kind of thing, which go up higher; sorry, I'll make a bigger graph here, with frequency, which I'll call nu_a. ADMX is somewhere around a gigahertz; ADMX High Frequency, which these days I think is called HAYSTAC for political reasons or something, is looking around the few-to-ten gigahertz regime. Then MADMAX is trying to look from around tens of gigahertz to around hundreds of gigahertz; it is not a built experiment yet, so we'll see where it actually ends up, but somewhere in the regime of a couple of orders of magnitude above ADMX. Now there are also experiments, some of which I'm involved in, using the photonic material versions of this: instead of positioning the discs by hand, you build some material that naturally has some periodicity, and those are easiest to do way up at higher frequencies, at scales corresponding to somewhat less than an eV, just below the point at which the astrophysical bounds start to come in. Now you might say: okay, there's a bit of a gap here, around terahertz frequencies; what's going on there? It's certainly motivated to look there; it's just above the misalignment production mechanism range, but it's certainly something you can easily imagine getting from various
post-inflation production mechanisms. The issue is that detectors are a real problem there. At very high frequencies you use photon detectors of the kind that are in your camera, of course much fancier versions, but we know how to detect infrared or optical photons. At microwave frequencies there are well-developed technologies from things like radio astronomy, and also much fancier devices from quantum information, things which almost look like qubits, sensitive even to single photons around the gigahertz range. So we have detectors which are basically single-photon-capable in the microwave range and in the optical/infrared range, but in between there's a real gap, and basically this is set by the two technologies having different problems. Coming down from above, you have the superconductors: you start going above the superconducting gap of materials. At microwave frequencies you can have very low-loss devices involving superconductors, but when you try to extend that into the terahertz range you start breaking Cooper pairs, so superconductors become less good to work with; you've got all kinds of lossy stuff going on, and all of the technology around there is generally based on superconductors, so that becomes a problem. Coming up from below, you have the problem of energy threshold: the usual way a photon detector up here works is something like exciting an electron-hole pair in a semiconductor, or looking at the temperature rise in a small superconductor. All of those things require enough energy in your single photon to be able to see it, and that basically runs out once you go below somewhere around 50 meV or so; the deposits become so small that all kinds of environmental noise gives you the same kind of energy. Filling this gap is a research program on multiple levels, both because it would be
extremely interesting for physics, through things like dark matter experiments, and also because technologically lots of people are interested in this range: people doing various imaging-type applications, like the scanners you stand in at the airport, and things trying to look at various biological processes, I think. So there is strong motivation to try and build new detectors in this range, and there are a number of ideas, a lot of them based on either up-converting the signal to a range where you can see it, or down-converting it to a range where you can use the existing technologies. Eventually, unless we find something in either the lower or the higher mass range, we will want to explore this gap in the middle; that's a technological challenge, but something that people are actively working on, so we will see. Okay, I probably don't have time to start on a whole new topic; are there any other questions on anything so far? Oh yes, the Q factor is actually quite easy there, because given a configuration you can always make the Q higher. Say we take our stack of bare material, just like this: the loss goes as one over the number of periods, so the quality factor goes as the number of periods, and a 100-period stack gives you a Q of order 100. If you want to enhance the quality factor, all you do is stick the stack in a sort of cavity, between two slightly transmissive mirrors, so it's very easy to take one of these things and raise the Q. Why is this not something you'd always want to do? Higher Q means that, if we draw it out with axion mass on one axis and signal power on the other, then with say 100 periods we'd have a
fractional mass range of order 1/100 in which we are sensitive, and the power we get there is boosted by a factor of 100 over the usual P0. If we go to a Q of 10^5 instead, by putting it inside a cavity, then we boost the power by 10^5, but we also restrict the fractional range over which it works to 10^-5. To cover a given mass range we'd need of order 100 steps in the first configuration and of order 10^5 steps in the second, so the overall number of signal photons, if we don't know the dark matter mass and have to step through the entire range, is the same in both cases. Now, it can still be beneficial to use high Q. The reason things like ADMX do it is that all of your signal power then comes in a single configuration, whereas noise photons you see in all of the configurations, so by going to high Q you're enhancing the signal-to-noise in the configuration where you're actually tuned correctly. That's not necessarily such an issue at the higher mass range, because you can make extremely good single-photon detectors with extremely low noise, and of course black-body radiation isn't so much of a problem there; at low frequencies, though, noise is still enough of an issue that high Q is a really good idea. So there's a trade-off, and whether it makes sense to go to high or low Q depends on what your noise issues are. Yes, pretty much: remember the power out goes as 1/Q, in the sense that it's the energy stored in the system times the frequency divided by Q. The energy stored goes as E^2 times V, and V is proportional to the number of layers N, while the power only comes out through the ends, so it's proportional to the area of the end layer; hence the power out is of order 1/N times the frequency times the energy in the whole thing.
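The scan trade-off above (total signal photons independent of Q, once you have to step over the whole band) can be made concrete with a toy calculation. The normalisation P0_rate, the total time, and the fractional scan range below are arbitrary illustrative choices, not experimental values.

```python
# Toy version of the Q trade-off: the on-resonance signal rate is boosted
# by Q, but each configuration only covers a fractional bandwidth ~1/Q,
# so covering a fixed band in a fixed total time collects the same number
# of signal photons whatever Q is. P0_rate and frac_range are arbitrary.

def photons_from_scan(Q, total_time_s, P0_rate=1.0, frac_range=1.0):
    n_steps = Q * frac_range             # configurations needed to cover band
    time_per_step = total_time_s / n_steps
    on_resonance_rate = Q * P0_rate      # signal rate boosted by Q when tuned
    # the axion is only in-band during the single correctly tuned step:
    return on_resonance_rate * time_per_step

low_Q  = photons_from_scan(Q=100,  total_time_s=1e6)
high_Q = photons_from_scan(Q=1e5, total_time_s=1e6)
print(low_Q, high_Q)  # identical: total signal photons don't depend on Q
```

The benefit of high Q shows up only once noise is included: the signal is concentrated in one configuration while noise photons appear in every configuration, which is the signal-to-noise argument made above.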
Does that make sense? So yeah, that's effectively what's going on there. And if you have say some kind of half-silvered mirror at the end, then it's set by whatever the reflectivity of that mirror is; it's very easy to alter the Q just by effectively making a crappy Fabry-Perot cavity or something like that. Which is what you'd do, for example, if you saw a tentative signal: you could try and zoom in on the mass range in which you saw it by making a reflective cavity and narrowing your response down, all the way to 10^6 or whatever is set by the axion line width, and then you'd be able to tune and ask: do we see the signal power going up as we tune more and more, behaving as it should? That's a handle which is nice to have. In ADMX's case, of course, they do a similar kind of thing: if they see a tentative signal, they tune back to it and see if it persists and behaves as you'd expect. Yeah, okay. They do the tuning in a way that's extremely simple: they have a copper rod on a sort of rotating mount; they rotate this and the copper rod moves position, so they're sticking a little thing into the cavity and literally changing the gross shape of the cavity in little steps. It's the simplest thing you can do, and it works. Okay, if not, then, well, thank you very much. Sure, ADMX takes a few seconds, apparently. Yes, so thank you very much, and see you tomorrow.