Hello, everyone. Thanks, Joel, for the introduction. I'm Javier Redondo and I have to say that I'm not at my best, but it's really delightful to be here on air, to talk and to present a little bit this issue of axion dark matter and a possible experiment that will perhaps, in the future, be able to find axions and find dark matter. So let me just start. The title of my talk is MADMAX, a quest for axions, a quest for dark matter. MADMAX is the name of the experiment that we are setting up in Munich, but now it will have collaborators from other institutions, not only the Max Planck Institute for Physics in Munich. So instead of what was originally just called the Munich Axion Dark Matter eXperiment, hence MADMAX, now it would be more like the MAgnetized Axion Dark Matter eXperiment, because it's no longer only Munich. On the screen you see that this is an experiment that definitely has a lot to do with mirrors, but that will only come at the very end of the talk. So let me just start with a little introduction on axions and axion dark matter, and a scenario of axion dark matter that I am very interested in. You might know that in the standard model there are two sources of CP violation. One is the complex phase in the CKM matrix, and the other is related to the theta angle of QCD. There is an angle that defines which specific vacuum, among the infinitely many vacua that one could choose for quantum chromodynamics. This theta angle is the angle that controls this CP violation, but it is also degenerate with another phase, let's say the overall phase of the Yukawa matrix of the quarks, the Yukawa interactions with the fermions. Probably we have not heard very much about this second source of CP violation, and it is because when one tries to go and measure CP violation in purely hadronic observables, like electric dipole moments of different hadrons or nuclei, yeah, strongly interacting beasts.
One finds that actually there has never been any CP violation observed in these systems. The strong interactions behave just as if they didn't have any kind of CP violation at all. So this angle is actually measured, or rather bounded, by the most exquisite experiments to be smaller than 10 to the minus 10. And about the smallness of this value: if it were of the order of 10 to the minus 10, it wouldn't even be the smallest parameter in the standard model, because the Yukawas of the neutrinos, or at least of the lightest neutrinos, are for sure smaller than this 10 to the minus 10. But we have not actually measured this thing, and so it could be much smaller than 10 to the minus 10. And for us, the smallness of the Yukawas of the neutrinos is something that is certainly very unpleasant. That's why we invented the seesaw mechanism, right, to have Yukawas of order one, with the effective Yukawas much smaller just because there's a suppression by a high energy scale. So this is what we have in nature, a value of theta very close to zero. In principle there's no problem with having this. But actually, Roberto Peccei and Helen Quinn, at the end of the seventies, in 1977 or so, realized that this theta equals zero point is something very special, because if one computes the energy density of the QCD vacuum as a function of theta, one realizes that the point where the energy is minimum corresponds precisely to theta equals zero. So then they had this idea: what happens if theta is in reality not a constant of nature of the standard model, but actually a dynamical variable, a field that can depend on time and position? They immediately realized that this would solve the strong CP problem, because the theta field will, with time, go to the minimum energy position, which is theta equals zero.
So that's the idea of the axion. Of course, whenever you have a dynamical field with a potential and a minimum, excitations around the minimum of the potential are nothing but particles, and these particles are the axions. And then the idea of axion dark matter is super intimately related to this mechanism. Essentially, I've already told you everything. The idea is that whatever the initial conditions of the axion field in the big bang, at the beginning of time, regardless of the initial conditions that the axion takes at these initial times, as time progresses the axion field will go to the bottom of the potential, overshoot, come back, and then perform oscillations. And because the universe is expanding, these oscillations will be damped, so the amplitude decreases. With this we can get a very small value of theta. However, the prediction is that since the lifetime of the universe is finite, there will always be some residual oscillations of the axion field. In these oscillations there is some energy, and this energy behaves essentially like dark matter. So there will always be some dark matter. In this slide you have this picture of the dark matter, and also something that I think is very, very important: this theory of axions is really economical, in the sense that in principle one only needs one parameter. Because theta is an angle, if you want to write a kinetic term for theta, you definitely have to include one energy scale, because a kinetic term involves fields with energy dimension equal to one. This field with energy dimension one would be what we call the axion field, and this fA is a new energy scale. This picture with the potential we know more or less very well, but we know this potential as a function of theta.
What we want is the potential as a function of a, right, because it depends on this scale fA. In particular, we can compute its second derivative at the minimum. So the axion will have a mass, of course, and the mass of the axion is going to be inversely proportional to this fA, okay? Very good. So one can study relatively easily what the dark matter yield is as a function of the axion mass, which is what you see in this figure. And this is relatively surprising. First of all, because the axion has something to do with QCD: essentially it has the same quantum numbers as the theta angle, which are the same quantum numbers as the eta or eta prime, if you want. So the axion mixes with the eta prime, and it gets interactions with all the standard model particles, just like the eta prime would have. In particular, it gets an interaction with two photons that allows the axion to decay. But also, through the mixing with the eta prime, the axion becomes mixed with the pions, and so the axion can interact with pions. In the early universe, you can produce axions just from pions. So the first line that I want to highlight here is this thermal abundance of axions as a function of the mass. It increases, just like the Lee-Weinberg curve for neutrinos, and at some point it decreases, because the axion decay starts to be too important. So with the thermal abundance we never make it to the relic density today, okay, which is here expressed in funny physical units, like kilo-electron volts per cubic centimeter. However, if you go to much smaller masses, you see that there are different lines that go up here. These are the lines due to a non-thermal production of axion dark matter, which is exactly what I was telling you a bit before.
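The inverse relation between the axion mass and the scale fA can be put in numbers with a minimal sketch. This is not from the talk: the 5.7 µeV normalization is the commonly quoted chiral perturbation theory value, and the function name is mine.

```python
# Hedged sketch of the QCD axion mass-decay constant relation:
# m_a * f_a is fixed by QCD (roughly m_pi * f_pi up to quark-mass factors),
# often quoted as m_a ~ 5.7 µeV x (1e12 GeV / f_a). Numbers are approximate.

def axion_mass_eV(f_a_GeV):
    """Approximate QCD axion mass in eV for a decay constant f_a in GeV."""
    return 5.7e-6 * (1e12 / f_a_GeV)

# The ~100 µeV region discussed later corresponds to f_a of a few times 1e10 GeV
print(axion_mass_eV(5.7e10))  # ~1e-4 eV, i.e. 100 µeV
```

Plugging in fA around 10^12 GeV instead gives the few-µeV masses targeted by cavity experiments, which is the inverse scaling the figure shows.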
The idea is that at these very small masses, the axions are so weakly interacting that actually the only thing that happens to them, or to the axion field, is that it evolves in its potential. There are no important interactions that will change the axion field; there will only be a relaxation towards the minimum of its potential. And here you can more or less understand what happens. The most important thing here is that the final yield, as you can see, decreases with increasing mass. It is very easy to understand, and it's because, as I told you, the dynamics involve this new energy scale, okay? If this energy scale is very large, this potential will become extremely flat, right? So the dynamical time scales for the axion field to go to the minimum will be much longer. It will take much longer for the axion field to reach the minimum, and this means that the energy density that it has at the beginning will last for a much longer time, so the residual oscillations today will be larger. That's the reason why, if you go to higher masses, which means smaller values of fA, you get a smaller dark matter density. But you have something more here: you have different lines, not only the slope is important. And this helps me to highlight something which is very uncomfortable, but it is what it is. In this realignment mechanism, it is essential that the axion field is not disturbed by its interactions with the rest of the world. And this means that the initial conditions determine the final axion yield. This is not like WIMPs, where once everything thermalized there's no dependence on the initial conditions. Here, the initial conditions are important until the end, okay?
And there are two classes of very important scenarios, which depend essentially on the type of initial conditions. In the first, one has random initial conditions in different parts of the universe, which are causally disconnected, okay? This is what happens if the axion just appears as an angle after a phase transition. For instance, it will lead to something like this: here you have a two-dimensional slice of the universe after a phase transition, in which the axion field takes values; I've plotted it as a function of theta. And you see that there are different patches of the universe where the axion field is, for instance, already zero, which is black, but in some other places around pi, which is white, and of course some intermediate values. If nothing happens after this phase transition, these regions in which the axion field is more or less homogeneous are of the order of the size of the horizon at that time, because information cannot propagate over longer distances. But of course, as the size of the horizon expands, as the universe expands and light propagates, these regions will tend to become larger and larger. These regions grow at the speed of light until the axions become massive, until the axion mass becomes effective, and then they essentially freeze. So the typical dark matter density in this scenario is the result of these stochastic random initial conditions, and it will have a coherence length which is related to the size of the horizon when the axion mass becomes important. And this is a very small distance compared with the universe today. So we will have very strongly inhomogeneous dark matter at very small scales. Okay. However, there's another possibility, which is that inflation takes place after this moment.
So the axion field takes its random initial conditions after a phase transition, for instance, but then inflation happens, and one of these patches in which the universe was more or less homogeneous gets blown up to cover our entire universe. In this case, we have no prediction for what the dark matter of the universe will be, because our universe could have been any of those small patches: we could have started with a very small amplitude or with a very large amplitude, and all these options would look the same today. So we lose the predictability. Okay. Essentially, in the first scenario one can just do the numbers, and one finds, with a relatively large uncertainty because we cannot do these numerical simulations very well at the moment, that the full amount of dark matter is obtained if the axion mass is something like 100 micro-electron volts, okay, 10 to the minus 4 electron volts. If the axion mass is larger, then we will not have enough dark matter, and if the axion mass is smaller, we will have too much, so that is excluded. And there are other arguments to exclude axions with masses above something like 20 times 10 to the minus 3 electron volts; these are astrophysical bounds. This is the scenario that I'm mostly interested in, because we have a prediction. We have the prediction that dark matter should be here, at 100 micro-electron volts. On the other hand, in this second scenario that I told you about, in which inflation happens afterwards and selects one value of the initial conditions, we can have the full amount of dark matter for any mass value below essentially 1 milli-electron volt. Okay. The only thing that happens is that for very small masses, where we would typically have a lot of dark matter, you have to tune the initial condition to zero, and for the largest masses you have to tune the initial condition very close to pi.
But in principle, you can have dark matter, you see, for a very wide range of axion masses. Very well. So these were axions and axion dark matter in a nutshell. Let us try to see how we can detect these residual oscillations of the axion field. Okay. The idea is very simple. Now that we have the dark matter as classical oscillations of a new field that we don't know very much about, these oscillations have to account for the 0.3 GeV per cubic centimeter of local dark matter density that we think we are immersed in. And the energy density in an oscillating, non-relativistic field is essentially the energy density of a harmonic oscillator: one half of the time derivative squared, plus the potential, which is one half of the mass squared multiplied by the field squared. And we have harmonic oscillations around the minimum. So let me just say that the axion is oscillating in time with a frequency which is given by its mass, so that actually these two terms are equivalent: the energy is partitioned between kinetic and potential. And essentially, if you rewrite the axion field as theta times fA, then you find this mA squared times fA squared, and this is nothing but the second derivative of the QCD potential at zero. This is a quantity that we can compute; it is model independent, it comes from QCD alone, even though it involves the two parameters of the axion field. So, knowing the energy density around the earth, we can obtain the amplitude of the theta oscillations of the axion field, and this turns out to be of the order of 10 to the minus 19. This is a very small number, and certainly it's much smaller than the upper limit that we have from CP violation observables, like the electric dipole moment of the neutron, which is something like 10 to the minus 10.
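The quoted 10^-19 amplitude can be checked with a back-of-the-envelope calculation, assuming a local dark matter density of 0.3 GeV/cm³ and taking mA·fA to be the square root of the QCD topological susceptibility, roughly (75.5 MeV)²; that specific number is my assumption, not stated in the talk.

```python
import math

# rho_DM = (1/2) * m_a^2 * f_a^2 * theta0^2, solved for theta0.
# m_a^2 * f_a^2 is fixed by QCD (second derivative of the potential at zero).

hbar_c_GeV_cm = 1.9733e-14          # GeV*cm, to convert cm^-3 into GeV^3
rho_DM = 0.3 * hbar_c_GeV_cm**3     # 0.3 GeV/cm^3 expressed in GeV^4 (natural units)
m_a_f_a = 0.0755**2                 # GeV^2, assumed ~ (75.5 MeV)^2 from QCD

theta0 = math.sqrt(2 * rho_DM) / m_a_f_a
print(f"{theta0:.1e}")  # a few times 1e-19, matching the order quoted in the talk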
Even with 10 to the minus 19, we can get enough energy to account for all the dark matter around us. Very interesting. So how can we try to measure the oscillations of the axion field at the level of 10 to the minus 19, nine orders of magnitude smaller than the sensitivity to detect nuclear electric dipole moments? And the answer is: we can do this because any effect that theta produces, like an electric dipole moment, is now going to oscillate in time, and it's going to oscillate with a very nice coherence; the axion field is very coherent. So we will be able to use some kind of resonant effect. In this slide, you have some ideas that people have had to search for axion dark matter. The most important is using resonant cavities, but we also have mirrors, LC circuits, spin precession, atomic transitions, and optical techniques. I'm going to show you in the next slide, more or less. Since all these experiments use some resonant effect, they are all very sensitive to the exact axion mass, which is the frequency with which the axion field oscillates. They are somehow very good in one mass range, but very bad in the others, so they are scattered in parameter space: here are the oscillating electric dipole moments and NMR techniques, here are LC circuits, and here are the resonant cavities; around 10 to the minus 5 electron volts is their best spot. There are also non-dark-matter experiments that search for axions in different environments. And as you see, this is the region I want to cover, and it will only be covered a little bit by these cavity experiments. So MADMAX is the experiment, which I'll tell you a little bit about now, that will try to cover this parameter space. The idea of detecting axion dark matter with these cavities and with MADMAX is the same.
It relies on the fact that the axion couples to two photons, as I have already told you; this is at least because of the mixing of the axion with the eta prime and, through the eta prime, with the other neutral mesons. If you want, you can write this term as an interaction between the axion and two photons, which is E dot B, okay, the axion always divided by fA. And then you have this radiative correction, because this interaction comes from a triangle diagram, and then a model-dependent coefficient of order one. This, of course, is the axion field, and we know that this guy is now oscillating. So the idea is very simple. If the axion field is oscillating, it acts like a source. If you put in a very strong magnetic field, which will also be part of the source, then this interaction term in the Lagrangian becomes a source for electric fields. Okay. And you can essentially read off from this formula the size of the electric field that will be sourced by this oscillating source term, which is essentially the value of this current. Here I've used that theta of t can be written as theta zero times the cosine of the mass multiplied by time, because the mass is the frequency at which the axion field oscillates. Very good. So let us continue. Let me just tell you about the resonant cavity experiments, which are the only ones taking data at this moment, and they have been the most important experiments so far. The idea is that once you put in a magnetic field and the axion field is oscillating, this generates an electric field, and this oscillating electric field can in turn be amplified inside a resonant cavity. Okay.
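The size of that axion-induced electric field can be sketched numerically. The SI formula E0 = (alpha/2pi) · C_agamma · theta0 · B · c and the inputs theta0 ≈ 4×10^-19, C_agamma = 1 are my assumptions, chosen to be consistent with the numbers in this talk rather than quoted from it.

```python
import math

# Axion-induced electric field in an external magnetic field (SI units),
# assuming C_agamma = 1 and a theta oscillation amplitude ~4e-19
# from the local dark matter density.

alpha = 1 / 137.036    # fine-structure constant
c = 2.998e8            # speed of light, m/s
theta0 = 3.8e-19       # dimensionless theta oscillation amplitude (assumed)
B = 10.0               # external magnetic field, Tesla
C_agamma = 1.0         # model-dependent order-one coefficient (assumed)

E0 = (alpha / (2 * math.pi)) * C_agamma * theta0 * B * c
print(f"{E0:.1e} V/m")  # ~1.3e-12 V/m at 10 Tesla
```

This tiny, spatially uniform oscillating field is the thing every haloscope concept tries to pick up, whether by resonating it in a cavity or radiating it off a surface.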
In such a way that the output power, instead of being only proportional to the electric field squared, is now proportional to the electric field squared times the quality factor of the cavity, which can be as large as 10 to the 5. And then, because this is a resonant cavity, you have other factors, geometric and coupling factors, of order one. But the important thing in this experiment is that the power one can extract is essentially proportional to the volume of the cavity multiplied by the frequency given by the axion mass. This is the output power when the cavity is resonant at the frequency given by the axion mass. Okay. And unfortunately, we don't know exactly what this mass is. So what these experiments do is tune the cavity, measure for a while, and if they don't find anything, they retune the cavity to a different resonant frequency, measure again, and continue. Essentially, they measure one frequency band at a time. Now, you can see here the reach in this model-dependent parameter, as a function of mass, that different experiments have achieved. Essentially only at the smallest frequencies has ADMX, the Axion Dark Matter eXperiment in Seattle, been able to reach values of order one that correspond to popular models, like, as we said, DFSZ. In color you have the prospects, which are very optimistic. And you see that all the experiments that try to search for axion masses much larger than a few times 10 to the minus 6 electron volts actually have much worse sensitivity. The explanation you find here: the idea is that, first of all, the noise that you have to fight the signal against is proportional to the mass squared, and the signal is proportional to the volume. But in these cavities, the volume of the cavity is set by the wavelength at the resonant frequency.
So at higher frequencies, you have a smaller wavelength, and then your volume decreases, so that the output power ends up being inversely proportional to the mass squared. That's why the sensitivity of these experiments drops so fast, as you see. There are good ideas to improve all these things, but these values are still quite far from 10 to the minus 4. That's why we think that instead of improving this kind of experiment, we could try to devise a different experimental concept to fight against the fact that the output power decreases like one over m squared as we increase the mass. One concept in this direction was put forward by some colleagues and me, what we call the dish antenna experiment. The idea was very simple. Just imagine that, in a magnetic field, the axion field is exciting some electric field coherently. Then you put in a reflecting metallic dish, and this electric field will induce currents in the dish, and the dish will radiate. Now, if it is spherical and the wavelength of the light is much smaller than the diameter of the dish, there will not be very large diffraction, and all the power will be concentrated in one point. I will tell you a little more about this production mechanism. The idea is that in this case the power is proportional to the electric field squared that I told you about before; this is a fixed electric field, okay, it depends only on the dark matter density, not on the axion mass. And it's proportional to the area. Now, if you have an area which is much, much larger than the wavelength squared, you can compensate with a large area for the quality factor that these cavity experiments have.
So that was the idea, but it turned out that it was not good enough. So in the MADMAX experiment, we have improved this idea a little bit. In order to understand this, we have to study a little bit more the emission that comes from this mirror. But instead of using a mirror, I'm going to use a general dielectric, with a general dielectric constant; it can also be imaginary, so I can use it to describe a metal. The idea is that this electric field is inversely proportional to the dielectric constant of the medium. So if we have two different media, the electric field induced by axions will be suppressed inside the medium with the larger dielectric constant; whether it's real or imaginary is not important. Such a mismatch of electric fields is actually not a solution of Maxwell's equations, which tell you that the electric field parallel to a surface has to be continuous. The way that nature fixes this, taking into account that this field is oscillating, is the following: the only thing that we can add to this pre-solution of Maxwell's equations is plane waves that go out of the surface, plane waves of electric and magnetic field, electromagnetic waves, photons, such that the total electric field is completely continuous at the interface. And this predicts that there will be some electromagnetic radiation, which is the electromagnetic radiation that we wanted to collect in this dish antenna experiment. But now we know that we can have something which is much more general than just a dish antenna. The idea of the MADMAX experiment was to use not only one surface, but many of these surfaces.
Because now we don't have to use a metallic disc here; we can use dielectrics that are somehow transparent to electromagnetic radiation, such that the axions will produce waves, but these waves will go through the next layers and so on. And if we place the dielectrics at the right distances, we can even get a coherent enhancement of the signal, because the optical path is the same. What we do to characterize the power emission of these discs is to quote the power per unit area normalized to the power that would be emitted from one metallic disc, okay, which is 2 times 10 to the minus 27 watts per square meter in a magnetic field of 10 Tesla. So here you have an example of what happens if I put 10, 40 or 80 discs with an index of refraction of three. Ah, sorry, not yet, let me just go back. The important thing, the working unit of this multi-dielectric haloscope, is this one dielectric slab. Okay, you have to imagine this in two more dimensions; it will be something like, for instance, a disc. And the thickness of this dielectric defines a frequency: one half-wavelength inside the dielectric defines a frequency. And this is the key frequency that tells you many things. For instance, the axion-induced emission: what we call the boost factor is just the ratio of the power of this thing compared with the metallic disc. You see here that if the frequency is much smaller than the characteristic frequency that corresponds to half a wavelength, there's no emission; essentially this thing is transparent and the axion field doesn't emit anything. But very soon it goes up, and it almost reaches one, which would be the metallic disc, the maximum emission per unit surface.
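The 2×10^-27 W/m² mirror figure quoted above can be recovered from the cycle-averaged Poynting flux of the emitted wave; the field amplitude E0 used here is an assumed input (the axion-induced field at 10 Tesla for an order-one photon coupling), not a number quoted directly in the talk.

```python
# Emitted power per unit area from a perfect mirror:
# P/A = (1/2) * eps0 * c * E0^2  (time-averaged plane-wave flux).

eps0 = 8.854e-12   # vacuum permittivity, F/m
c = 2.998e8        # speed of light, m/s
E0 = 1.32e-12      # V/m, assumed axion-induced field at B = 10 T

P_per_area = 0.5 * eps0 * c * E0**2
print(f"{P_per_area:.1e} W/m^2")  # ~2e-27 W/m^2, as quoted for the mirror
```

Since the field scales linearly with B, the emitted power scales with B², which is why a 10 Tesla magnet is so central to the proposal.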
Here in this intermediate region it's actually relatively large, so 0.8 and above for different indices of refraction. And here you have the transmission coefficient. Once this wave has been emitted, it has to go through the other slabs of dielectric. So we want this emission to be maximum, and we want the transmission also to be maximum, or at least large enough; we don't want to hit a zero. And you see that the transmissivities, at zero, one, two, three half-wavelengths, are at their maximum, because here I didn't put in any absorption. But there are some dips of transmissivity, essentially when you have an odd number of quarter-wavelengths inside the dielectric. In any case, you see it is actually quite large for these values of the dielectric constant. So then you put many of these discs together and you get a boost factor curve. Of course, you choose the distances between the discs, and then you can just look at the boost factor, which is this beta parameter here, essentially the power compared with the emission of one layer of metal. And you can see, more or less, that for 10, 40 and 80 layers, around the central frequency that you have designed for, you get an enhancement of order 10, 40 or 80, essentially the number of discs times the boost factor of one dielectric layer. And the power will be proportional to this number squared. So with 40 or 80 discs, you can get boost factors of the order of 1000, which is fantastic. So in this experiment, the boost will come from the large area, and it will also come from the fact that you can put in many layers at the same time; you have two sources of boost. There's something interesting also: when these layers are not fully transparent, you create something like small resonances.
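The half-wavelength condition that sets a slab's characteristic frequency can be sketched in a few lines; the function names and the specific n = 5, ~1.2 mm numbers are illustrative choices of mine, not MADMAX specifications.

```python
# nu = c / (2 * n * d): the frequency at which the slab thickness d equals
# half a wavelength inside a dielectric of refractive index n.
# A 100 µeV axion oscillates at nu = E/h, i.e. roughly 24 GHz.

c = 2.998e8        # speed of light, m/s
h_eV = 4.1357e-15  # Planck constant, eV*s

def axion_frequency_GHz(m_eV):
    """Oscillation frequency of an axion of mass m_eV (in eV)."""
    return m_eV / h_eV / 1e9

def slab_frequency_GHz(d_mm, n):
    """Half-wavelength frequency of a slab of thickness d_mm and index n."""
    return c / (2 * n * d_mm * 1e-3) / 1e9

print(axion_frequency_GHz(1e-4))      # ~24 GHz for a 100 µeV axion
print(slab_frequency_GHz(1.24, 5.0))  # ~24 GHz for a ~1.2 mm disc with n = 5
```

So discs of millimeter-scale thickness naturally sit at the frequencies relevant for the 100 µeV prediction, and retuning amounts to adjusting inter-disc spacings rather than machining new discs.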
For instance, if I put here a mirror and only one dielectric disc, and I've set the distances such that this is a half-wavelength and this is a quarter-wavelength, which corresponds to the maximum reflectivity, you get, for instance with an index of refraction of 1.3, for which the reflectivity is not very good, a boost factor that peaks at values of order five or six, as you see. And this is to be compared with the emission of one disc, okay, or two discs if you want. So one can get a mild boost, a factor of two or three, if one exploits these resonances inside the dielectrics. Well, the nice thing about these dielectric haloscopes is that, in principle, we can choose the thickness and the index of refraction of these dielectrics, and then we can play with the distances between them to tune our experiment, essentially, right? So for instance, we can get boost factors which are very peaked in frequency, okay, boosting one frequency very much; if the axion is there, it will give us a very large boost factor. But we can also adjust the distances in such a way that the boost factor gives us a smaller boost over a much broader band, as you can see, for instance, here. So we can tune our scanning of the experiment: we can scan all these frequencies at the same time, then move on, as you can see here, to all these frequencies at the same time, then change again, and so on. And if at some point we detect a signal and we want to focus on that signal, we can just tune our experiment to have a much larger boost factor, not of the order of 100 but of the order of 1000, okay, and with that, we could measure much better, with much larger precision.
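Putting the pieces together gives a rough feel for the detectable signal power in such a setup. This is an illustrative estimate only: the power boost value is a placeholder of my choosing, and the mirror emission figure is the 10 Tesla number quoted earlier in the talk.

```python
# Illustrative signal power of a dielectric haloscope:
# P ~ beta^2 * (P/A)_mirror * A, where beta^2 is the power boost factor.

P_mirror_per_area = 2.2e-27  # W/m^2, mirror emission at 10 T (quoted in the talk)
area = 1.0                   # m^2, the disc area targeted by the experiment
beta2 = 5e4                  # power boost, placeholder of order (number of discs)^2

P_signal = beta2 * P_mirror_per_area * area
print(f"{P_signal:.1e} W")  # ~1e-22 W
```

Powers at this level are tiny but within the domain of cryogenic microwave receivers, which is what makes trading a cavity's quality factor for area times boost a viable strategy.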
By the way, the theory of how we deal with the emission of these layers and how we tune these devices was put out recently in a paper that was accepted in JCAP, and we also put out a letter that has been accepted in PRL, so it will appear soon; I'll write the references for you in the slides. So this is the potential reach that we think we can have. In this very busy plot, you find the model-dependent axion-to-photon coupling of order one as a function of the mass, or of fA; these are two scales giving you the same information, one is just 2 pi divided by the other. So you see here what we think we can do with a dielectric haloscope of 80 discs measuring for five years, which is the typical lifespan of these experiments; in principle one could run for longer, but more than five years probably calls for improvements and R&D. So this will consist of 80 dielectric discs with an index of refraction of five. This material exists, although it's very complicated; we still don't know how to make the discs of the order of one square meter, we are working on that. And a magnetic field of 10 Tesla, while at the moment there are essentially no magnets of 10 Tesla that span one square meter. So all these things will have to be worked out, but I think they are doable. One cannot claim that this is impossible; it's just never been done because there was no need. With that, in five years we think we can cover all this region at the level of the most important axion models, which have couplings of order one, okay, reaching not only 100 micro-electron volts but even a little bit above. Very good. So let me summarize a little bit. Why do we think that this can work? The idea is that we can have a very large volume, which in this case means a large area compared with cavity experiments.
Also, the tuning is relatively simple, because it is just one-dimensional tuning: the distances between the plates. We can have long measurements in broadband configurations, or short measurements in narrow-band configurations, and we can adjust this to the still unknown time it takes to set the 80 dielectrics, because we don't know how long that will take. It might take ten minutes, it might take one day to adjust the dielectrics as we want. With this flexibility we will be able to compensate. What can go wrong? Essentially, magnets that give us 10 Tesla over a one-square-metre aperture do not exist yet. Also the one-square-metre dielectrics; you just don't find them in shops, although we think we can tile smaller pieces, and we are working on that. Tolerances actually look relatively good, but we still have to keep the errors, say in the thickness of the layers and so on, at the level of 20-50 microns, which is very much doable, but challenging when one talks about one-square-metre layers. And diffraction: everything I told you about has been developed in 1D, because the wavelength of the radiated electromagnetic waves is much, much smaller than the diameter, so we know that diffraction is going to be small. But how small is something we are working on, in the lab and also in simulations. So let me conclude. I think the strong CP problem and dark matter motivate axions very nicely; even if they are not a necessity, they are a very beautiful solution to both problems. And the most predictive scenario, most predictive because the initial conditions are just averaged away, gives an axion mass of the order of 10 to the minus 4 electron volts.
One can say that there are many experimental efforts, but there is a solid player missing in this range. So, microwave emission: I've explained how the axion field in a magnetic field can excite electromagnetic waves coming out of surfaces where the index of refraction changes. I've also explained that this emission is relatively weak, but that we can enhance it by putting many different layers of dielectrics one after the other. And this is essentially the Munich, or magnetized, axion dark matter experiment, which I hope will take shape relatively soon. We are working on it now, and on enlarging the collaboration to cover the many work packages we have to develop, essentially the magnet and the tuning of the dielectrics. When a theorist has an experimental idea, everything looks very easy; when you meet with the experimentalists and they break down all the practical details you have to care about, then everything becomes a matter of five years. So more or less five years is the time it might take to see our experiment in full glory. But we already have small prototypes, of growing dimensions. So that's all. Thank you very much. Now, if you have questions, I'll be glad to answer. Okay. Thank you very, very much. It's been a very nice webinar. Before we start with the questions, let me remind everybody that you can ask your questions via the live chat on YouTube, which is on the upper right, and you can also ask questions via Twitter at this address. Okay. You might want to know that we've had as many as 30 people watching at some point, just to give you an idea of your audience. Right. So let's have a look at the questions. Does anybody in the audience have a question right now? I have a question. Please go ahead.
So, I think it was in slide eight, you showed that axion dark matter heavier than, say, 100 milli-electron-volts is excluded, or something like this? Yeah. Okay. But is it experimentally excluded, or is it that we just don't know how to produce it? No, we know that it is produced too much. Sorry, do you mean in this region here? No, no, when it's heavier than 10 to the minus 2. Here there is no problem with dark matter; the problem is that axions are too efficiently produced in supernovae. So this exclusion has nothing to do with axion dark matter; it is astrophysics. You can see here there is a bound from Supernova 1987A, and there are also some other astrophysical bounds. So there wouldn't be any problem in having axion dark matter here. But in this scenario, the scenario in which we have averaged initial conditions, we know that it has to be smaller than the full relic density; it would be 10% or 1%, something like this. I think in this range it would be smaller than 1%. Okay, I see. Thanks. So, any other question from the people here? Okay, go ahead. Sorry, I have a second related question. I think it was slide 24 or something like this, about the boost factors. Yes. So for instance, in this plot in the lower left corner, the boost factors have very different shapes. Do they only depend on the number of layers? Is that the only thing you are varying there? So the only thing we changed here is the distances between the layers. And it is not very obvious how to do it, but we can train a computer, and this is what we did here, to produce a certain boost factor in a certain band. So we decided that we wanted, for instance, in 200 megahertz, which is this band here, a certain minimum boost factor.
And then the computer can find the distances between the layers that you have to set in order to get this response. If you allow a smaller bandwidth, then the computer can tune the distances to give a larger boost factor, but always over that smaller range. And if the band is much smaller, the boost factor can be larger still. But the only thing we changed here is the distances between the layers. Okay, thanks. Well, essentially, it also depends on the thickness of the layers and on the index of refraction; here it was something like three millimetres and n equals five. But you can play this game of changing the distances, of switching between broadband and narrow-band configurations, for any index of refraction and any thickness. Okay, fantastic. Any other question? So I have a very basic axion question. Where do we get interactions that affect the EDM and not the magnetic dipole moment? Well, maybe it's not that simple, but the axion field is just the generalization of theta, this CP-violating parameter. The magnetic moment is not CP-violating, so you know that you cannot couple theta to any magnetic dipole moment operator in the Lagrangian. You cannot couple theta, but you can couple theta squared, and this is what happens. So you will have interactions with the magnetic moment, but you know they are going to be suppressed, not by one over FA, but by one over FA squared. I see. Also, you know that the predicted amplitude of the relic axion oscillation of theta today is of order 10 to the minus 19. If you square that, you get something like 10 to the minus 38, so this is super, super small.
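The trade-off described here, a narrower band allowing a larger boost, can be illustrated with a deliberately oversimplified one-gap toy: a single Fabry-Perot-like gap between a mirror and one partially reflective disk, rather than 80 disks. The `enhancement` and `tune_gap` helpers below are hypothetical illustrations of the idea, not the collaboration's actual optimization.

```python
import cmath

C = 299792458.0  # speed of light, m/s

def enhancement(gap, f_hz, r=0.8):
    """Toy resonant field build-up in a single vacuum gap (metres) between a
    perfect mirror and a disk of amplitude reflectivity r, at frequency f_hz."""
    phase = 2 * cmath.pi * f_hz * (2 * gap) / C   # round-trip phase
    return 1.0 / abs(1 - r * cmath.exp(1j * phase)) ** 2

def tune_gap(band_hz, candidate_gaps, r=0.8):
    """Choose the gap that maximizes the WORST enhancement over a band
    (broadband criterion); pass a single frequency for a peaked setting."""
    return max(candidate_gaps,
               key=lambda g: min(enhancement(g, f, r) for f in band_hz))
```

On resonance (gap = half a wavelength) the build-up is 1/(1-r)² = 25 for r = 0.8, while a slightly detuned gap gives less than 1; optimizing the minimum over a wide band instead of a single frequency flattens and lowers the curve, which is the narrow-band versus broadband choice from the talk, here with one knob instead of 80.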
If you want to couple something to the magnetic moments, you have to know that when you put it in a magnetic field, or, maybe better, in an external electric field, then your axion field will generate a magnetic field that will interact with the magnetic moment. But of course one has to see how you create this electric field. Okay. At the level of the Lagrangian, these couplings are forbidden at linear order. I see. And the other question I had: once this is built, and the five years are what it is supposed to take to build, what is the timeline? How long does the experiment need to run before you get a result or an exclusion? Well, the good thing is that we can start at whatever frequency we want. And you have seen, this is my job: this prediction here is very wide, right? In five years my plan is to narrow it down so that afterwards we can be done in a year. Unfortunately, if after that year we have not found the axion, of course this doesn't mean that we are not going to find it, but then I cannot give any timescale, because it can be anywhere. It can be in this scenario, because these are two cosmological scenarios, right? The initial conditions of the axion field are related to the axion mass today. So if we are in this cosmological scenario, I know that we have to be somewhere here, and I will narrow that down. These experiments are very fast in one sense: once you find the signal, you confirm it in a matter of, in ADMX it takes one or two seconds to find the signal. The problem is to know the frequency. In our case it will be the same.
And I think, if I have a 10% prediction for the axion mass, I can do it in one year. But if I don't find the axion in my predicted range, then this will mean that either axions do not exist, or the axion took these very particular initial conditions, because inflation gave only one initial condition. In that case the axion field can have essentially any mass and still provide all the dark matter. In that case, I think we will run for another five years, and then we will have to give another webinar to tell you about the plans. Okay, I see. So, related to this, there's a question from the live chat from Shantanu D, who asks: at what point is axion dark matter ruled out? Sorry, can you repeat? He asks at what point can you rule out axion dark matter. We are very far away from that point. But let me just say, there will always be ways of escaping all the constraints that one could use to kill axion dark matter. Still, I'm going to tell you the two most important ways of ruling this thing out. So, my argument is based on the idea that axion dark matter either takes initial conditions which are homogeneous thanks to inflation, so we are in this scenario, or takes these random, averaged initial conditions. There are only these two options. Now, in the first option, the axion field exists during inflation. And because it exists during inflation, it gets quantum fluctuations that are amplified to distances like the ones we are probing with the CMB. Essentially, the axion fluctuations become dark matter fluctuations that are imprinted on the CMB in the form of isocurvature perturbations. Now, these isocurvature perturbations we do not see, so they are a way to constrain the axion field.
Now, the size of these isocurvature perturbations is proportional to the Hubble scale during inflation, and this is something we can measure in the future. At the moment, all we can say is that these constraints do not bite if the Hubble scale during inflation was very small. But if we detect primordial gravitational waves with the next CMB polarization probes, then we will know that the Hubble scale during inflation was very large, and this will mean huge isocurvature perturbations, and this scenario is ruled out, at least in the simplest case. Then we are left only with the other scenario. And that scenario also has other problems, or other possible constraints. I think the easiest is just to do the experiment and rule it out, because here you only have to search in this region. But there are also some other interesting effects, like the one I was telling you about before: in this scenario the dark matter distribution is very inhomogeneous at very small scales, and this gives rise to what we call axion miniclusters. So in this scenario there will be a lot of small-scale structure in dark matter, a lot of clumps of dark matter. Now, these clumps can in principle be ruled out, or found, with microlensing surveys. There was a recent paper by Fairbairn and Marsh saying that if the amount of dark matter in miniclusters is larger than, I think, 10%, then this scenario is ruled out, because they would have expected microlensing events and they didn't find them. So this scenario could be ruled out by substructure microlensing, or just by direct detection.
But you see, for me these are the strongest arguments that could kill axion dark matter. There would be a sequence of different arguments that would allow you to kill all these models, although to each of the arguments I've given you there are caveats. But okay, maybe that is good for another question. Okay, super. And there's one question coming from Paolo Silva, which is somewhat long, so let's see. The axion, through the Peccei-Quinn mechanism, was introduced to explain why theta is so small. When you come to the dark matter problem, it seems that the amplitude of theta, theta zero, also has to be very small. So he is asking: aren't you trading one naturalness problem for another, washing out the original motivation for introducing axions, if they are to explain the whole dark matter of the universe? Okay, that's a fair question. Let me rephrase it. In the standard model we have to explain why theta is smaller than 10 to the minus 10, right? Now I'm telling you that a natural way to do this is to invent the axion, because whatever the initial value of the axion field, it ends up at zero. That's great. Now, if we go to the dark matter plot, and I take a very small value of the axion mass, then to have the full amount of dark matter, you see what is happening: I have to choose, or nature had to choose, an initial value of the misalignment angle which is very small. This only happens, and that's the point, Paolo, if FA is very, very large and the initial value of the axion field is very, very small. Then we are in this scenario, the pink scenario. That is the reason I label it here tuned/anthropic: there are some anthropic arguments that tell you that this might be anthropically selected.
So, wrapping up the question: first of all, in this scenario, in which I, or nature, can choose the initial value of the axion field, you can see from this diagram that the axion field has to be tuned to be close to zero. But this tuning, for getting the dark matter, is typically much milder than the 10 to the minus 10 that you have to tune away for the strong CP problem, because the energy density of dark matter goes with the square of this initial value; essentially, it is a square-root tuning. So in that case you are right that one has to tune the initial value of the misalignment angle to get the dark matter. But this is not generally the case. Here is what I have to emphasize: if FA is smaller, of the order of 10 to the 11 or something like this, then one gets the right amount of dark matter with a misalignment angle of order one, or just with these random initial conditions, where different patches of the universe simply have different values of theta at the beginning. And nothing could be more natural than this. This is also my favourite scenario, because you are taking the most natural and predictive initial conditions and getting the full amount of dark matter. But this only works for one value of FA, and you might not like that. In this simple scenario you can have a much larger value of FA, but then you have to tune the initial misalignment angle. Just remember that the tuning is in any case not as severe as 10 to the minus 10. Hello. Hello, can you hear me? Yes, very well. I have a question: in the case that you have this inhomogeneous distribution, could it be that the scale of inhomogeneity is so large that we may be in some kind of place without dark matter?
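The square-root-tuning argument above can be made concrete with the commonly quoted approximate misalignment scaling Omega_a h² ≈ 0.12 θ₀² (f_A / 10¹² GeV)^(7/6). Both the exponent and the 10¹² GeV normalization are rough textbook values I am assuming here, not numbers from the talk, and anharmonic corrections near θ₀ ≈ π are ignored.

```python
def theta0_required(f_a_gev, omega_h2_target=0.12):
    """Initial misalignment angle theta_0 giving the full observed
    dark-matter abundance under the ASSUMED approximate scaling
      Omega_a h^2 ~ 0.12 * theta_0**2 * (f_a / 1e12 GeV)**(7/6).
    Order-of-magnitude illustration only."""
    return (omega_h2_target / 0.12) ** 0.5 * (f_a_gev / 1e12) ** (-7.0 / 12.0)
```

Under this assumed scaling, f_A near 10¹² GeV needs θ₀ of order one (the natural scenario), while pushing f_A up to 10¹⁶ GeV only requires θ₀ of a few times 10⁻³, a far milder tuning than the 10⁻¹⁰ of the strong CP problem, precisely because the abundance depends on θ₀ squared.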
And so maybe we don't detect it even though it's there. What is the scale of these inhomogeneities? That's right, that's a very nice question. The scale of these inhomogeneities we can predict. The size of these patches is, if I remember correctly, something like one parsec, or a little bit smaller. So you can imagine that within this parsec there are many, many of these regions that had initial values around pi; they will collapse and form these miniclusters, small clumps of axion dark matter, with mostly vacuum between them. So within one parsec we will have a large number of these clumps. And so the answer to your question is that, for instance, inside the galaxy we have millions and millions of these small clumps, and there will of course be some remaining dark matter which is not in the form of clumps. One can estimate that at least 10% of the axion dark matter will not be localized in clumps. It can be larger, but we are working on this; we don't know the answer very well. But it could be that, instead of 100% of the dark matter being detectable on Earth at any time, it is only 10%, and the remaining 90% sits in a collection of very dense clumps, which actually are not so dense, but are of course much denser than smoothly distributed axion dark matter. And for these clumps, the estimated rate of encounters with the Earth is something like one per 10,000 years, so we will not live to see a dark matter clump going through the Earth. So this might be a problem for direct detection. That's why we are studying it a little bit in depth, but at the moment I cannot tell you anything more.
So I think your question is definitely a very important one, and in no case is all the dark matter inside these clumps; there is always a remainder, something like 10%, which is outside, and at least one can try to detect that 10%. Okay, very good. I don't see any other questions. Anybody else? Apparently not. Then let me check: there are no more questions on the live chat. And I don't know why I check Twitter, but I'm checking Twitter, and apparently there are no questions on Twitter either. So I think that's it. Yeah. Okay, so thank you very much, Javier, it's been a fantastic webinar and a very good discussion. Let me remind everybody that we're having our next webinar on the first of March, and if I am not wrong, I think it's Daniel Lopez who's giving the webinar. So we'll see you then. Thank you all for joining. Thank you, Javier, once again. It's been a pleasure. Thank you very much.