Thank you. Good morning everybody. So this is a somewhat off-mainstream topic this morning. Dan proposed me this title, "use of neutrinos while still learning about them", and when I started working on this talk I realized that it basically covers everything about neutrinos. So I decided to select just a few pictures, a few items about neutrinos as an introduction, and then the main focus of my talk will be on reactor neutrinos and possible applications for non-proliferation.

Okay, so, fundamental properties of neutrinos. That's the basic picture of our standard model, with the elementary particles of matter. We have 12 elementary particles of matter and three fundamental interactions, plus gravitation of course. The neutrinos are there, three of them, associated to the charged leptons, and they couple only to the weak interaction. For the neutrinos we will talk about today, at the 1 MeV scale, the cross-section for interaction is about 10 to the minus 43 centimeters squared. So these are obviously very penetrating particles; they can go through the Earth with no interaction. The first consequence is of course challenging detection techniques. On the other hand, we can also see them as unique probes of intense sources that we could not reach otherwise, because they escape from any kind of source and bring us information about it.

A few slides about the neutrino footprint: how did we get convinced that they are really there? You know the problem of beta decay. What was observed first by Becquerel and others is the emission of one electron when a nucleus undergoes a beta decay. In this scheme you expect a discrete energy, one well-defined energy for the electron, and what was observed is a continuous distribution, meaning that part of the energy was escaping somehow. It actually took some time to be really sure about this missing energy. Discussing with an old colleague from my lab, I got this picture from Becquerel himself. He used a beta source next to a simple dipole magnet, a small round box with several slits, and he took a picture, one of the first pictures in the history of radioactivity, and you see that electron rays are escaping through all the slits. That was the first evidence for a continuous spectrum. I don't know if this picture was forgotten or people didn't believe it, but it took 26 more years, until the last calorimetric measurements, to really get convinced that the mean energy was indeed about half of the expected one, so some energy was escaping, possibly with a neutrino.

Then 25 more years later, in '56, this guy got the first signal in a detector, and that's a good example of one intense source of neutrinos that man can control: a reactor. They used a pretty small detector; we will come back to this kind of technique later in the talk. They could observe only about three neutrinos per hour, but they could really argue about the background and show that these events disappeared when the reactor stopped, and they were awarded the Nobel Prize for this discovery. Another important observation is the neutral current process that was first observed in the Gargamelle bubble chamber at CERN, back in the 70s. Here you see the picture: there is nothing coming in.
At least we do not see any tracks entering the bubble chamber. One electron is recoiling pretty fast, emitting bremsstrahlung photons and electron-positron pairs, and there are no other products. So the interpretation of this picture is the elastic scattering of a muon neutrino, from the beam delivered by CERN into this bubble chamber, and we see only the recoil of this electron. This is a proof that neutral currents exist, and that's an important consequence of the gauge theory formalism of the weak interaction: in Fermi's theory of beta decay only charged currents were described, and the neutral currents are really a consequence of the gauge theory formalism. That was observed here for the first time.

Then one more, about the cosmic or solar neutrinos. The very first guy trying to catch these neutrinos from the Sun was Davis, using an old gold mine, with a large tank of organic liquid, and he was trying to detect a few interactions per day. Later on, in the Kamioka mine in Japan, they came with a very large detector where they could detect the Cherenkov light induced by electrons struck by the highest-energy neutrinos from the Sun. One electron recoils, similar to the Gargamelle event we were just seeing but with an electron neutrino, and the direction of the electron and of the associated Cherenkov cone is forward; that is the favored kinematics. Reconstructing the direction for all events, they could prove that it followed the direction of the Sun, and that's the first picture of the Sun from deep underground. So it proved that the Sun really burns: there are fusion processes inside the Sun that emit neutrinos via beta-plus processes.

Another interesting event, still in this Kamiokande detector, was in 1987, when we had the chance that a supernova exploded not too far from our galaxy, and they could see 11 extra neutrinos with respect to the standard background signal within a very short time; this axis here is the time in seconds. So this is really a big burst of neutrinos that confirms the scenario of the explosion of this star and the conversion of a massive quantity of protons into neutrons. For these observations of the solar and supernova neutrinos, this guy also got the Nobel Prize in 2002.

And a last interesting fact about neutrinos: I told you there were three neutrinos, because we associate them with the charged leptons, but why not more? Actually, we have nice information about this number of neutrinos. First, if you look at the decay width of the Z boson. That was an experiment done at CERN with the LEP collider, positrons against electrons: you tune the energy so that the energy in the center of mass is just at the resonance, the mass of the Z boson, and the experiments can accumulate millions of Z bosons and look at the width of this particle. The width of course depends on the lifetime, on how many decay channels are open. You can count everything you know, and at the end there is a remaining missing contribution needed to fit the data. Then you try to plug in some neutrinos, two, three or four, and that's the error bar that the fit gives you. So that's really three neutrinos. We know that there are only three neutrinos that couple to the Z boson, that are connected with the weak interaction, and that are lighter than the Z boson. Of course, if there is a very massive neutrino we won't see it here, because the decay channel is not open.
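Schematically, this counting argument amounts to dividing the invisible width of the Z by the standard-model width expected for a single neutrino species:

$$ N_\nu \;=\; \frac{\Gamma_{\rm inv}}{\Gamma_{\nu\bar\nu}^{\rm SM}} \;=\; \frac{\Gamma_Z - \Gamma_{\rm had} - 3\,\Gamma_{\ell\ell}}{\Gamma_{\nu\bar\nu}^{\rm SM}}, $$

and the combined LEP fit gives a number very close to three (about 2.98, with roughly a one percent uncertainty), which is the error bar the speaker is referring to.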
Another impressive piece of information about the number of neutrinos in the universe is this wonderful data from the Planck satellite that we got late last year, early this year. What you see here is a map of the microwave background, the remnant of the radiation of the Big Bang at the moment when light was able to propagate freely in space. Before that, the density and the temperature were too high, so the matter was in a kind of plasma state: electrons and photons were interacting very strongly. When the temperature became low enough, the electrons were caught by the protons to form hydrogen atoms and the light could escape freely, and that's what we see now. People are analyzing the very small inhomogeneities in this background. The color scale here is at the level of 10 to the minus 4, 10 to the minus 5: it's a very uniform temperature, but now they are able to see very small differences. In this analysis you look at the inhomogeneity at different angular scales: small l means large scales, like dipole or quadrupole oscillations in this map, and high l means smaller and smaller scales. I won't go into the details, but basically that's what the Big Bang theory plus expansion predicts: a big peak in the distribution of this inhomogeneity, and then several harmonics that are exponentially damped. What's nice is that the position and the amplitude of this peak depend on how matter and radiation were in equilibrium before the light decoupled, so on the number of degrees of freedom, the amount of radiation present in the universe just before the decoupling. Again, if you start plugging in different numbers of neutrinos, the amplitude and the position of this peak will move, and fitting this very accurate data, that's the number you get. So it's very impressive that just by looking at this CMB in the sky you can infer very accurate constraints on the neutrinos.

I'll stop there for the general presentation of neutrinos, but at this point we know that neutrinos are there, and they are even the most abundant matter particles in the universe, by far, above electrons, protons and neutrons. Only photons are more numerous, but I don't count them as matter particles; they are the mediators of the electromagnetic interaction.

Before going further, I need to say a few words about neutrino oscillations. That's an idea first pointed out by Pontecorvo back in the 50s. We have three neutrino states, and if you look at the quantum numbers they are all the same. Well, there is one electron, one muon and one tau neutrino, but this flavor is not a conserved quantum number; as far as I know there is no symmetry that requires this number to be conserved. So these three states can mix, and if you have a coherent superposition of these states you can have the kind of interference phenomenon predicted by quantum mechanics: when this wave packet propagates, if the states have different masses, the relative contributions of the three wave packets will evolve during propagation, and when we recombine the wave packets to detect the neutrino, we may observe a change in the flavor of the neutrino. That's the phenomenon of oscillation. So it requires different masses between the three states.
So at least two non-zero masses, and since we know that these guys have very, very small masses, you can actually have this coherent propagation of the wave packets over macroscopic lengths, and that's what we observe in many experiments. This is the basic formalism. Let's assume a neutrino is produced by your reactor. We will see it in more detail, but here we have only beta-minus decays from the fission products, so it's a pure source of electron neutrinos, anti-neutrinos actually. Then we have some propagation until we detect the neutrino, and again we must couple this neutrino to the weak interaction and tag a specific flavor. Let's assume we use only the inverse beta process, so only the electron anti-neutrinos are seen here. What happens between production and detection is the following: the electron, muon and tau flavors are what the interaction tags when you produce or detect the neutrino, but for the propagation between source and detector what matters are the mass eigenstates, the Hamiltonian eigenstates, and there is no reason why these two sets of states should be the same. We have the same trick for the quarks, for instance: there is a mixing matrix between the mass states and the flavor states. That's what I call the big theta, the mixing that converts from the flavor to the mass eigenstates. So we started with a pure electron state; through this matrix we have a combination of nu1, nu2 and nu3, and during propagation, if the masses are different, the weight of each contribution will change. When we apply the detection, we have to project back onto the flavor states using the inverse of this mixing matrix, and starting with a 100% electron state we may end up with a non-zero contribution from the two other flavors.

To describe this mixing we need three mixing angles, to mix 1 and 2, 1 and 3, 2 and 3, and we need mass differences; when you write down the formalism, what enters into play is the splitting between the squared masses, and there are only two independent splittings. That's the way we generally write this mixing matrix: a product of three rotations. Here we mix states 2 and 3, here 1 and 3, here 1 and 2, and the C and S are just the cosine and sine functions of the usual rotation matrices. Today we know basically all the parameters. The last missing angle was theta-13, and it has been measured recently with reactor experiments; I will show you the results later in my talk. And the mass splittings, you see, are very small, 10 to the minus 5 eV squared and 10 to the minus 3 eV squared, and they are well separated; there is a factor of about 30 between the two. It means that in the mixing process you really have two decoupled regimes, because of these two well-separated mass splittings.

One last point about oscillations: you heard a few weeks ago that the Nobel Prize was awarded to this guy for the discovery of oscillations. This is the third Nobel Prize for neutrinos. The first observation of a clear oscillation signal was again in Super-Kamiokande in Japan, where they were looking at neutrinos produced by cosmic rays in the atmosphere. You have down-going neutrinos produced just above the detector, but also up-going neutrinos produced in the atmosphere on the other side of the Earth, going through the Earth and then interacting in the detector. And if you compare the down-going versus up-going neutrinos, you do see a difference in the counting rates.
This is corrected for solid-angle effects and everything. The difference is just because here you have a longer baseline, a longer time of flight, which gives time for the muon neutrinos to convert into tau neutrinos. The second experiment was about the solar problem. All experiments were looking at the electron neutrinos from the Sun, and they were all reporting a 50 percent, 70 percent missing contribution with respect to the prediction. And this guy came up with a nice idea: instead of looking at the electron neutrinos using beta processes, he proposed an experiment looking at the breakup of deuterium in heavy water. A neutrino can quasi-elastically interact with the deuteron, with just enough energy to break it up into a proton and a neutron, and he was detecting the capture of the neutron after a while. This process can be induced by any neutrino flavor, electron, muon or tau, so he was measuring the sum of all the fluxes, and the sum was perfectly in agreement with the prediction. So it really proved that the missing contributions pointed out by those experiments were due to oscillation into other flavors. Oscillation is now a well proven and well understood process.

Okay, so now let me come to the main subject of my talk, which is reactor neutrinos, and we will see that they also brought a very nice contribution to these oscillation measurements. As you know, reactors are very intense sources of neutrinos. The basic idea is that when a heavy nucleus like a uranium nucleus is fissioning, the two fission products are unstable, neutron-rich nuclei, so they evolve towards stability by beta-minus processes, and each step corresponds to the emission of one electron and one anti-neutrino of electron type. There are many, many possibilities to break this nucleus into two parts, but on average there are six beta decays from the two fission products, so six neutrinos emitted per fission. One fission releases roughly 200 MeV, so if you produce one gigawatt of power, that is something like more than 10 to the 20 neutrinos per second emitted by your reactor. A very intense source, which is nice, because we need a lot of neutrinos if we want to detect a few of them. It's a pure source of anti-neutrinos of electron type, because of this beta-minus process, and the energy range is well defined too: these are nuclear processes, so between 0 and 10 MeV.

Now, if we want to work with the flux and energy spectrum of these neutrinos, we are facing a quite complex spectrum, because as I said there are something like 800 different nuclei produced by the fissions, and the number of beta branches is several thousand. So if we want to enter the guts of this total neutrino spectrum, it's a kind of nightmare. The sum of all the neutrinos emitted by the reactor first runs over all the possible fission products: we need the activity of all these products, how many fissions occurred, how many of these nuclei were present and what their lifetimes are, so that we can compute the activity. And this is multiplied by the total beta spectrum of each fission product. That's the first sum, over all the fission products. Now if I look at one single fission product, of course it has a decay scheme with several excited levels in the daughter nucleus.
So we have to make the sum over all the beta branches, weighted by the branching ratios. And this is the expression for one single branch, from the ground state of the parent nucleus to one specific state of the daughter nucleus. How do we write down this beta spectrum? Well, there is a first piece of theory that was provided by Fermi back in the 30s, which is, let's say, still quite simple. There is a normalization factor coming from the weak interaction, then the Fermi function that takes into account the fact that the leptons are escaping from a charged nucleus, so you have a Coulomb effect that changes the energy spectrum a little bit. And then the shape of this energy spectrum is dominated by the phase space factor, this expression where E0 is the endpoint of the transition, the maximum energy that is shared between the neutrino and the charged lepton. Now, this is not the end of the story; this is the very basic view for a simple transition. In reality, depending on the quantum numbers connecting the two nuclei, you can have so-called forbidden processes, when the parity changes or for larger changes in angular momentum, and in that case complicated nuclear matrix elements enter into play and bring extra energy dependence. And this is only to first order: then there is a whole bunch of corrections. QED radiative corrections, the finite-size Coulomb correction, the fact that the weak interaction itself occurs in a finite-size nucleus, the screening by the electron cloud when the charged lepton escapes the nucleus, and the weak magnetism effect, a term that appears when the energy of the leptons is high enough, a kind of weak magnetic effect analogous to the magnetic effect in electromagnetism. So that's a lot of information to deal with, and we will see that it's very difficult to get all of that from the measurements, from the nuclear databases, although we are making really great progress there. And what we really want to do with this prediction is, for instance, to look for oscillations, so we are looking for small distortions in the spectrum, let's say at the 10% scale. How can you be sure that, computing all this stuff, you will end up with 10% accuracy? This is really challenging.

So the real breakthrough came in the 80s, just before people started to look for oscillations at reactors; actually, they were waiting for these data. Schreckenbach and others installed an experimental setup at the ILL reactor in Grenoble, in France. They used a long tube under vacuum that could go very close to the core of the reactor, into a very intense thermal neutron flux, and they exposed foils of uranium or plutonium isotopes to this flux. Many fissions were induced in these foils, followed by beta decays of the unstable fission products, and some of the electrons from these beta decays escaped right along the axis of the tube and entered a magnetic spectrometer, allowing a very accurate analysis of the momentum of the electrons. By scanning the field of this magnet, they could build up the total beta spectrum associated with one fission of a uranium or plutonium isotope. These are very precious and very accurate data, because of the type of experimental setup that was used, and they are really a unique reference.
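Written compactly (schematic notation, not the exact expression shown on the slides), the sum just described is

$$ S_{\bar\nu}(E,t) \;=\; \sum_{f}\,\mathcal{A}_f(t)\,\sum_{b}\,\mathrm{BR}_{f,b}\,S_{f,b}(E), \qquad S_b(W)\;\propto\; K\, F(Z,W)\; p\,W\,(E_0-W)^2\; C(W)\,\bigl(1+\delta(W)\bigr), $$

where the first sum runs over the fission products with activities $\mathcal{A}_f(t)$ and the second over their beta branches with branching ratios $\mathrm{BR}_{f,b}$; in the single-branch shape, $W$ and $p$ are the total energy and momentum of the electron, $E_0$ the endpoint, $F(Z,W)$ the Fermi function, $C(W)$ the shape factor for forbidden transitions, and $\delta(W)$ collects the radiative, finite-size, screening and weak-magnetism corrections.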
We know that the total contribution of all the beta electrons from the fissions is this curve. That's only the sum; we do not have all the ingredients inside, but we know the sum, and that's a really big piece of information. Now, I cannot use that directly for my neutrino experiment: I need a neutrino spectrum, not an electron spectrum. So how do I convert it? Well, you could say it's very simple: there is energy conservation, so the energy of the electron plus the energy of the neutrino is always equal to the endpoint. If you look at a single branch, this is an electron branch with endpoint E0; because of the Coulomb effect, the start of the spectrum is not at zero here, and if you apply energy conservation, the neutrino spectrum is just the mirror image of this beta branch. Very simple. But you cannot apply that to a complex spectrum where several branches contribute. That's the example I show you here for this nucleus: it's a sum of three or four branches, this red curve, and you see that if you apply the mirror symmetry with respect to 1.5 MeV, that gives you a completely different spectrum than the real neutrino spectrum. That's because the mirror image has to be applied to every single branch, not to the total sum of the branches, and that's information we do not have from these measurements.

So how do we do it? The idea they developed is to use a conversion procedure. It's a very simple idea: they use virtual, or effective, beta branches. They do not correspond to any existing nucleus in nature, but they have the right shape: an allowed shape with the phase space factor, the Fermi function, etc. We leave the endpoint and the amplitude of this virtual branch free, and we fit it to the high-energy part of the spectrum. You see here that the first branch fits the end of the spectrum very well. Once this is done, you subtract the contribution of this branch, you get a new total spectrum, you fit the high-energy part again with one more branch, and so on. At the end it's possible to completely zero out the initial spectrum, so that the sum of all the virtual branches is exactly the total electron spectrum that was measured. These are virtual branches, but they have the right energy dependence from the Fermi theory, and their sum is anchored to the electron data. So now you can take every single virtual branch and convert it into a neutrino branch using this mirror trick, because for one single branch you can do that, and then compute the sum of all the converted branches, and that gives us the total neutrino spectrum. That's how we got the reference neutrino spectra for the main fissioning isotopes in the reactor: uranium-235, plutonium-239, plutonium-241. Three different curves with a complete error budget: we have a correlation matrix for these spectra with normalization and shape errors, everything well detailed and argued, and that was the reference over the last 25 years.

Okay, so it looks like we are ready to perform reactor neutrino experiments. We still need a detection process, and the golden channel is really this one, the inverse beta decay. Thinking about a neutral-current process is not a good idea, because you have only one single recoiling particle, like in the Gargamelle chamber: any gamma in your detector, in the rock outside, or any kind of background will induce the same kind of signal.
It would be very difficult to recover your few neutrinos per day among this huge background. The inverse beta decay, on the other hand, provides a very selective sequence in time and in energy. The anti-neutrino of electron type interacts with a proton of the target; this is the inverse beta process. In the final state we have a positron and a neutron. What we actually use for the target and the detection is an organic scintillator, because it combines nicely the fact that it has a lot of protons for the target and that, as soon as particles propagate in the liquid, they give signals. The positron gives a prompt signal, a first flash of light. If you look at the kinematics, the nucleons are very heavy, so basically all the kinetic energy of the neutrino is transferred to the positron; that's a way to measure the energy of the incoming neutrino. Then the neutron is emitted with a few tens of keV of kinetic energy; it takes a few microseconds to thermalize in the liquid and then it starts to diffuse, like in a reactor. And generally we dope the scintillator with a neutron absorber, gadolinium, cadmium, lithium, whatever, so that we have a clear signal for the neutron capture a few tens of microseconds later. That's a very characteristic pattern, much more difficult for backgrounds to mimic, so it reduces the background a lot, and that's the way to detect the neutrinos.

So we have the emitted spectrum. The reference spectra from the ILL data I was showing you were in log scale; on a linear scale they have this exponential-like shape. And the cross-section for this inverse beta process is a kind of parabola starting at 1.8 MeV, because there is an energy threshold to induce this reaction: the neutron and positron in the final state weigh more than the neutrino and proton in the initial state. Multiplying the two curves, we get this expected shape for the detected spectrum when looking at neutrinos from a reactor. We can do that for every fissioning isotope, uranium and plutonium, and a first interesting fact is that the number of neutrinos detected per fission of plutonium-239 is lower than per fission of uranium-235 or plutonium-241, simply because different nuclei are fissioning, so there are different mass distributions of the fission products and different neutrino spectra.

Okay, so now we have all the ingredients that we need. We have our predicted spectrum. We still need to talk to the reactor operator to know the total power of the reactor and the history of the fuel, so that we know the relative contributions of the various isotopes, but that's something we can do with reasonable accuracy. And then we have the prediction versus time of the expected neutrino spectrum that we should see in our detector. Now, if we want to look at oscillations away from the reactor: the reactor is here at distance zero, the prediction is normalized to one, and I'm looking at the neutrino flux versus distance for a mean energy of 4 MeV, which is about the mean energy I get from the prediction. In my detector, since I use this inverse beta process, I'm only sensitive to the electron anti-neutrinos. They certainly do not have enough energy to produce charged muons or charged taus on shell; you need tens or hundreds of MeV to produce that.
This is not possible from a reactor. So we can only detect the electron-type anti-neutrinos, and if they do oscillate, they convert in some fraction into other flavors, so we see a missing contribution in the detected flux. That's a disappearance: all the reactor experiments look for the disappearance of electron anti-neutrinos. And that's the formula, plugging in these rotation and mixing matrices: the amplitude of the phenomenon is given by the mixing angles, and the frequency is given by the mass splitting, the distance between source and detector, and the energy of the neutrino. Here the energy is fixed at 4 MeV, the distance between source and detector is my axis, and for the mass splittings, I told you we have two different values that differ by a factor of about 30, so I expect two different oscillation regimes, at distances that differ by a factor 30. That's what I observe here: for the larger mass splitting you expect the first oscillation minimum between 1 and 2 kilometers, and for the other one the first minimum occurs about 30 times further away from the reactor. So that's basically the expectation from the oscillation process for reactor neutrinos.

[Answering a question about matter effects:] For reactors, I would say no. If you go through a huge amount of matter, as is the case for solar neutrinos, which are emitted in the core of the Sun where the fusion occurs and have to exit the Sun, they go through a huge thickness of matter, and there you have a kind of effective potential inside the matter, because there are more electrons than muons or taus in matter, so there are more channels open to electron neutrinos than to muon or tau neutrinos. You have this matter effect that can also change the projection onto the flavor states. But you have to go through several hundred kilometers of the Earth, or of matter; otherwise it's just a dependence on the L over E ratio. So this curve, on a log scale, is just a sine curve, the same oscillation going up and down forever, and then it couples with the second one that also goes up and down forever.

[Answering a question about following a neutrino in time:] As a function of time, you would like to follow the evolution of one neutrino. This we cannot do, because the largest neutrino detector we have is Super-Kamiokande, about 40 meters in diameter and 40 meters high, a huge volume, but still the probability for a reactor neutrino to interact within this huge volume is maybe one in 10 to the 15. You would need to send 10 to the 15 neutrinos through this volume to have one interaction, so there is no way you can have several interactions of the same neutrino. I don't know if that answers your question, but we do not follow a neutrino in time; only a very few neutrinos ever interact in our detector, and with this process the neutrino is absorbed anyway, it is not an elastic scattering, it is absorbed on the proton. So the only information we have access to is the dependence on distance and energy. From the theory, we know there are particular distances where the oscillation is at its maximum; that's where we want to put the experiment. And once the detector is fixed at one distance, you can look further at the energy dependence, because this plot is for one energy fixed at 4 MeV.
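As a rough illustration of these two regimes, here is a small sketch of the vacuum survival probability at a fixed energy of 4 MeV. The mixing parameters are rounded, typical global-fit values assumed for this example, not numbers quoted in the talk.

```python
# Sketch of the electron-antineutrino survival probability versus distance,
# at fixed E = 4 MeV, illustrating the two oscillation regimes discussed above.
# Mixing parameters are rounded, typical global-fit values (assumed here).

import numpy as np

def p_ee(L_m, E_MeV, s2_12=0.31, s2_13=0.022, dm2_21=7.5e-5, dm2_31=2.5e-3):
    """Three-flavour nu_e-bar survival probability in vacuum; L in metres, E in MeV."""
    # oscillation phase: 1.267 * dm2[eV^2] * L[m] / E[MeV]
    d21 = np.sin(1.267 * dm2_21 * L_m / E_MeV) ** 2
    d31 = np.sin(1.267 * dm2_31 * L_m / E_MeV) ** 2
    c4_13 = (1.0 - s2_13) ** 2                      # cos^4(theta_13)
    return (1.0
            - c4_13 * 4.0 * s2_12 * (1.0 - s2_12) * d21   # theta_12 / dm2_21 term
            - 4.0 * s2_13 * (1.0 - s2_13) * d31)          # theta_13 / dm2_31 term

for L in (10, 1_000, 2_000, 50_000):   # metres
    print(f"L = {L:>6} m  ->  P(nu_e -> nu_e) ~ {p_ee(L, 4.0):.3f}")
```

With these assumed values, the theta-13-driven dip shows up around 1 to 2 kilometers and the theta-12-driven deficit takes over at tens of kilometers, which is the pattern being described on the slide.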
If I decide to look at this oscillation, I will set up a detector two kilometers away from the reactor, but then I still have some energy dependence, from 2 to 8 MeV, so because of this L over E dependence I will also see an oscillation pattern in my detector at a fixed distance. Those are the two lever arms we have to study the oscillation.

[Answering a question about the evolution of the source:] Yes, the source is changing. When you start a reactor with fresh fuel you are just on the red curve, and during the cycle you get a mix of the red and the green ones, so the neutrino flux decreases with time; I will come back to that. We just have to follow the history of the reactor, and then we have a prediction of the neutrino flux versus time. It is true that the reactor accumulates plutonium; we have to take that into account, and we know the flux will decrease in time. So we have a prediction versus time, we have data versus time, and we compare the two; the difference is only the oscillation. But it is true that there is an evolution in time of the neutrino spectrum, because of the operation of the reactor, which is not the case for the Sun, for instance: the Sun is essentially constant on our time scales.

Okay, so that's the oscillation we expect at reactors. I have to hurry up, otherwise I will run too long. There is this very nice experiment in Japan, KamLAND, still in the Kamioka mine: a thousand tons of liquid scintillator in a big sphere here. They were looking at all the reactors around, in Japan and Korea, tens of reactors, and they were following the history of all of them, talking to the operators, getting the power histories, the refuelling times, the loading maps, et cetera, and they computed the expected spectrum. They ran the experiment for a few years, and that's what they obtained. They put all their events in bins of L over E, because this is the ratio that matters for the oscillation pattern. The expectation with no oscillation would be a horizontal line at one, and that's what they observed instead: to my knowledge, that's the only experiment that sees a first maximum, a minimum, a second maximum, a second minimum. That's a really nice pattern, and it's very difficult to obtain such a curve. You do see that the amplitude is not a perfect sine curve, it is damped, and this is due to resolution effects; that's why it's so difficult to see several oscillation periods. Usually you try to work in this first region, and it's very difficult to get several periods.

[Answering a question about which reactor a given neutrino comes from:] They do not know. They just have a weighted contribution from all the reactors, depending on their power and distance. For a particular neutrino detected at a particular time, you absolutely do not know where it came from, but when you accumulate all the events versus time, you know that the mean behavior should follow your prediction. You cannot assign one neutrino to one reactor, so it's true that you have some uncertainty in the mean baseline: there are reactors 100 kilometers away, others 60 kilometers away, for instance, so it's a kind of smearing. That's also why, whereas in theory these oscillation minima should go back up to one, they do not, because of all this smearing. [On the solar flux at these distances:] Oh, in that case maybe it becomes comparable in flux. One kilometer away from a reactor, the reactor really dominates the flux by far; here, at these long distances, maybe it becomes comparable.
But anyway, we are not sensitive to the solar neutrinos, because those are neutrinos: the reactor emits anti-neutrinos, from fission, while the Sun is fusion and emits neutrinos, and with the inverse beta process only the anti-neutrinos can contribute. So whatever the solar flux is, we just don't care. They can induce some electron recoils from time to time, but those will not reproduce the prompt-plus-delayed time sequence that we expect from the positron and the neutron capture.

[Answering a question about backgrounds:] Maybe I can answer in more detail later in the talk, but when you are deep underground like that, one big piece of background that is really suppressed is the one induced by cosmic rays, and then they also took great care about the radiopurity of all the materials to lower the activity. How do you deal with the rest? Without going into too many details: take the background from two gammas, for instance, one gamma and then, right in the good time window, another gamma with an energy consistent with a neutron capture. That could be a fake event. But if you look at the distribution in time of this kind of event, these are accidentals: they are basically flat in time, an exponential with a very, very long decay time, one over the frequency of your background. So you can measure that online. When you look for a neutrino, instead of looking for the neutron capture just after the prompt signal, you look for it 10 seconds after, or 10 seconds before, or one second, or 12 seconds, whatever; you can do that a thousand times and you measure your accidental background online very accurately, and then it is subtracted statistically from your data. If the rate is too high to begin with, you are in a bad situation; I will show you our own data later in the talk. You can still subtract it, of course, but you will be spoiled by the statistical fluctuations of your large background. And for the muons, there are all kinds of processes induced by muons, and there are techniques to separate them from the neutrinos. But it's true that very few experiments can measure the background directly: only those looking at one or two reactors can sometimes have both reactors, or the single reactor, off, and then you measure the actual background, which is very nice, but that's a very rare piece of information.

Yeah, so that's the second oscillation pattern that was measured recently at reactors. This is actually a French concept that was developed initially by the Double Chooz collaboration. The Chooz site is in France, close to the Belgian border, with two powerful reactors, and the idea is to put two identical detectors underground, one about one kilometer away and another one close to the reactors, and just compare the measurements in both detectors. The near one measures the neutrino spectrum before it starts oscillating, and the far one sits just at the maximum of the oscillation. In this case you don't care about what the reactor operators are doing, what the power is, what the composition of the fuel is: the two detectors measure the same flux at the same time, and you just take the ratio to look at your oscillation. There are also experiments of this kind in China, the Daya Bay experiment, and in Korea, the RENO experiment. I show you here the most precise results so far. This was the last mixing angle we were missing, theta-13; we were missing it because it's kind of small.
The other angles are closer to maximal, 45 degrees, the maximum mixing you can have; this one is close to 8 degrees. And that's the kind of accuracy we can now put on these measurements. You see the data here. This is one example: that's the configuration, there are six reactors here close to the sea, several sets of detectors monitoring these reactors, and one set of detectors far away to compare with the near detectors. That's the measurement in the far detector setup, compared with the no-oscillation prediction; and in fact they do not even need the prediction, they just take the ratio between these data and the data in the near detectors. That's what they get, this nice oscillation curve. This is versus energy; if you combine the data in L over E, using the different combinations of detectors and energies, you get this nice sine curve with the distribution of all the data.

Okay, so that's the situation we have now with reactors. The KamLAND point is somewhere here: they really saw this big deficit of neutrinos due to one set of mixing parameters, and the other three experiments have seen the other regime of oscillation. And if we look at short distances on this curve, that's the nice thing: there we do not expect any oscillation. I will show you later that there were many experiments performed between 10 and 100 meters away from reactors, and they were all flat, no sign of oscillation. So that's the idea: we are now really entering an era of precision measurements. It's not a matter of saying whether neutrinos exist or not; we are doing really precise, percent-level measurements with neutrinos. So we have a kind of mature technology to deal with these elusive particles, and that brings the idea of applying neutrinos, for the first time, to a societal topic like the surveillance of nuclear reactors.

[Coming back to a question about the averaging over reactors:] As I said, when a neutrino interacts in the detector you do not know from which reactor it is coming, so you accumulate enough statistics and you compare with a prediction where everything is averaged over all the reactors. For all the reactors you have the history in time, the power history and the fuel composition, so you have the prediction versus time, and you are sampling one or two neutrinos per day from each reactor. If you wait only ten days, you are totally lost, you cannot conclude anything; but if you have thousands of neutrinos, then you can see the mean effect of the oscillation with respect to your mean prediction for the sum of all the reactors. I could have put error bars here if you want, but that's the mean effect; it's a kind of pedagogical plot, because this curve does not correspond to the prediction of this experiment, because of all these smearing effects. That's right.

Okay, so my only message here is to point out that we could have this application at short distance from a reactor. Why short distance? Because the idea is to discuss with the IAEA people and possibly provide a new tool for the surveillance of reactors. We don't want to bring KamLAND, a thousand tons of liquid scintillator, close to a reactor; it has to be something small, at the one-cubic-meter scale, with a footprint of a few meters, easy to install. And if we go close enough to the reactor, like a neutrino camera, we could have enough signal to follow the operation of the reactor.
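To give an order of magnitude of what "enough signal" means for such a neutrino camera, here is a rough back-of-envelope sketch. The cross-section per fission, proton density, distance, power and efficiency are assumed round values chosen for illustration, not figures taken from this talk.

```python
# Back-of-envelope inverse-beta-decay rate near a reactor.
# All numbers are round illustrative values, not the speaker's exact simulation.

import math

P_TH = 3.0e9                      # thermal power [W], assumed ~3 GW PWR
E_FISSION = 200e6 * 1.602e-19     # ~200 MeV per fission, in joules
SIGMA_F = 6e-43                   # spectrum-averaged IBD cross section per fission [cm^2]
L = 2500.0                        # core-to-detector distance [cm] (25 m)
TARGET_MASS = 0.85e6              # ~1 m^3 of organic scintillator [g]
H_PER_GRAM = 7e22                 # free protons per gram of scintillator (rough)
EFFICIENCY = 0.5                  # assumed detection efficiency

fissions_per_s = P_TH / E_FISSION
n_protons = TARGET_MASS * H_PER_GRAM
rate_per_s = n_protons * SIGMA_F * fissions_per_s / (4 * math.pi * L**2)

print(f"fissions per second : {fissions_per_s:.2e}")
print(f"antineutrinos per s : {6 * fissions_per_s:.2e}")   # ~6 per fission
print(f"detected IBD per day: {rate_per_s * 86400 * EFFICIENCY:.0f}")
```

With numbers of this kind one ends up with of order a thousand detected interactions per day, which is why a cubic-meter-scale detector a few tens of meters from a power reactor can indeed follow its operation.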
So here is what we can say about the evolution of the fuel in a nuclear reactor. As you know, at the beginning of a cycle in the life of the reactor, one third of the fuel is replaced by fresh fuel. When the reactor is started it is dominated by fissions of uranium; there is still some plutonium in the core that also contributes to the fissions. As the reactor operates, uranium is burnt while plutonium is produced by neutron captures and contributes more and more to the fissions, up to roughly 50-50 at the end of the cycle. So, as we said on the previous slide, close to the start of a cycle the expected spectrum is close to this red curve of uranium-235, and as we go further into the cycle it decreases and moves toward this green curve, although we never reach it, we are never in a pure plutonium regime: the curve evolves from the red towards the green one.

If we put numbers on that: one fission of uranium releases less energy than one fission of plutonium, and that is not negligible, it is a 5% difference. So if you want to operate your reactor at the same power, you need 5% fewer fissions with plutonium, so you have at least 5% fewer neutrinos just because of that. Then, looking at all the fission products, the number of neutrinos per fission is also lower for plutonium, and the mean neutrino energy per fission is also lower, so the interaction cross-section is lower. All the factors add up: if we compare 1 gigawatt produced by pure uranium-235 fissions with 1 gigawatt from pure plutonium-239 fissions, the ratio in the expected neutrino flux is 1.6.

So if we follow the history of the reactor, that's the kind of curve we expect. This is a simple simulation for a 1 cubic meter detector with 50% detection efficiency, set 25 meters away from a standard PWR, let's say. We assume that the power is constant, and because of this evolution of the core, the neutrino rate we detect decreases in time. When the reactor stops to change part of the fuel, one third of the spent fuel is replaced by fresh, pure uranium fuel; this is 100 or 200 kilograms of plutonium that are removed from the core. The reactor restarts at the same power, but we do see the jump in the neutrino rate, because we changed the composition of the core. And that's the idea for non-proliferation: when we remove 200 kilograms of plutonium, we do see it. The question is how far we can go before this becomes a really relevant means of surveillance, useful for the IAEA.

The principle is really this control of the plutonium content. As you know, there are two ways to produce weapon-grade material. One is at the level of enrichment: if you push the enrichment process to very high levels, towards pure uranium-235, that's one way to build a bomb. The other possible material is plutonium-239, which does not exist in nature but is produced in the reactor. Since we do produce plutonium, and it is chemically different from the rest of the fuel, it can be separated; it's not an easy process, but that's one possibility. So the idea of monitoring with neutrinos is to survey this stage and to know what amount of plutonium is in the reactor.
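In compact form, and only as a schematic summary of what was just said, the detected rate can be written as

$$ N_{\rm det}(t)\;\simeq\;\frac{\varepsilon\,N_p}{4\pi L^2}\;\frac{P_{\rm th}(t)}{\sum_k \alpha_k(t)\,E_k}\;\sum_k \alpha_k(t)\int \sigma_{\rm IBD}(E)\,S_k(E)\,dE , $$

where $\varepsilon$ is the detection efficiency, $N_p$ the number of target protons, $L$ the distance to the core, $P_{\rm th}$ the thermal power, and for each fissioning isotope $k$ (uranium-235 and 238, plutonium-239 and 241) $\alpha_k$ is the fission fraction, $E_k$ the energy released per fission and $S_k(E)$ the antineutrino spectrum per fission. At fixed power, the growing plutonium fractions both reduce the number of fissions slightly (more energy per fission) and lower the detectable spectrum, which is exactly the burn-up decrease shown in the simulation.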
Any diversion of plutonium would then be associated with a change in the neutrino rate. Now, there are two stringent specifications to meet before this is of interest to the international agency. First, they want something compact, portable, possibly operating close to the surface, all the things you don't want for a neutrino detector: you really want huge targets deep underground, right? So that's the first challenge. It has to be safe, of course, which is okay I think for this kind of application: we are outside the containment vessel, outside the reactor building, and there is no big impact we can expect on the safety of the reactor. And it has to be simple, because the idea is to avoid on-site inspections, which are the very costly part of reactor monitoring for the agency. That's one asset of the neutrinos: you have a direct connection with the fission processes inside the core, and you can have a remotely controlled detector sending encrypted data to the agency in Vienna. So the big challenge is that we have to simplify our state-of-the-art neutrino detectors, make them smaller and cheaper, and then operate this less performing detector in a much higher background environment. Those are two opposite challenges we have to deal with.

So, a few words about the background we have to fight against. Since we are using this detection process with a positron and a neutron in the final state, as we discussed a little bit following the earlier question, one first class of background is the accidental background: one gamma from whatever radioactivity, and if we are close to the core it could be leakage from the core itself, plus one neutron or another gamma with an energy compatible with the signal of the neutron capture. What can we do against that? We have to shield the detector with polyethylene and lead to absorb this radiation from the environment, and the remaining contribution, as I said, we can measure using this off-time window technique and subtract it. But we first have to build enough shielding around the target to reduce this contribution to acceptable levels.

The other kind of background is what we call the correlated background. This time a single particle induces the whole sequence. Here it is a muon from a cosmic-ray shower: if this muon interacts close to the detector, in the ceiling or in the shielding around the detector, we don't see the muon itself, but it can produce, by spallation, a fast neutron, fast enough to knock several protons in our target. We see the recoiling protons and we interpret that as a prompt signal; this mimics the positron. Then the neutron thermalizes, diffuses and gets captured, which is the second part of our signal. There is not much we can do against that, except that the prompt signal here comes from recoiling protons, not a positron, so we can use the pulse-shape discrimination capability of the liquid. What can we do against these backgrounds in general? Well, if we stay out in the parking lot, at the surface, nobody has achieved a neutrino detection up to now; the natural cosmic background is just too high. We still need to be, like here in this room, with one or two ceilings above our heads; that's enough to attenuate the lowest-energy showers. And around our detector we have muon vetoes to try to tag the muons passing nearby; they don't remove everything, but at least part of it.
And as I said, the last piece of information we can use is the pulse-shape discrimination between a recoiling proton and a positron: the shape of the signal can be different. At the end, the remaining contribution can still be subtracted if we have reactor-off data, which will be the case here because it's one detector very close to one core, so each time they refuel we have reactor-off data.

I will show you interesting data from previous experiments. The first one, a pioneering experiment, is from a Russian group at the Rovno reactor. It was a very nice site: they were only 18 meters away from the core and just underneath the reactor, so huge protection against cosmic-ray muons, and still a good piece of shielding between the core and the detector, so the background was not that high. It was a quite simple detector; you see just the mechanical structure here, with all the PMTs that are plugged into these holes, and inside there is one cubic meter of liquid scintillator doped with gadolinium. That's the kind of data they took in '88. You do see that the neutrino rate follows the operation of the reactor, you see the on-off periods. More impressive, they were able to accumulate 174,000 events at that time, and that's the kind of spectrum they got. And even more interesting, looking at the rate versus time, they observed this decrease of the neutrino rate due to the accumulation of plutonium along the cycle; this is a one-year cycle here. And because they had such large statistics, they could even look at the shape variation of the spectrum: when you go from uranium-rich to plutonium-rich fuel, there is a slight tilt of the spectrum, which is really difficult to see, but they were able to see it with a nice significance by taking the ratio between the end and the start of the cycle. So that basically validates the idea that we do have information from the neutrinos.

Then there was an American experiment done in the early 2000s, close to the San Onofre reactor. [Answering a question from the audience:] It depends on your detector; that's also a compromise. This is the measured energy spectrum. Here, in the Rovno detector, you see you have many PMTs all around; it was back in the 80s, and on that detector I don't know what the safety issues were. It's maybe not very easy to put electronics close to the liquid, with leakage possibilities at the bottom of the vessel and things like that. So now we are trying to develop concepts with everything on the top, something simpler and safer, but each time you lose energy resolution, light collection, things like that. It's always a trade-off between safety, simplicity, and this kind of very nice resolution. The San Onofre experiment is the extreme example in the US: they really wanted to go with a very simple, light detector, something you could easily bring in a truck, put close to the reactor and just leave alone for one or two years. And that's what they did at San Onofre, using the so-called tendon gallery: there are large cables that go through the containment dome, with big screws that they tighten from time to time to keep the whole structure stable, and this gallery runs all around the building and was large enough to install the detector. The scale here is one meter by one meter; this is the target.
It is just four cells next to each other, with two PMTs on top, filled with liquid scintillator. The shielding is made of shelves bought at Kmart or wherever, with aquariums filled with water; no lead, no shielding other than that, and big plates of plastic scintillator around to tag the muons passing nearby. So it was a very low-cost, very simple design, and the site is nice too: they are well protected by the building against the muons, not very deep but still fine, and they are far away from the reactor, so no gammas from the reactor, nothing. And that's the kind of data they could obtain. Again, you see the decrease of the neutrino rate versus time. They did some statistical studies: within four hours they can say, at the 99% level, that the reactor has stopped. Maybe not very useful, but in case of an undeclared stop to divert material, that's maybe a useful piece of information if you have no other surveillance. The reactor stops, and this is the level of background with respect to the signal, which is quite nice. And when they start again, you see the jump, because in that case they removed 250 kilograms of plutonium from the reactor. Now the drawback, if you look in more detail: one point here is 30 days, one month, and this step corresponds to 200 kilograms of plutonium. So this is the limitation of this measurement. It's such a simple detector that they lost a lot in detection efficiency: only 10% of the neutrinos interacting in their detector were usable by the analysis.

The last experiment I want to talk about is Nucifer. I was involved in this project at CEA Saclay. There we had the possibility to install a small neutrino detector; again, it's close to a one cubic meter vessel, with an acrylic buffer here to decouple the liquid from the PMTs and just two rings of PMTs on the top. It is a double-walled vessel, so a very safe and very stable detector, and we were able to install it just against the wall of the reactor pool, seven meters away from a 70 megawatt reactor. I think that's the closest measurement ever attempted at a reactor. Of course, there the background from the reactor is huge: when you enter the room you do see the gamma ambience on your dosimeter. You can stay in the room, but your personal dosimeter sees it, and I can imagine that if you put a germanium detector there you would see a lot of things. So it's a very big challenge in terms of background, but we learned a lot with this experiment. Also, in developing it we tried to use commercial components, nothing really fancy, so that it is a kind of pre-industrial stage for a further deployment of several copies of this kind of detector. As I said, we had a very large background, so that's the kind of shielding we put around the detector: plastic scintillators to veto the muons passing nearby, then all the walls covered with polyethylene, and the outer layer covered with lead bricks.

And we had several surprises, as neutrino physicists. I didn't know that the water of the primary loop in a reactor could be activated. Sorry, but it looked obvious to all the reactor people when we discovered it; we just didn't know. There is this nice process when the water goes through the core.
There is an (n,p) reaction on oxygen-16 that produces nitrogen-16, which is a beta emitter with gammas in the 6 to 7 MeV range and a lifetime of a few seconds. In the Osiris reactor they wanted to be able to work on the pumps and the cooling system while the core was operating, so they built a kind of chicane where the water of the primary circuit makes several turns: it takes about a minute to exit this big vessel, time for all the nitrogen-16 activity to decay. So all that activity was decaying in the room next to us, and we didn't know about it. When we turned on the experiment for the first time and looked at the barycenter of all the events, it was pointing over there, while the reactor was over here. That was kind of surprising, and we had to add several pieces of shielding.

We did what we could, but in the end that's what we got in our last data. This is the background spectrum when the reactor is off, in number of photoelectrons: 1 MeV is about 350 photoelectrons, this is 2 MeV, etc. And when the reactor turns on, that's what we observe in the singles rates. We have something like 300 neutrinos per day, and these are counts per second, so it's a large background: 0.1 hertz here, 1 hertz on top. And the bad news is really in this high-energy window, where we expect the neutron capture on gadolinium. That is an 8 MeV signal, with energy escapes and resolution effects, but still we expected the high-energy window, where we want our neutron capture signal, to be clean. We knew that at low energy, where the prompt signal is, there would be plenty of background, but for the neutron partner we wanted a very clean energy window above all the radioactivity, and that's not the case here: we have background in the high-energy window for the neutrons and in the low-energy window for the prompt positrons. In this situation your accidental background scales with the square of the reactor power, not simply with the reactor power, because you have reactor-induced background in both windows. And that's basically what we observe: 300 neutrinos per day, but 4,700 candidates per day. Three hundred are neutrinos; all the rest is background. So that's the kind of background we have to deal with.

And you see here what I was talking about: this is the time between the prompt and the delayed signal, in microseconds. The accidentals are rather flat, as we expect, and you see that you can measure them pretty well; there is a good correspondence between these off-time-window events and our candidates. So we can subtract this flat blue offset, and in the remaining events above it you have the correlated events, with the nice exponential dependence given by the neutron capture time. In that part we have the neutrinos together with all the correlated background from the muons. And for the muons, the idea was: whatever background we have, we measure it during the off periods and we subtract it. That's not that easy, because if you look at the muon rate in our detector it clearly correlates with the atmospheric pressure, which makes sense: these particles interact in the first layers of the atmosphere, and depending on the atmospheric pressure you have more or less attenuation before they reach the ground. So we had to make sure that the mean pressure, or mean muon rate, was the same during the on and off periods; we had to look at this correlation and correct for the effect. It was more or less okay. But at the end, that's the signal we have.
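A toy version of the time-correlation analysis just described: accidental pairs are flat in the prompt-delayed time difference, while correlated pairs follow an exponential with the neutron capture time constant. The rates and the 30-microsecond capture time below are invented for the illustration; they are not Nucifer numbers.

```python
# Toy sketch of the prompt-delayed time analysis: flat accidentals plus an
# exponential correlated component, fitted together.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
tau_capture = 30.0                               # assumed capture time [us]
dt_corr = rng.exponential(tau_capture, 3000)     # correlated (neutrino-like) pairs
dt_acc = rng.uniform(0.0, 200.0, 5000)           # accidental pairs, flat in dt
dt = np.concatenate([dt_corr, dt_acc])
dt = dt[dt < 200.0]

counts, edges = np.histogram(dt, bins=50, range=(0.0, 200.0))
centers = 0.5 * (edges[:-1] + edges[1:])

def model(t, amp, tau, flat):
    """Exponential (neutron capture) on top of a flat accidental level."""
    return amp * np.exp(-t / tau) + flat

popt, _ = curve_fit(model, centers, counts, p0=(200.0, 20.0, 50.0))
print(f"fitted capture time ~ {popt[1]:.1f} us, flat accidental level ~ {popt[2]:.1f} per bin")
```

The flat level is what the off-time windows measure online and what gets subtracted; the exponential part still contains both the neutrinos and the muon-induced correlated background, exactly as described above.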
These are just what we call our neutrino candidates. You see that fitting an offset here gives something compatible with zero, with very nice accuracy: there is no sign of remaining accidental background. The decay time of this exponential is just the neutron capture time that we expect in our liquid. So after subtraction of all the backgrounds, that's our neutrino rate per day. It's the same kind of plot as for Rovno. You do see the larger error bars, because of this large background spoiling the statistical accuracy, but still we do see the on and off periods. And now, what can we say about the evolution of the fuel in Osiris? Well, that's a simulation of the Osiris cycles. We assume that we start with a fresh core with pure uranium. In Osiris the uranium fuel is enriched at 20 percent, so it's really dominated by the fission of uranium. Moreover the cycles are only 20 days long and they refuel very often; that's why we need seven or eight cycles to reach equilibrium. But after that the variations are really very, very small, and the impact on the neutrino flux is below one percent between the beginning and the end of a cycle. So there is no way, with our error bars, that we can see such a small evolution: our data are compatible with flat, and there is no way we can resolve these sub-percent slopes. What we tried to do instead is to discuss this sensitivity to the plutonium content in the framework of what they call the PMDA, the plutonium management and disposition agreement. The U.S. and Russia have big piles of military-grade plutonium that they have dismantled from nuclear weapons and have to destroy, and there is now an agreement between these two countries to burn it in fast reactors, so that after irradiation in the reactor part of the plutonium is just burnt and transformed into fission products, and the remaining plutonium is mixed with all the other isotopes, which is not usable anymore for military purposes. So we studied this kind of scenario. We assumed that Osiris was now burning MOX fuel, and we took our accuracy within these large background conditions, what we have now: we have measured the rate for Osiris running with uranium only, and by simulation, keeping our error bar, we put more and more plutonium in the core and look at the variation of the rate. And we see that we reach the two sigma, the 95 percent, deviation after roughly 1.5 kilograms of plutonium, which is a small quantity of plutonium because it's a small reactor of course; that's the advantage of looking at a research reactor. If we look at the fraction of the total fissile mass, it's about 10 percent. So with this kind of experiment, with this kind of background, which really depends on the site, because for the U.S. experiment the background was a lot smaller, here at Osiris that's the kind of accuracy we could reach. Okay, I think I have to stop soon, so I will just skip a lot of stuff at the end of the talk. So where do we stand in terms of reactor surveillance? There have been several experiments that demonstrate some capability to monitor the plutonium content with neutrinos. Still, it's clear that we need to demonstrate further sensitivity and deployment capability before it is of any use for the agency.
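For a feeling of where numbers like "1.5 kilograms, about 10 percent of the fissile mass" can come from, here is a back-of-the-envelope version of the argument: plutonium-239 fissions give fewer detectable antineutrinos than uranium-235 fissions, so at fixed power the rate drops as plutonium replaces uranium. All inputs below are rough assumptions chosen to land near the quoted sensitivity; the real study uses a full core simulation:

```python
sigma_per_fission_U235 = 6.7e-43   # assumed detected IBD cross-section per U-235 fission (cm^2)
sigma_per_fission_Pu239 = 4.4e-43  # assumed detected IBD cross-section per Pu-239 fission (cm^2)
fissile_mass_kg = 15.0             # assumed fissile inventory of a small research core
rate_uncertainty = 0.017           # assumed fractional uncertainty on the measured rate

def fractional_rate_drop(pu_kg):
    """Fractional drop of the antineutrino rate when pu_kg of the fissile mass
    is Pu-239 instead of U-235, crudely treating the mass fraction as the
    fission fraction and keeping the fission rate fixed."""
    f = pu_kg / fissile_mass_kg
    mixed = f * sigma_per_fission_Pu239 + (1.0 - f) * sigma_per_fission_U235
    return (sigma_per_fission_U235 - mixed) / sigma_per_fission_U235

pu = 0.0
while pu < fissile_mass_kg and fractional_rate_drop(pu) < 2.0 * rate_uncertainty:
    pu += 0.05
print(f"2-sigma deviation reached around {pu:.2f} kg of Pu "
      f"({100 * pu / fissile_mass_kg:.0f}% of the fissile mass)")
```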
For power reactors, they already have a bunch of monitoring tools: they have seals, they have cameras on site, together with inspections. It looks difficult to divert material from these standard commercial reactors. And for research reactors, they also have tools to look at the temperature of the primary loop and follow the history of the power, and we would have to compete with these kinds of low cost instruments. There was a kind of niche for what they call the bulk fuel or online refueling or online reprocessing reactors. There are several ideas, like the pebble bed reactors or the thorium molten salt reactors. There it's more difficult to control what goes in and out, because the refueling is continuous, and so having a global picture of the operation with the neutrinos could be a nice tool. But because of this online refueling, what you see is that the neutrino signal is actually pretty stable, and even in a scenario where you divert plutonium step by step through the way you refuel your reactor, it's very difficult to get a big imprint in the neutrino signal. So that's one limitation also for this kind of reactor. And then there is this idea, actually suggested by the IAEA people that we met, to monitor the disposal of weapon-grade plutonium. There it could be rather simple: you just make sure with the neutrinos that, okay, we detected that integral number of neutrinos, so there was irradiation for sure, and if we compare that with the power, it means that plutonium was actually burnt, not only uranium. So it's just looking at integral numbers, it's a lot simpler, and that could be a nice piece of information. And to go further we are working on new detection techniques, so we have a nice synergy with fundamental research activity. I wanted to present that at the end of the talk, but I think I will skip it because I do not have enough time. Just one word about nuclear data, because I think you had many talks in this school about nuclear data. For all these experiments, the oscillation searches and also the reactor monitoring, we need predictions, and pretty accurate predictions. And one problem we had when looking back at these predictions and trying to get more accurate data: let's consider first the conversion procedure. So I remind you: we take the total electron data and we fit the high energy part with a virtual branch. Again and again, with this iterative process, we end up with a sum of 30 virtual branches, and then we can convert to neutrinos. And that's the expression of one virtual branch: there is an endpoint and a normalization factor, and these are fitted on the data, the endpoint and the normalization factor. Fine. Now, the Fermi function: this is the Coulomb effect, because the electron is escaping from a charged nucleus. But what is the charge of a virtual branch, of a nucleus that doesn't exist? So we have to come up with some approximation by looking at the nuclear databases. We do see some correlation between the endpoint and the Z of the decaying nucleus, so we fit this dependence and have a kind of model, so that when the fit goes to higher endpoints it also moves the Z. And then there are more complications, like the shape factor for forbidden transitions: what is the forbiddenness of a virtual transition? By default we set that to one and assumed it was an allowed transition. And the same thing for all these other corrections: there are several approximations, several effective ways to implement the weak magnetism and the Coulomb effects.
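To make the discussion of the virtual branches a bit more concrete, here is a generic allowed-shape form of one such branch, in my own notation; the correction terms actually used in the published analyses are more detailed than this sketch:

\[
S_k(E_e) \;=\; N_k \, p_e \, E_e \, \bigl(E_{0,k} - E_e\bigr)^2 \, F(\bar{Z}_k, E_e)\,\bigl(1 + \delta_k(\bar{Z}_k, E_e)\bigr),
\]

where \(E_e\) and \(p_e\) are the electron total energy and momentum, \(N_k\) is the fitted normalization, \(E_{0,k}\) the fitted endpoint, \(F\) the Fermi function evaluated at an effective charge \(\bar{Z}_k\) taken from the empirical endpoint-to-Z correlation in the databases, and \(\delta_k\) lumps together the shape factor, the weak magnetism and the other Coulomb-type corrections mentioned above.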
And so what we tried to do is to revisit these green and red points, not this one for the moment, but these two. And we just pointed out that back in the 80s, when they provided these reference spectra, they did a kind of crude job here that could be improved, keeping the same idea, the same formalism, but refining these approximations. And that was enough, actually, to move the prediction. So that was the old prediction; this is the deviation with respect to the reference spectra. At that time they simply thought it was good enough, and I think it was, but now we want more and more precision. We just revisited these two approximations, and the consequence is a slightly different predicted shape, the slope, but the main effect is a 3-4% increase of the prediction. We published that in 2011, and a few months later it was checked by another theoretician at Virginia Tech and basically confirmed. So we were left with this funny situation: the original motivation was to provide an accurate prediction for the Double Chooz experiment, and we discovered that it was impossible to improve the error bar, but at least we could correct this bias. So the prediction moves up by 4% with the same error bar. That's the first thing. And then we said: but wait, now we have a new prediction, so we can compare it with all the previous experiments that were using the old prediction. What's the situation now? Reading back all the papers: of course, they were using this interaction cross-section for inverse beta decay. And the way it is calculated, because to compute this prediction we need both the emitted spectrum and the cross-section, the cross-section is normalized by the lifetime of the neutron. That's a way to absorb all the theoretical complications, renormalizations and so on. And this parameter has evolved in time, constantly decreasing. If you look at the publications, there is a kind of exponential dependence of the neutron lifetime versus publication date; I don't know if it's a psychological effect across the experiments or what. So just taking the current, more accurate value gives another shift of plus 1.5%. So in the end the predicted spectrum is 5.5% higher. If you use the conversion method there is no discussion: this parameter has to be updated to the current value, and the conversion procedure is better when you have better approximations. So that's the situation we have now. Again, this is the neutrino flux versus distance from the core. This is the KamLAND point. These are the few points for the theta-13 measurements. And these are all the measurements that were done between 10 and 100 meters away from a core: about 20 of them. Within the error bars they were all in agreement with the previous prediction, but now the prediction moves up by 5.5%. So we are missing about 6.5%, because the average of all these measurements was already 1% lower than the old prediction, still within the error bars, so that's minus 1% plus the 5.5%, a roughly 6.5% deficit with respect to the prediction. So maybe something is wrong with the prediction. That's one piece of work we are dealing with now, but so far there is no firm proof that it is wrong. Or there is another way to explain a deficit of neutrinos. Look here: deficit of neutrinos, deficit of neutrinos, and what happened? Oscillation. They were oscillating to other flavors. So maybe there is a new oscillation.
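Taking the quoted numbers at face value, measurements averaging about 1% below the old prediction and the prediction now about 5.5% higher, the measured-to-predicted ratio drops to roughly 0.94, and a short-baseline oscillation into a new state is one way to produce such a flat deficit. A minimal two-flavour sketch of what that would mean quantitatively, with an illustrative benchmark point rather than fitted values:

```python
import numpy as np

def survival_probability(L_m, E_MeV, sin2_2theta, dm2_eV2):
    """Electron-antineutrino survival probability for one extra mass splitting:
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[m] / E[MeV])."""
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2_eV2 * L_m / E_MeV) ** 2

sin2_2theta, dm2 = 0.10, 1.0            # assumed benchmark point (eV^2-scale splitting)
E = np.linspace(2.0, 8.0, 400)          # detected antineutrino energies, MeV (flat weighting)
for L in (7.0, 15.0, 100.0, 1000.0):
    mean_P = survival_probability(L, E, sin2_2theta, dm2).mean()
    print(f"L = {L:6.0f} m : <P_survival> ~ {mean_P:.3f}")
```

With an eV-squared-scale splitting the oscillation washes out already at tens of meters, leaving an energy-independent suppression of about half of sin^2(2*theta), which is why the anomaly shows up as a flat offset in the existing 10-100 meter data and why the new searches want to sit as close to the core as possible.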
The problem is that when you look at the mixing matrix, all the possible mixings are there already; there is no room for a new mixing among the neutrinos that we know. So if we want a new oscillation at very short distance, we have to invoke a new neutrino. And since we know from CERN and from cosmology that there are only three active neutrinos, if it is new then it has to be a sterile neutrino, which is an old idea of the theoreticians; there is a nice place to plug it into the standard model. So that's the kind of contour: this is the mixing angle and this is the mass splitting. It turns out that there are other experiments that had kind of strange anomalies at the two sigma level. You cannot really conclude from that, two sigma effects in solar neutrino detections, things like that, and nobody believed in them, because the reactor data were just in agreement with the prediction. And now we move the prediction, and the reactor deficit is in agreement with all these strange anomalies that you had before. Okay, a three sigma level: it's not a proof, it's just a tantalizing hint. So there is a whole program now in the world trying to look for these new neutrinos. And, let's see, I have to stop here, just to tell you that these are the two efforts we are putting into this topic now: the prediction and the sterile neutrinos. Could there be some biases in the conversion of the spectrum? There is this article by Anna Hayes pointing out possible sources of error in the converted spectrum. We are working now on two possible contributions; the first is the forbidden decays. I was telling you that when we fit with these virtual branches, we use allowed decays. But when you look at the nuclear data, there are a lot of forbidden decays. So this is the relative contribution of the various decays: allowed, first forbidden with delta J equal to zero, first forbidden, et cetera. And you see that the allowed decays actually dominate only at very low energy, and then it's all about forbidden decays. And we use only virtual branches of allowed type: is that a bias in our procedure? So we are working on that; I have no time to go into too many details. The new input that we expect on this topic is that there are now codes able to compute the wave functions of all the nuclei, which should be able to cover most of the fission products. They are trying to predict the lifetimes of all the beta transitions, which is a very difficult exercise, and you see that they have quite nice results; they are able to put more and more nuclear effects into their calculations. And what's nice is that we should have at least new information for all the fission products, to really see what the possible changes in shape from these forbidden decays are. Once we have this input, it will give us a reasonable range of variation of the shapes, because right now we do not have enough information from the nuclear community, and we can plug that into the prediction and see how much it moves and what the error is. And the other old idea is to compute the spectrum from scratch using the nuclear databases: take the 10,000 branches and sum them together. But we know that one of the big biases in the nuclear measurements is the so-called pandemonium effect: the way people measured the beta decay schemes before had a systematic effect that tended to put too much weight on the high energy transitions, missing all the quasi-continuum part of the transitions.
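A skeleton of that summation approach, just to show where the evaluated inputs enter: the total spectrum is the fission-yield-weighted sum, over fission products and over their beta branches, of individual branch spectra. Everything below is a toy stand-in, crude allowed shapes with no Fermi function and invented nuclides and yields; real calculations take the branches from decay databases and the abundances from evaluated fission yields such as JEFF or ENDF:

```python
import numpy as np

E_nu = np.linspace(0.0, 10.0, 1001)     # antineutrino energy grid, MeV
dE = E_nu[1] - E_nu[0]
M_E = 0.511                             # electron mass, MeV

def allowed_branch(endpoint):
    """Very rough allowed-shape antineutrino spectrum for one beta branch
    (no Fermi function, no corrections), normalised to one antineutrino.
    Only meant to show where the real decay data plug in."""
    T_e = np.clip(endpoint - E_nu, 0.0, None)            # electron kinetic energy
    E_e = T_e + M_E                                      # electron total energy
    p_e = np.sqrt(np.clip(E_e**2 - M_E**2, 0.0, None))   # electron momentum
    shape = np.where(E_nu < endpoint, p_e * E_e * E_nu**2, 0.0)
    norm = shape.sum() * dE
    return shape / norm if norm > 0 else shape

# toy stand-in for evaluated data:
# {nuclide: (cumulative fission yield, [(branching ratio, endpoint MeV), ...])}
toy_inputs = {
    "toy_nuclide_A": (0.05, [(0.7, 8.0), (0.3, 5.5)]),
    "toy_nuclide_B": (0.03, [(1.0, 6.5)]),
}

total = np.zeros_like(E_nu)
for cum_yield, branches in toy_inputs.values():
    for branching_ratio, endpoint in branches:
        total += cum_yield * branching_ratio * allowed_branch(endpoint)

print(f"antineutrinos per fission from these toy nuclides: {total.sum() * dE:.3f}")
```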
And this pandemonium effect is being corrected for the short list of nuclei that contribute a lot: you see that at high energy a few nuclei contribute a large part of the spectrum, and people are re-measuring them with total absorption spectrometer techniques. And that's the kind of correction they are putting in, nucleus after nucleus. You see here the beta strength distribution from ENSDF, with much more strength at high energy, and with the new technique everything comes down to lower energy. And that's the kind of effect you can have at high energy with a change for one single nucleus. This is rubidium-92: you change only this nucleus from red to blue in the calculation of the total spectrum, and at high energy, this is the high energy part, you do see some change. And now they are reaching a point where the remaining contribution of unknown or poorly measured nuclei is getting smaller and smaller in this regime here. At high energy it's still difficult, but here we are starting to have a bunch of really good data, and maybe we can converge toward the same accuracy as the conversion procedure. One problem that remains with this method, even if you have good information on all the transitions, with the right shapes, the right branching ratios and everything, is that you need to combine it with the fission yields, because you need to know the abundance of each fission product in your reactor. And then you have to use the evaluated fission yields from different databases, and right now people are pointing out in the literature that if you use the JEFF or ENDF databases, that's the kind of red or blue curve that you can get for the prediction of the spectra. So this summation method is also sensitive to that. This is not the case for the conversion method, and that's another piece of the problem we have to tackle. Okay, I will just keep the search for sterile neutrinos short, just to tell you that there are several experiments and nice ideas to go further with small detectors close to a reactor. I will maybe just point out one idea that I think is pretty neat: the SoLid experiment, which wants to look for the sterile neutrinos very close to a reactor. The idea is to build a kind of matrix of small cubes like that, 5 centimeters by 5 by 5. These are plastic scintillators with wavelength-shifting optical fibers that extract the light in the x and y directions. And one face is painted with a layer doped with lithium-6. So when the inverse beta decay reaction occurs, the positron is seen in the plastic scintillator: light is emitted and collected by the fibers. And the neutron diffuses and has a good chance to be captured on the lithium-6, producing an alpha and a triton, very different particles interacting in the detector. And you see that they have a huge separation between neutrons and positrons. So, for instance, accidental gamma-plus-gamma coincidences won't work here; they will be cut online. So they have several powerful handles, and you see also the topology: this is the positron and the neutron energy deposits in nearby cells. If you have one here and one over there, you know it's an accidental, so topological cuts are also possible. So they are working on that, and the main motivation is the search for these sterile neutrinos, but this is clearly one technique that could be used to develop further small detectors close to the surface for nonproliferation.
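Just to illustrate the kind of topological cut mentioned for such a segmented detector, here is a toy selection that keeps a prompt-delayed pair only if the neutron-capture cube sits next to the positron cube; the indexing scheme and the one-cube threshold are made up for the example, not the experiment's actual criteria:

```python
from dataclasses import dataclass

@dataclass
class Hit:
    x: int  # cube index along x
    y: int  # cube index along y
    z: int  # cube (plane) index along z

def are_neighbours(prompt: Hit, delayed: Hit, max_steps: int = 1) -> bool:
    """Keep the pair only if the delayed (neutron-capture) cube is within
    `max_steps` cubes of the prompt (positron) cube in every direction."""
    return (abs(prompt.x - delayed.x) <= max_steps and
            abs(prompt.y - delayed.y) <= max_steps and
            abs(prompt.z - delayed.z) <= max_steps)

pairs = [
    (Hit(3, 4, 10), Hit(4, 4, 10)),   # adjacent cubes: plausible IBD candidate
    (Hit(3, 4, 10), Hit(9, 1, 2)),    # far apart: likely an accidental coincidence
]
for prompt, delayed in pairs:
    tag = "keep" if are_neighbours(prompt, delayed) else "reject as accidental"
    print(f"prompt {prompt} / delayed {delayed}: {tag}")
```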
Okay, I will stop there. So we have seen many things about neutrinos: they connect many different topics, from nuclear decays up to cosmology. We are now entering, as I have shown with the state-of-the-art reactor experiments, a very high precision era in the measurements, that was the measurement of the last mixing angle, and the increased precision in the prediction led to this so-called anomaly. So we have a mature detection technology that allowed first experiments motivated by the surveillance of reactors for nonproliferation, and there is quite a bunch of effort in various countries now to go further, especially with the new detection technologies I have shown. And the last piece of information: while we are working on all this input from the nuclear databases, the comparison of measurements with absolute predictions is now reaching a high level of accuracy, and that provides a quite sensitive probe of the quality of nuclear data. We have identified some key issues, and there are experimental programs and collaborations with theoreticians to make things move. And I guess it would be useful to many people to have these corrected data. And whether the prediction is right or wrong, whether the sterile neutrino is there or not: within the next three years we will have five or six different experiments in the world looking at the spectra very close to a reactor. So either we will see clear patterns of oscillation, that's a Nobel Prize, the sterile neutrino is there, or we don't see anything, but then we have at least a measurement of the shape of the spectrum and we get new input for this accurate prediction. Thank you.