During the coffee break we'll have the first poster session. Although the weather is not so nice, we're still having the coffee break on the terrace (anyway it's covered), and the posters will be on the boards nearby; you will see them. Okay, we have the fourth lecture by Piero Ullio about cosmic rays. So let me start by wrapping up briefly what I was rushing through in the last lecture. The last lecture got a little bit technical, and I'll try to keep this one a little more physical. What we were discussing yesterday is that we have a framework to describe the propagation of cosmic rays in the galaxy, and this framework carries physical insight through some effective features that we imprint into our propagation equation, which is of the kind I wrote yesterday. In practice, what you really do is take all these features together, you have a set of parameters, and you fit your parameters to the observables. I was trying to sketch yesterday that there is indeed some physical insight. For instance, in the limit in which spatial diffusion matters, we defined a timescale for diffusion that, in the simple model I explained yesterday, goes like the overall scale of the galaxy squared divided by twice the diffusion coefficient, which is some increasing function of rigidity. Why is it an increasing function of rigidity? Well, you see, if it is, then the diffusion timescale for confinement gets smaller for larger rigidities, and that's natural, right? A particle with a larger Larmor radius finds it easier to escape from the galaxy.
If instead we put in some convection effect which is, say, gluing the cosmic rays to the plasma, and the plasma has some overall bulk motion, which we sketched yesterday through some convective velocity U, then you have a timescale for convection which, looking back at the expression I wrote yesterday, just scales with the confinement scale length divided by twice this U parameter, okay? And this is a bulk effect which is likely not to depend strongly on energy. So when I compare this one to this one, it may be the case that at low energy the convection timescale is smaller than the diffusion timescale, so convection is the dominant effect, while at high energy there is a turnover and diffusion matters most. And then what I was showing yesterday is this plot, the secondary over primary ratio, and the secondary over primary ratio is sort of our cosmic-ray clock, okay? So we can try to match these timescales on that plot. At high energy you see effectively the scaling of the confinement time, the diffusion time, decreasing toward larger energies, while at low energy you have some kind of flattening that you can, for instance, describe with a convective effect taking over; or actually, going back to the diffusion equation, most solutions would use a reacceleration term that likewise becomes more efficient than spatial diffusion at low energy. And then the game is what? The game is to do this clock setting using different clocks. The different clocks one has in mind are the different ratios of secondary over primary: I use this clock here, for instance, to translate the information in my effective equation into parameters of the diffusion coefficient, and then I check that the same kind of effect works for this other set of secondary over primary ratios.
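The two escape timescales just discussed can be put side by side numerically. This is only an illustrative sketch: the values of D0, delta, H and U below are assumed order-of-magnitude numbers, not the fitted parameters from the lecture.

```python
# Compare tau_diff = H^2 / (2 D(R)), with D(R) = D0 * R^delta (R in GV),
# against the energy-independent convection time tau_conv = H / (2 U).
# All parameter values here are assumed, illustrative choices.
KPC_CM = 3.086e21
YEAR_S = 3.156e7

H = 4.0 * KPC_CM      # halo half-height, cm
D0 = 1e28             # diffusion coefficient at 1 GV, cm^2/s
delta = 0.5           # slope suggested by boron over carbon
U = 2e6               # convective wind speed, 20 km/s in cm/s

def tau_diff(R_GV):
    """Diffusive confinement time, decreasing with rigidity."""
    return H**2 / (2.0 * D0 * R_GV**delta)

def tau_conv():
    """Convective escape time, independent of rigidity."""
    return H / (2.0 * U)

for R in (1.0, 10.0, 100.0):
    print(R, tau_diff(R) / YEAR_S, tau_conv() / YEAR_S)
# tau_diff shrinks at large rigidity (larger Larmor radius, easier escape),
# while tau_conv is flat, so convection can only dominate at low rigidity.
```

With these numbers convection wins below a few GV and diffusion takes over above, which is the turnover sketched on the blackboard.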
So that's what we are going to do in the near future. I cannot show you the plot analogous to this one for this set of elements, just because these elements, being heavier, are tougher to separate cleanly in a magnetic spectrometer; AMS is doing that. But there's another ratio of a secondary component to a primary component which we know, which is the one imprinted in antiprotons. We believe that the bulk of the antiprotons in this plot are a secondary component. As you know, there's not much antimatter in the universe, at least not in our visible universe. There's a component in cosmic rays at roughly the level of 10^-4 compared to the proton flux; that's the next slide. But where does that come from? Well, it comes from a process that is not perfectly analogous to the one we had before. It's a cousin, because it's still a process in which a primary proton hits the interstellar medium, but then you don't have a break-up of the primary into the secondary; rather, you have production of a secondary antiproton on this background. For instance, the process would go like proton plus proton into one antiproton plus three protons, to conserve baryon number. Okay. So it's a process in which, since this is inelastic scattering, there's a big reshuffling of energies. In particular, it is kinematically suppressed to generate antiprotons at very low energy, and there is essentially a peak in the antiproton production around one GV. Lower than that is kinematically disfavored. And that's about the peak you see here. This is after propagation: you have an injection spectrum that peaks around here and then gets depleted at low energy, while at high energy this is supposed to be the secondary source reshuffled by the propagation of the secondary.
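The kinematic suppression at low energy can be made concrete with the threshold of the reaction quoted above. A minimal check, using only the invariant s for a beam proton on a proton at rest:

```python
# Threshold for p + p -> p + p + p + pbar (baryon number conserved).
# With a target proton at rest, s = 2 m E_lab + 2 m^2, and at threshold
# s must reach (4 m)^2, so E_lab >= 7 m, i.e. a kinetic energy of 6 m.
m_p = 0.938  # proton mass, GeV

E_lab_threshold = (16 * m_p**2 - 2 * m_p**2) / (2 * m_p)  # = 7 * m_p
T_threshold = E_lab_threshold - m_p                        # = 6 * m_p
print(T_threshold)  # about 5.6 GeV of beam kinetic energy is required
```

So antiprotons cannot be produced at all below a beam kinetic energy of about 5.6 GeV, and the secondaries that are produced inherit a spectrum that peaks near a GV, exactly the feature described above.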
So the primary component is, as I said, the incident proton flux. When I take the ratio of antiprotons over protons, this is the plot that AMS released as preliminary last April. Actually, this result hasn't been published yet, so probably they are collecting more statistics or refining the analysis. As I told you, the antiproton over proton ratio is at the level of 10^-4. But you see this plot looks kind of different from the previous one. There you had a clear decrease in the ratio; here it's compatible with being flat. So there was some excitement related to this, in particular with the way they presented this result at that conference. The only slightly puzzling thing is that, as I said, this process is not really the fragmentation process we were discussing before: it has a more complicated kinematics. Also, the proton flux is not a simple power law even in this energy range; it's roughly a power law going like E to the minus 2.7, and if both fluxes went exactly like that, the ratio would just be flat. Don't look at this low-energy part, which just reflects the fact that we are measuring these fluxes inside the heliosphere, so there is some reshuffling at low energies. But this high-energy part here is sort of compatible with being flat, although if you look in finer detail there is some structure, with a hardening of the spectrum. So once one takes all these issues into account, you get a prediction for this ratio in models which nicely reproduce the boron over carbon, our cosmic-ray clock. So you have a fiducial model in which the ratio would tend to go down, but you can reshuffle this fiducial model in such a way that it can even be compatible with a flat behavior. We are here at energies around a hundred GV, and any reasonable model really starts to bend over here. So if we did find that the antiproton ratio keeps flat, that could be a signal for some kind of primary antiproton component.
So that's the question mark. And this question mark, as you heard from Tracy, may have an answer in a primary dark matter component, in the sense that dark matter is democratic: it would generate the same amount of protons and antiprotons. So even a tiny signal, compared to the proton background, is enough to reach the level of 10^-4 and show up as a little peak here. That we will see. Then I want to come to this other plot that I told you generated some excitement recently, well, a few years back now. And for this I have to introduce another timescale. The timescale that matters for the electron-positron business is not only the ones I've written on the blackboard so far: there are energy losses which get into play. So the propagation equation I have to solve is the one I wrote yesterday plus an extra term, the term telling you that leptons like to lose energy. And for this continuous energy-loss term, well, we learned what the main effects on high-energy electrons are: inverse Compton and synchrotron, which, as I was telling you, are the same process from the point of view of QED. And if you remember, they both scale with the energy of the electron squared, so there is a loss term with some coefficient in front that scales like E squared; for short, I just indicate it as b(E). I can again work in a simple setup in which I take this to be a function of energy only, not of position, and write the Green function for this equation. In particular, it's more convenient to write the Green function for the combination b(E) times N. And that just reads: G(x, E; x0, E0) goes like 1 over (4 pi lambda squared) to the three halves, times the exponential of minus (x minus x0) squared over 4 lambda squared.
So it's roughly speaking in the same form, where you have to recognize what the propagation length is, and the propagation length in this case is given by a combination of the two competing effects, spatial diffusion and energy loss: lambda squared is the integral between E and E0, in dE', of the diffusion coefficient D(E') divided by the energy-loss term b(E'). Okay, so the solution is just 1 over b(E), times the integral in d3x0 dE0 of the Green function times the source function. And what you recognize is that on top of the diffusion timescale there is a timescale for energy losses, which goes roughly like E divided by the modulus of b(E). You combine this with the diffusion coefficient and you find a length squared. Now, with b(E) going like E squared, the loss timescale scales like E to the minus one, while the diffusion timescale, as we found yesterday, scales like E to the minus delta, with delta of order 0.5 from boron over carbon. So you see, the diffusion timescale decreases less rapidly than the loss timescale. Then there are two regimes: a low-energy regime in which the fastest timescale is the diffusion timescale, so at low energy you have diffusion of e+ and e-, while at high energy you have energy losses of e+ and e-. So at low energy the electrons and positrons propagate quite far from the sources; at high energy they lose their energy close to the sources. And of course what's low and what's high depends on how efficient the ambient energy losses are and how large the diffusion coefficient in your model is. So then I can solve that equation, plugging in the Green function.
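Where "low" and "high" sit can be sketched numerically. The loss coefficient b0, D0 and H below are assumed illustrative values, only meant to show the two regimes and their crossover:

```python
# Lepton timescales: t_loss = E / b(E) with b(E) = b0 E^2, so t_loss = 1/(b0 E),
# against t_diff = H^2 / (2 D0 E^delta).  Parameter values are assumed.
KPC_CM = 3.086e21

b0 = 1e-16              # loss coefficient, GeV^-1 s^-1 (synchrotron + IC)
D0, delta = 1e28, 0.5   # diffusion coefficient at 1 GeV, cm^2/s
H = 4.0 * KPC_CM        # halo half-height, cm

t_loss = lambda E: 1.0 / (b0 * E)                # scales like E^-1
t_diff = lambda E: H**2 / (2.0 * D0 * E**delta)  # scales like E^-delta

# Crossover: 1/(b0 E*) = H^2/(2 D0) * E*^-delta
# =>  E* = (2 D0 / (b0 H^2))^(1/(1-delta))
E_star = (2.0 * D0 / (b0 * H**2)) ** (1.0 / (1.0 - delta))
print(E_star)  # a couple of GeV for these numbers
# Below E*, diffusion is the faster timescale (leptons travel far);
# above E*, losses win and e+/e- stay close to their sources.
```

The crossover moves around with the assumed b0 and D0, which is exactly the statement that what is "low" and "high" energy depends on the ambient losses and the diffusion coefficient.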
Okay, you have that n(E) scales roughly like the source function Q(E) times a timescale. Suppose I solve this in the high-energy regime, so the timescale I have to put there is the timescale for energy losses. Then I have to normalize to get the dimensions right, so I have to divide by a length scale, and actually solving the equation you find that the length scale is this twice lambda that I wrote there. So n(E) is of the order of Q(E) times t_loss, divided by twice lambda, and lambda is of the order of the square root of D(E) times t_loss. So when I compute how this scales in energy, I have a scaling E to the minus alpha from the source function of, say, this electron component, then a scaling E to the minus one from the energy losses, and in the denominator the square root of E to the delta times E to the minus one. So the overall scaling is minus alpha, minus one, minus one half times (delta minus one): that is, E to the minus alpha minus one half minus delta over two. Okay. I wrote Q(E) because this works for a primary component, right? So this works for electrons, for what is accelerated in the same environment where protons are accelerated; I'll try to come to that shortly. What about the positrons? Okay. Again, we don't like too much to have primary antimatter. So for the positrons, the main source is again, at least at first, connected to a primary source hitting the interstellar medium, which generates a fragmentation process that at the end of the day produces a charged pion, which likes to decay into a muon plus stuff, so at the end of the day into a positron plus stuff. Okay.
So the source for e+ is something that goes proportionally to the primary component, the protons: there is this effective conversion of protons, at the end of the day, into positrons following this chain. And careful: the relevant quantity is not the proton source but the propagated cosmic-ray protons. So here I don't have to put Q(E) of the protons, I have to put the propagated proton flux. Okay. If we neglect the energy dependence of the cross section, which over a small energy range is roughly okay, that is something that scales like the proton source times the timescale for proton propagation, which is mainly diffusive. So the positron source scales like the energy spectrum of protons: E to the minus alpha_p minus delta. Okay. So when I use this to find the equilibrium number density of positrons after propagation, it's the same formula as before, except that I have to substitute the alpha here with alpha_p plus delta. So this goes like E to the minus alpha_p, minus delta, minus one half, minus delta over two. So if this is the scaling for the electrons, where I put in an alpha of the electrons, I take the ratio between the two: the number density of positrons divided by the number density of electrons. And this scales like E to the plus alpha of the electrons, minus alpha of the protons, minus delta: the common part in the denominator cancels, and I am still left with a minus delta. So you see, if the spectral index for injection of electrons roughly matches the spectral index for injection of protons, which is roughly what we expect, then we expect this ratio of a secondary positron component over a primary electron component to scale down in energy like E to the minus delta.
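The index bookkeeping above is easy to get wrong on a blackboard, so here it is written out explicitly. The numerical values of the injection indices are just illustrative assumptions:

```python
# Spectral-index bookkeeping in the loss-dominated (high-energy) regime:
# n(E) ~ Q(E) * t_loss / sqrt(D(E) * t_loss), with Q ~ E^-alpha_src,
# t_loss ~ E^-1 and D ~ E^delta.
def propagated_index(alpha_src, delta):
    """Exponent of n(E): -alpha_src - 1 - (delta - 1)/2 = -alpha_src - 1/2 - delta/2."""
    return -alpha_src - 1.0 - 0.5 * (delta - 1.0)

delta = 0.5
alpha_e = alpha_p = 2.2   # assume electron and proton injection indices match

slope_electrons = propagated_index(alpha_e, delta)
# Secondary positrons: their source is the propagated proton flux,
# already softened by one power of delta.
slope_positrons = propagated_index(alpha_p + delta, delta)

ratio_slope = slope_positrons - slope_electrons
print(ratio_slope)   # -delta: the e+/e- ratio should fall like E^-0.5
```

The common propagation factors cancel in the ratio, leaving exactly the minus delta found on the blackboard, independent of the (matched) injection index.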
And you see that you don't need highly sophisticated software to decide that what PAMELA detected in 2009 is not a ratio which is going down, but a ratio which has an upturn; and then AMS in 2013, updated in 2014, detects this upturn with very high statistics, and maybe there is even evidence for a flattening here. So the hypothesis that is likely to be wrong in this argument is that positrons are just secondary at high energy. You explain this upturn by promoting positrons to a primary component, with a source that is as efficient in producing positrons as electrons. And if that primary component jumps in in this energy range over here, you start to see why the ratio, instead of going down from the 10% level toward the 1% level as I would expect according to that formula, gets replenished in this tail and moves toward the more democratic regime of, I don't know, a 30 to 40% ratio. Okay, again a primary component; well, you heard about this from Tracy. Two hypotheses that matter, many question marks. There is the hypothesis that you have a component from pulsars, which are environments with a high-energy outflow of electromagnetic radiation in a highly magnetized medium: an environment in which pair production of electrons and positrons is possible, and that could be the democratic source of electrons and positrons. The dark matter interpretation is instead kind of contrived, because you really need very efficiently annihilating dark matter to get a source at this level. Okay. In the last half an hour or so, no, it's more like 40 minutes, I want to give you a flavor for, yeah, sorry, I was skipping this one; it doesn't change the picture. So we kind of understood in some detail how you get this one. This one is representative of the grammage in the galaxy.
If you just look at the ratio, you are washing out the information about the primary spectrum, at least in this case in which you have a break-up of the primary component into a lighter species. What about addressing why we have this structure for the primary components? Why do I get power laws for primaries? In that respect, I have to tell you a little bit about what we think is the candidate for cosmic-ray acceleration in the galaxy, at least as regards galactic cosmic rays. Our prime suspect for acceleration in the galaxy is something of this form. This is a picture taken by the Hubble Space Telescope of a so-called supernova remnant: what sits at the site of a supernova after the explosion, and it stays there for a long time. This is supernova 1054, meaning a supernova that blew up in the year 1054 A.D.; the Chinese recorded that there was a bright thing in the sky. This bright thing in the sky has been propagating out of the explosion a shell, which is a so-called shock: a shock wave in a hydromagnetic environment. This is a shock wave pretty much in the same way you can make shock waves in a fluid, although this is more complicated, because it's not just a fluid: there are magnetic fields attached to this fluid over which you have to propagate this shock. What is a shock? A shock is a discontinuity in some property of your ambient medium, right? You have, for instance, some density peak which is propagating out into a smooth medium, and while it propagates out, this peak gets more and more pronounced, until you really get a discontinuity between what stands behind your shock front and what stands in front of it. So you have a thin wall, which we can just assume is infinitely thin, propagating out with some given velocity, some relatively small velocity, right?
You see, this thing has been going on for a thousand years and we still see it relatively close to the source, sweeping up the medium. Suppose we take a patch of this shock front; we can just consider it as a plane shock. It separates the medium into a part called downstream and a part called upstream, and it's really a discontinuity, for instance, in the density. So, calling 1 the upstream and 2 the downstream, there's a mismatch in the density, rho 1 and rho 2, and a mismatch in bulk velocity, U1 and U2. And you shouldn't think that this is some rigid wall, no? This is, to all effects, a collisionless wall: you can have a flow of material through the wall, a flow of momentum and energy through the wall. It's just that you have to balance these macroscopic quantities as you go through the wall. Okay, so you have to impose the so-called jump conditions. For instance, you must conserve mass, and if you compute this conservation of mass in the shock rest frame, what you have to do is just integrate the continuity equation across the shock; it's easy to see that this just gives you rho 1 times V1 equal to rho 2 times V2. So we are talking about a denser environment behind the shock, following up a dilute interstellar medium that is swept by this shock. What is going on is that there is some compression ratio, rho 2 divided by rho 1, which is reflected in the ratio V1 divided by V2. And then, imposing that energy and momentum are conserved, you can write explicitly what this ratio is in terms of an equation of state for the fluid. In particular, you get that it is equal to gamma tilde plus one, divided by gamma tilde minus one plus two over the Mach number of fluid 1 squared. What is this gamma tilde?
Gamma tilde is just in the equation of state for this fluid: P equal to some constant times rho to the gamma tilde. So for instance gamma tilde is five thirds for a monatomic gas. And this Mach number is just the ratio between the fluid velocity and the speed of sound in that system. In general, strong shocks are the ones in which this Mach number is much larger than one. So you see, in this limit you go to just gamma tilde plus one divided by gamma tilde minus one, and if I take it for a monatomic gas, gamma tilde equal to five thirds, this is eight thirds divided by two thirds: so this is four. So you go to a state in which you have a constant compression factor of four. As I told you, this is something that has to do with hydrodynamics, but also with magnetic fields. And then the game is the following. You have a picture like the one I was discussing yesterday for the galaxy: a regular plus stochastic magnetic field on this side, and a regular plus stochastic magnetic field on that side, so you have pitch-angle diffusion going on here and pitch-angle diffusion going on here. And then, since as I told you this is a collisionless shock, particles can go from one side to the other. What I want to compute is: when a given particle goes from the upstream to the downstream and then somehow comes back to the upstream, what has happened to the energy of the particle? Let's work in the rest frame of the shock, putting the origin on the shock, with an axis perpendicular to the shock wall. Sorry, here I used V and U at the same time: this U is what I later call V. So suppose I sit upstream in an environment which is quiet, so in the lab frame I set this velocity to zero. Then in the shock frame I have some velocity V1, which is just minus Vs, and some velocity V2 here which, because of the jump condition, is oriented the same way.
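The jump-condition result quoted above can be checked in a couple of lines; the Mach numbers chosen are arbitrary illustrative values:

```python
# Shock compression ratio from the jump conditions:
# r = (gamma + 1) / (gamma - 1 + 2 / M^2), for upstream Mach number M.
def compression_ratio(mach, gamma_tilde=5.0 / 3.0):
    return (gamma_tilde + 1.0) / (gamma_tilde - 1.0 + 2.0 / mach**2)

print(compression_ratio(2.0))     # modest shock: r stays below 4
print(compression_ratio(100.0))   # strong shock (M >> 1): r -> 4
# For a monatomic gas (gamma = 5/3) the strong-shock limit is
# (8/3) / (2/3) = 4, the constant compression factor used below.
```

The saturation at r = 4 for any strong shock in a monatomic gas is what makes the predicted source spectrum universal later in the argument.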
And I draw it smaller because it's just V1 divided by this compression ratio; okay, so at the end of the day I should draw it a factor of four smaller than this one. Okay, so now I take one particle here upstream which is energetic, so E of the order of p, and it is doing its random motion here. And then it happens that it crosses the shock; it crosses the shock and ends up in the rest frame of the other side of the shock. So I have to take E and make a Lorentz transformation into the downstream, and the energy there is just gamma times one plus beta mu, times E. Here mu is the cosine of the pitch angle with respect to this relative velocity. This works only if I'm below 90 degrees, so for mu between zero and one. And beta is this V1 minus V2 divided by c, and gamma is the corresponding Lorentz factor. Okay, so the particle got over here. It got over here, it diffuses, and then eventually it comes back upstream, so I close the cycle: I go from one to two and from two to one. What happens when I cross back? This is a magnetic mirror, right? It's an environment in which there is no acceleration of the particle, so the energy at the crossing back is just equal to the energy at the crossing in. What I have to compute is just how that energy is reshuffled by the extra Lorentz transformation I have to do on it. So I take the downstream energy and boost it back. The particle enters with some pitch angle, let's suppose like this, whose cosine is now mu prime, so I have a prime here. Beta is just minus beta, and gamma is the same. And here it has to cross back, so the angle has to be between 90 and 180 degrees: I need mu prime between minus one and zero.
Okay, so if I relate the energy of the particle before starting the cycle and at the end of the cycle, that's E upstream minus E, divided by E; I insert that formula over there and I get gamma squared, times one plus mu beta, times one minus mu prime beta, minus one. Then what I have to do is take the average of this gain or loss; I still don't know yet whether the particles gain or lose energy. That is the average of this (E upstream minus E) over E, over mu and mu prime. And I just have to set the probability of having mu and the probability of having mu prime. These are just the probabilities of crossing a wall, so they scale with the pitch-angle cosine: with mu here, and with minus mu prime there. If I normalize them correctly, I actually get a factor of two. So the real thing, for instance for this one, is twice mu, times theta of mu, times theta of one minus mu, where theta is the Heaviside step function. The probability for the other one is minus twice mu prime, times theta of minus mu prime, times theta of mu prime plus one. So what I have to do is fold in these probabilities and integrate this expression over mu and mu prime: an integral in d mu prime between minus one and zero of minus two mu prime, and an integral in d mu between zero and one of two mu, multiplying this stuff, gamma squared times one plus beta mu, times one minus beta mu prime, minus one. Okay. This integral is trivial to do, and since time is short I just give you the answer: this xi, this efficiency of conversion, in the limit of beta much smaller than one, is just equal to four thirds of beta. So you see, there is always an energy gain. Okay. Because this beta is V1 minus V2 over c, and we said V1 is four times V2. So every time you complete the cycle from one to two to one, you get a relative energy gain of this size. Okay. So then what we have to estimate is just the probability of coming back, right?
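The double pitch-angle average can be checked numerically; this is a minimal midpoint-rule sketch (the grid size n is an arbitrary choice), confirming that the small-beta limit of the mean gain is (4/3) beta:

```python
# Average gamma^2 (1 + beta*mu)(1 - beta*mu') - 1 over the flux-weighted
# pitch-angle distributions P(mu) = 2*mu on [0, 1] and P(mu') = -2*mu'
# on [-1, 0], by a simple midpoint rule.
def avg_gain(beta, n=400):
    gamma2 = 1.0 / (1.0 - beta * beta)
    total = 0.0
    for i in range(n):
        mu = (i + 0.5) / n            # midpoints of [0, 1]
        for j in range(n):
            mup = -(j + 0.5) / n      # midpoints of [-1, 0]
            weight = (2 * mu) * (-2 * mup) / (n * n)
            total += weight * (gamma2 * (1 + beta * mu) * (1 - beta * mup) - 1)
    return total

beta = 0.01
xi = avg_gain(beta)
print(xi, 4 * beta / 3)   # agree up to O(beta^2) corrections
```

The result is positive for any beta between zero and one: every completed cycle is a net energy gain, which is the whole point of first-order Fermi acceleration at the shock.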
And this probability of coming back: what I need is the probability of not escaping before closing this loop, relative to the overall wall-crossing probability. The wall-crossing probability is again a projection of some isotropic density n of cosmic rays here. Cosmic rays are isotropized, and they are relativistic particles, so the flux is like that, and what you have to do is just project it onto the wall. When you do this, you also have the azimuthal variable, so you get a two pi from there and a factor of one half from the integral of mu between zero and one, and you end up with this n times c divided by four. The probability of escaping: the ones that don't come back are just the ones drifted away by this small bulk velocity V2, so the flux of drifted-away ones is just n times V2. So if I compute the escape probability, which is the ones I lose divided by the ones that are crossing, it's just n V2 divided by n c over four: so it's just four V2 divided by c. So then you see, you are in a condition that looks like this. You have found an engine that takes a particle with some initial energy E naught, and in one cycle boosts this E naught by a fraction (E minus E naught) over E naught, which is this efficiency xi. And the probability of completing this cycle is one minus the escape probability that I computed. This is a particularly simple case, because you see, this gain in energy and this probability of completing the cycle do not depend on the energy of the particle itself. So it's the standard recipe for stochastic acceleration: at each cycle you have E at n plus one, which is E at n plus xi times E at n, and so E at n is just (one plus xi) to the n, times the initial energy E naught.
So if you have to go up to energy E_n, that is done in a number of cycles n equal to the log of E_n over E naught, divided by the log of one plus xi. What is the fraction of particles in your environment with an energy larger than E_n? That is computed by taking those particles that reach E_n and then applying this probability that they stay inside the system. So that's just equal to the sum, over some index m from n to infinity, supposing this is an acceleration that goes on forever, of (one minus P escape) to the power m. This is (one minus P escape) to the n, times the sum from m equal zero to infinity of (one minus P escape) to the m, which is the sum of a geometric series with argument less than one: so this is just one over P escape. So this is this piece, (one minus P escape) to the n, divided by P escape. What is (one minus P escape) to the n? With n as above, it is (one minus P escape) to the power of the log of E_n over E naught, divided by the log of one plus xi. Then I can use the identity that some number raised to the log of some other number is equal to the other number raised to the log of the first number, and exchange this with that. So I get E_n over E naught to a power minus, let's call it, beta. So I get a power law in E_n, with this beta being, as I read from there, minus the log of (one minus P escape) divided by the log of (one plus xi). Did I get it right? Yes, I even got it right. So this we can expand in the limit of P escape much smaller than one and xi much smaller than one, which is the limit I'm talking about here, since V1 and V2 are much smaller than the speed of light; so that's okay. To first order, the numerator is just P escape, the two minuses cancel, and in the denominator I just get xi.
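The cycle just described can also be simulated directly. This is a toy Monte Carlo, not anything from the lecture: the values of xi and p_esc are arbitrary illustrative choices, and the spectral slope is read off from the survival fractions after a fixed number of cycles.

```python
import math
import random

# Toy Monte Carlo of the acceleration cycle above: each particle starts at
# E0, gains a factor (1 + xi) per completed cycle, and escapes with
# probability p_esc at each cycle.
random.seed(1)
xi, p_esc, E0 = 0.1, 0.1, 1.0

cycles = []                          # completed cycles before escape
for _ in range(200_000):
    k = 0
    while random.random() > p_esc:   # survives and completes another cycle
        k += 1
    cycles.append(k)

# Predicted integral-spectrum index: N(>E) ~ (E/E0)^(-beta)
beta_pred = -math.log(1.0 - p_esc) / math.log(1.0 + xi)   # ~ p_esc / xi

# Measure the slope between E_10 = E0 (1+xi)^10 and E_30 = E0 (1+xi)^30,
# i.e. from the survival fractions after 10 and 30 cycles.
n_tot = len(cycles)
f10 = sum(k >= 10 for k in cycles) / n_tot
f30 = sum(k >= 30 for k in cycles) / n_tot
beta_mc = math.log(f10 / f30) / (20.0 * math.log(1.0 + xi))
print(beta_pred, beta_mc)
```

An energy-independent gain plus an energy-independent escape probability is all it takes: the simulated population comes out distributed as a power law with the analytic index.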
So then there is some magic here, because if indeed, as I was saying before, this is a stochastic process in which the energy gain and the escape probability do not depend on energy, I find at the end of the day a power law for the integrated spectrum of particles above a given energy, and hence a differential flux in energy that goes like E to the minus beta minus one. This is the one I was calling E to the minus alpha before. Okay, so I'm getting power laws, roughly as I have in experiments. And then, if indeed this is the engine which gives the power laws that after propagation I measure in cosmic rays, we can estimate what kind of power law we get. The alpha we get is this P escape divided by xi, plus one. So it's four V2 over c, divided by four thirds times (V1 minus V2) over c, plus one. So this is three divided by (V1 over V2 minus one), plus one. And then, you remember, we can go to the limit of strong shocks, where V1 over V2 goes to the compression factor r, and for a monatomic gas this compression factor was four. So at the end of the day the first piece goes to one, and the full thing goes to two. So in this rigid scheme I'm supposed to have sources with a power law like energy to the minus two. If you remember what I told you about the proton flux: after propagation it had a scaling of the order of energy to the minus 2.7, roughly, with some discrepancies on top of that. And this, we said, is the source power times the time of diffusion, so something scaling like E to the minus alpha minus delta. So now, with boron over carbon giving a delta of order 0.5, the alpha I need from observation is not too far from this two: it's 2.1, 2.2 maybe.
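Putting the pieces of the index calculation together, as a small sanity check:

```python
# Source index from the quantities computed above:
# alpha = P_esc / xi + 1 = (4 V2/c) / ((4/3)(V1 - V2)/c) + 1 = 3/(r - 1) + 1,
# with r = V1/V2 the shock compression ratio.
def source_index(r):
    return 3.0 / (r - 1.0) + 1.0

print(source_index(4.0))   # strong shock, monatomic gas: alpha = 2.0
# After propagation the flux steepens by delta ~ 0.5, so matching the
# observed E^-2.7 protons needs an injection alpha around 2.2, slightly
# softer than this rigid-scheme value of 2.
```

The gap between the rigid prediction of 2 and the 2.1 to 2.2 inferred from observation is exactly the discrepancy discussed next.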
This probably has to do with the fact that this game of accelerating particles does not really work with a totally energy-independent gain and a totally energy-independent escape probability. So you can tilt the spectrum a little bit and match what we observe. So this object up there is really a prime candidate for the sources of galactic cosmic rays. And what is nice is that even from the energetic point of view, things turn out to be okay. So let me give you a rough power budget: supernova remnants versus cosmic rays. What's the power budget in cosmic rays? The cosmic rays have an energy density of around 0.5 electron volts per cubic centimeter. They stay in a confinement volume — this rough cylinder I sketched before — whose volume is the area of the disk times the height; you plug in numbers and you get of the order of 8 × 10^67 cubic centimeters. And they stay in this volume for a confinement time, for which I take for instance the diffusion time, of the order of 10^7 years. So the luminosity in cosmic rays that I need is this cosmic-ray energy density times the confinement volume divided by the confinement time scale, and the number you get out of this is of the order of 2 × 10^41 ergs per second. If you have done any course in high-energy astrophysics, that number should ring a bell. Because supernovae each typically release a kinetic energy of the order of 10^51 ergs, and they do it in the galaxy at a rate of 2 to 3 per century — that's the average rate. So if I compute the luminosity in kinetic energy — this 10^51 ergs divided by 100 years, times, say, 2.5 — the number you get is of the order of 8 × 10^41 ergs per second.
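Both luminosities follow from the round numbers quoted above; a back-of-the-envelope sketch (all inputs are the order-of-magnitude values from the lecture):

```python
# Order-of-magnitude power budget with the round numbers from the lecture.
EV_TO_ERG = 1.602e-12
YR_TO_S = 3.156e7

# Cosmic-ray side: L_CR = u_CR * V_conf / tau_conf
u_cr = 0.5 * EV_TO_ERG        # energy density ~0.5 eV/cm^3, in erg/cm^3
v_conf = 8e67                 # confinement volume, cm^3
tau = 1e7 * YR_TO_S           # diffusive confinement time ~10^7 yr, in s
l_cr = u_cr * v_conf / tau
print(f"L_CR ~ {l_cr:.1e} erg/s")   # of order 2e41 erg/s

# Supernova side: L_SN = E_kin * rate
e_kin = 1e51                  # kinetic energy per supernova, erg
rate = 2.5 / (100 * YR_TO_S)  # ~2.5 per century, in 1/s
l_sn = e_kin * rate
print(f"L_SN ~ {l_sn:.1e} erg/s")   # of order 8e41 erg/s
print(f"required efficiency ~ {l_cr / l_sn:.2f}")  # of order 20%, as quoted
```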
So you see these two numbers: the luminosity I need to inject in cosmic rays, and the luminosity I have at my disposal in supernova remnants in kinetic energy. If roughly 20% of the latter goes into accelerating cosmic rays, I have my source of cosmic rays. The last thing which is beautiful here is this number. Because now what you can do is to estimate these same objects as the source of turbulence. These objects are not only accelerating particles, they are perturbing the environment in such a way that they imprint stochastic Alfvén waves on it, as this energy flows out from the shell. And there are various ways of computing the power spectrum of these injected Alfvén waves, and what is beautiful is that you have theories in which you indeed match a wave power spectrum which gives the right scaling of the diffusion coefficient — which goes roughly as the inverse of the wave power. So the game, at least for galactic cosmic rays, is sort of closing up. As I told you, there are discrepancies, like the positron flux. The positron flux is not likely to be accelerated in these objects — there are some models in which we cook it up, but it's a little bit stretched — so you need some kind of extra sources. I told you these extra sources are probably pulsars; we see lots of pulsars in gamma rays, so it's conceivable that that works as well. The good news is that, at the level of the sketch I drew on the blackboard, it all works very nicely. The other good news is that when you go into the details of checking what's going on, it doesn't work — good news in the sense that there is still a lot of work to be done. And I think it's also a fun field because data are now coming in continuously. So you have these breaks in the proton flux: that's not something I'm explaining in this context, so I have to cook up something else.
And I have to understand the puzzle of extragalactic cosmic rays. This acceleration process cannot keep its efficiency up to arbitrarily large energy, right? Because these are magnetic mirrors: you eventually reach a Larmor radius which is larger than the size of the accelerator, and then the thing cannot work. And then you have an estimate of the magnetic field tied to these objects; you can translate it into a Larmor radius and back into an energy, and you find energies of the order of 10^15 electron volts. For sure, supernova remnants are not accelerating the 10^20 electron volt cosmic rays we are seeing at the upper end of the energy scale. So there must be other beasts around in the universe which do that — gamma-ray bursts, active galactic nuclei, who knows? We still don't know in detail. So I think the main patches of the framework are there. The details are for you to fill in, I'm sure.
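The maximum-energy argument is a Hillas-type estimate: acceleration stops roughly when the Larmor radius reaches the accelerator size. A sketch with illustrative numbers — the microgauss field and parsec scale are typical supernova-remnant values I am assuming, not quoted in the lecture:

```python
# For a relativistic proton, r_L [cm] ≈ E[eV] / (300 * B[G]).
# Requiring r_L < R gives E_max ≈ 300 * B[G] * R[cm] in eV.
PC_TO_CM = 3.086e18

def e_max_ev(b_gauss, r_cm):
    """Maximum energy (eV) for which the Larmor radius fits inside a region of size r_cm."""
    return 300.0 * b_gauss * r_cm

b = 3e-6            # few-microgauss field (illustrative assumption)
r = 1.0 * PC_TO_CM  # parsec-scale shell (illustrative assumption)
print(f"E_max ~ {e_max_ev(b, r):.1e} eV")  # of order 10^15 eV
```

With these inputs the limit lands around 10^15 eV, matching the number in the lecture and falling far short of the 10^20 eV events.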