Okay, hello everybody. I think we're ready to start the next webinar. My name is Joel Jones from the PUCP in Peru, and I will be today's host for the Latin American Webinar on Physics. This is our 20th webinar, and today we're having a very interesting talk by Leonidas Aliaga, who is going to talk about the calculation of the NuMI flux. Before we start, let me remind you that you can ask questions right here: you can go to the Google Q&A, which you can see right here, and you can also send us your questions via Twitter. We also have a new web page, lowphysics.wordpress.com, where you can find all of the previous webinars and the current info. Okay, so I think it is time to start. Let me present Leo. He is a former master's student here at PUCP, and he went on to do his PhD at the College of William & Mary in Virginia. He is actually one month away from defending his PhD thesis, and after that he will move on to a postdoc position at Fermilab. The title of his talk is "Calculating the NuMI Flux". So I will leave you with Leo. Let me unmute you; give me a second. Okay, you're unmuted, Leo, you're ready to start.

Great. Can you hear me? Can you see my presentation?

Yes, everything is perfect.

Well, to start, thank you all, and thanks to everybody at the webinar for having me and inviting me; this is a great effort. My talk today is about calculating the NuMI flux. It's a big part of the thesis that I am going to defend in two weeks. The full version of this talk is long and explains many details; it's the same talk that I gave at Fermilab about three months ago. I moved many things to the backup slides, and if you are interested in something I mention, we can go to the backup slides if there is time.

First we should ask: why is predicting the NuMI flux important? Consider the neutrino oscillation strategy. I don't want to go through every detail, just to emphasize that basically we have two separated detectors sharing the same neutrino beam. The detectors can be similar to each other or not, on the axis of the neutrino beam or off axis, etc. The idea is that one detector is very close to the production point of the neutrinos, what we call in our jargon the near detector, and the other is sometimes a few kilometers, sometimes hundreds of kilometers, away from the production point: the far detector. In each detector we basically see events: we count reconstructed neutrinos and do our best. By relating the events we see in each detector with the other parameters involved, like the acceptance, how well we know the flux at each detector, the cross section, etc., we can get the oscillation probability, as in the formula you see there. That is just a rough approximation: it assumes, for instance, that our detectors are the same, or that the cross section cancels, etc.
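To make that relation concrete, here is a minimal schematic version of the event-rate relation the speaker alludes to; the notation is illustrative, not the exact formula on the slides:

```latex
% Event rate in each detector d (near or far), schematically:
\[
N_d(E) \;=\; \Phi_d(E)\,\sigma(E)\,\epsilon_d(E)\,T_d\,P_d^{\mathrm{osc}}(E),
\]
% so the oscillation probability follows from the far/near comparison:
\[
P_{\mathrm{far}}^{\mathrm{osc}}(E) \;\approx\;
\frac{N_{\mathrm{far}}(E)}{N_{\mathrm{near}}(E)}\,
\frac{\Phi_{\mathrm{near}}(E)\,\epsilon_{\mathrm{near}}(E)\,T_{\mathrm{near}}}
     {\Phi_{\mathrm{far}}(E)\,\epsilon_{\mathrm{far}}(E)\,T_{\mathrm{far}}}.
\]
% Phi is the flux, sigma the cross section, epsilon the acceptance, and T the
% number of targets. The cross section cancels only if the detectors are
% identical, and the flux ratio Phi_far/Phi_near only partially cancels.
```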
But there are challenges here, not problems, I mean, challenges. The first one: the flux only partially cancels; I'm going to come back to that in the next slide. And the detectors are slightly different, and even for slightly different detectors the cross sections don't necessarily cancel. Slide number four: I borrowed this plot from MINOS. MINOS needs the ratio of the flux they see at the far detector over the flux they see at the near detector, and they can use a simulation for that. The ratio from the simulation is shown there by the empty boxes, with the best value as the dots. But when they make an effort to tune the Monte Carlo that they use, with a technique they have, using different configurations of NuMI, etc. (that's another topic I could point to), they basically get a different ratio. That means it really matters that we understand how the flux is produced in terms of the hadrons in the hadronic cascade that lead to the neutrinos that pass through MINOS in this case. As you can see there, it's not trivial: to start with, the uncertainty is reduced by a lot, but the central value of the ratio also changes, and that would change the conclusions with respect to the oscillation parameters they are trying to deliver.

Slide number five. The other side is the importance of the flux for the neutrino cross sections, particularly in the region between 100 MeV and 20 GeV. That's an important region now because many experiments are running in that energy range. MINERvA, for instance, my experiment, is running there, with the peak at 3 GeV, and at 6 GeV now that NuMI is running in the current configuration; NOvA, which is an off-axis experiment, has its neutrino flux peaked at 2 GeV. So it's a very important region. But exactly that region is, how should I say this, a total mess in terms of how well we know the cross section. As you can see there, I took these plots for neutrino and antineutrino; the left and right sides are the total cross sections, also divided into the different channels. This region of a few GeV is exactly the transition from the quasi-elastic scattering that we know relatively well to deep-inelastic scattering, with a big contribution from resonant production. And you can see that some of the data even disagree with each other. This is important: the cross section basically helps us understand what is happening inside the nucleus, for instance when the final-state particle is ejected from the nucleus. That matters, for instance, for the coming experiment called DUNE, also an oscillation experiment, a big project, which will consist of liquid argon. Argon is a complicated nucleus, and we need to understand it now for that future experiment. We always wonder if the disagreements between these cross-section measurements come from a different understanding of what they are measuring: perhaps some experiment is not accurate, or perhaps it's the way they understand the flux. Slide number six: a few words about the experiment I am working on, even though this work is not related only to MINERvA.
MINERvA in particular is a neutrino cross-section experiment covering exactly that region, with high statistics of neutrinos and antineutrinos. We measure the inclusive channel, but also the exclusive channels that you saw on the previous page, and we place different nuclear targets in the upstream part of our detector, which means we can measure the nuclear dependence of our cross sections. Roughly speaking, simplifying a lot, the formula we use at the end is the one in the box: the cross section is basically the number of events that we see, reconstruct, and unfold in our detector, divided by the flux, the number of neutrinos that pass through the detector, and by the number of targets, etc. That means every uncertainty in the flux goes directly into the cross-section uncertainty. For instance, one recent MINERvA paper, well, two years old, on a particular channel, coherent charged pion production (the citation is there if you are interested in reading it), has exactly this problem with the uncertainty that comes from the flux. On the left side you can see the absolute cross section we measured as a function of the neutrino energy. Our results are interesting, but look at the right side: this is the systematic uncertainty, in cross-section units. The other uncertainties are there, but I want to call your attention to the flux uncertainty. It is very big; practically, the uncertainty is dominated by the flux. If we can reduce that, or understand it, or do something about it, that is going to impact directly the data that we release. That's why the flux is so important for MINERvA, but not just for MINERvA. I just mentioned that oscillation experiments also need to understand the flux, because the far-over-near ratio can change. In general, there are many experiments that share the NuMI beam line. There are on-axis experiments like MINOS and MINERvA, older experiments that ran before, like ArgoNeuT, off-axis experiments like NOvA, and even the Booster experiments: we have another beam line at Fermilab that comes from the Booster at 8 GeV, while NuMI comes from the 120 GeV Main Injector, and they receive a small but, for many reasons, potentially significant amount of off-axis NuMI beam. The idea is that at some point we thought that if we could handle this problem and get a good flux determination, it could be shared with the other experiments as well.

So the flux is important, okay, but why is it so hard to estimate? A few slides about that. The first one is about how to make a conventional neutrino beam. This is a simplified sketch that I drew there. The idea is to use short-lived particles, mesons, and their decays to make neutrinos. We basically need a few components for that: a very intense proton beam collided with a target, usually a long target. In NuMI we use 120 GeV protons that we extract from the Main Injector, and we collide them with a graphite target that is about two interaction lengths long. After these collisions we expect particles from the primary interaction, but the secondaries also interact: protons produced in the primary interaction, but also pions and kaons, reinteracting, etc. Some of the particles, especially pions and kaons, are going to leave the target. We place a focusing system; in the sketch it's just one magnetic horn, but we use two, for instance.
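As a minimal numerical sketch of the boxed relation, with every input an invented placeholder rather than a real MINERvA number:

```python
# Sketch: cross section from counted events, and how a flux error propagates.
# All inputs are invented placeholders, not real MINERvA values.

n_events  = 1.2e4    # background-subtracted, unfolded event count
flux      = 2.9e-8   # integrated flux [neutrinos / cm^2 / POT] (placeholder)
pot       = 3.0e20   # protons on target
n_targets = 3.2e30   # nucleons in the fiducial volume

xsec = n_events / (flux * pot * n_targets)   # [cm^2 / nucleon]

# The flux enters as an overall divisor, so a fractional flux uncertainty
# maps one-to-one onto the cross-section uncertainty:
flux_frac_err = 0.08                 # e.g. ~8% near the focusing peak
xsec_frac_err = flux_frac_err        # direct propagation, sigma ~ 1/Phi
print(f"sigma = {xsec:.3e} cm^2/nucleon  +- {100*xsec_frac_err:.0f}% (flux only)")
```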
Some experiments, like the Booster beam line for instance, use one horn; T2K uses three magnetic horns. The idea is that we want to enhance the mesons we want to collect and redirect them, basically canceling the transverse momentum, so that they decay along the beam-line direction and we get more neutrinos, more efficiently. Then we place an extended decay region, what we call the decay pipe in our terms. It used to be vacuum in NuMI; currently it's filled with helium. And we place some absorbers. That's the idea for making a conventional neutrino beam, and it's how NuMI, the Booster experiments, T2K, etc. work. NuMI in particular has been running since 2005; it was constructed before that. From 2005 to 2012 it ran in the low energy mode, which produced the muon neutrino flux spectrum you can see as the blue line in the plot on the left side. Currently it's running in the medium energy mode, where we have a more intense neutrino beam peaked at around 6 GeV. That's what we have in NuMI. In this talk I'm going to focus on the muon neutrinos in the low energy mode, just for the sake of a clearer explanation.

Some details about the NuMI target. As I mentioned before, it's made of graphite: a rectangular graphite rod, segmented into fins. It's around two interaction lengths long, less than one meter in the low energy era of NuMI; in the medium energy mode it's a little bit longer. The real problem here comes from the following: after the first interaction we have a hadronic cascade in the target. It's a busy place once the protons hit the target. Pions, kaons, secondary protons, and other particles are created in cascade in the hadronic interactions. And the problem is that at 120 GeV these interactions are not perturbative QCD; we cannot calculate them, and there are also many degrees of freedom related to the NuMI geometry, etc. The simulation needs to use a model, but we need to be sure that the model reflects what is happening in nature. You saw, for instance, when I showed the far-over-near ratio from MINOS, that when they constrain the model to match what they believe is the real hadron production coming out of the target, the far-over-near ratio changes. That means the model can be wrong by a lot of percent. You can see that on the next slide, slide number 15: when we plot the low energy flux for three different hadron production models (red and blue come from Geant4, black is FLUKA), the spread at the peak can be plus or minus 20 percent, even more. It's a lot, and that's something we need to address. What happens in terms of the hadron production is basically the main focus of the work that I was doing here. The main contribution to the neutrino flux that we have comes from pions, at least for low energy. Those pions are created in primary, secondary, and tertiary interactions, but many of them are created by the primary proton interacting in carbon, and many of them escape the target. Some of them don't; the picture is more complicated. But the idea is that after they leave the target, they are focused, as I mentioned before. The kinematic region of the pions that are focused and create the neutrinos that pass through MINERvA, MINOS, and all of these experiments is the plot shown on slide number 16.
As you can see, the region we are interested in, particularly for the focusing peak, is at small Feynman x values, no more than 0.1. Feynman x is defined as the fractional longitudinal momentum seen in the center of mass. The idea is that using that variable we get some scaling behavior of the invariant cross section; I'm going to mention that a few slides ahead. But that's the region we are interested in: basically small pT, no more than 0.6 GeV/c, and no more than 0.1 in Feynman x. Those are the pions that create our neutrinos. On slide number 17 you can see, for instance (I'm going to skip the right side), on the left side the neutrino components in terms of their parent identity. Most of the neutrinos, as I mentioned before, come from the pi plus: basically from the kinematic region we saw in the previous slide, plus other regions and other interactions, etc., which I moved to the backup slides. It is interesting that above 20 to 25 GeV the kaon parents become dominant; the neutrinos there come basically from kaons. In general, in MINERvA and in the other experiments, we are interested in constraining not just the pions in the peak but the whole region, and the whole region goes from 0 to 120 GeV.

The other important component in NuMI is the focusing. The focusing, as I mentioned before, is composed of two magnetic horns; what is shown there is a transverse view. The way we create the magnetic field that focuses the pions with the particular charge that we want and defocuses the mesons of the opposite charge is the following: we pulse 200 kiloamps of current through the two aluminum horns, in the direction you see there, in order to create a toroidal magnetic field. The conductors are made of aluminum and are not very thick, just a few millimeters. The idea is that every charged particle that leaves the target feels that magnetic field and is deflected; after that, they enter the decay pipe. In terms of the uncertainties in this whole process related to the focusing, we have made many studies. The plot is shown on slide number 19. The uncertainty is relatively small around the focusing peak, which is at about 3 GeV as I mentioned before: around 2 percent there. Most of the focusing uncertainty comes from the falling edge of the focusing peak. I want to mention something here because it's important and we don't mention it enough: this is by construction. When NuMI was constructed, the idea was to have a very small uncertainty around the focusing peak, for the neutrino oscillation studies of that time. It was a big effort by the beam group, who studied from the late 90s how to make an efficient beam with a small uncertainty around the focusing peak. So what is the challenge here? I mentioned that determining the flux is important; what is the challenge? There are two leading causes of uncertainty in the flux prediction. One of them, as I mentioned before, comes from the focusing system. We have studied the associated uncertainty, but we always try to update it and see if the way we simulate and understand the focusing is better. It is basically a geometrical problem in general.
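To get a feel for the numbers, here is a back-of-the-envelope sketch of the focusing principle. The 200 kA current is from the talk; the radii and the one-meter path length are invented placeholders, not the actual NuMI horn geometry:

```python
import math

MU0 = 4e-7 * math.pi          # vacuum permeability [T*m/A]

def horn_field(current_A, r_m):
    """Toroidal field between the inner and outer conductors: B = mu0 I / (2 pi r)."""
    return MU0 * current_A / (2.0 * math.pi * r_m)

def pt_kick_GeV(current_A, r_m, path_m):
    """Approximate transverse-momentum kick for a unit-charge particle:
    dp_T [GeV/c] ~ 0.3 * B[T] * L[m]."""
    return 0.3 * horn_field(current_A, r_m) * path_m

# 200 kA is the horn current quoted in the talk; radii and path length below
# are illustrative only.
I = 200e3
for r in (0.02, 0.05, 0.10):                 # meters from the beam axis
    B = horn_field(I, r)
    print(f"r = {r:4.2f} m : B = {B:4.2f} T, "
          f"dpT ~ {pt_kick_GeV(I, r, 1.0):.2f} GeV/c over 1 m")
```

With these placeholder numbers the kick near the inner conductor comes out at a few tenths of a GeV/c, the same scale as the pT < 0.6 GeV/c region of interest quoted above.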
We see that, with the intense interactions inside the target, the size of the magnetic field, the position of the target with respect to the horns, and the relative position between the two horns that we have, any small mismodeling of the NuMI geometry can lead to a few percent discrepancy in the flux with respect to what is really happening. That's important. In an oscillation experiment those mismodelings could perhaps cancel, but we who want to measure cross sections don't have two detectors; we have just one detector. So those uncertainties, the four or five percent that come from a few millimeters of misalignment or something like that, go directly into the uncertainty on the central value of the cross section that we want to deliver. I have some examples in the backups of how sensitive this is to having a precise geometry, especially the relation between the target and the horns and between the horns.

The other big component, which the rest of this talk is about, is the hadronic interactions. I mentioned that we need a model, and the models can disagree by about 20 percent at the focusing peak, or even more. We need external data to constrain the hadron production; that is basically what we need in order to know our flux. We need two quantities: first, to correct the interaction probability of the hadrons in any material; second, given that an interaction happened, to correct the probability that a hadron is produced in the right kinematic bin. For the first one, we need the total inelastic cross section; for the second one, we need the differential production cross section.

What sort of data is available to correct the hadron production, which is our biggest concern now? We made a survey, and we found basically two kinds of experiments that can help us. There are thin target experiments, which release cross sections: differential cross sections, yields, etc. There are also thick target experiments, which basically collide protons on a long target and usually measure yields. This is the relevant data that we found to correct the hadron production in our beam line. Among the thin target experiments there are many: for instance the old experiments of Bellettini, Denisov, and others, some of them at Fermilab, at CERN, and in Russia. We take advantage of all of these data for the inelastic cross section. But there is some recent data that helps us a lot now. One source is the data released by NA49. NA49, NA61, and those experiments are hadron production experiments, testing things like the quark-gluon plasma, how hadronization works, etc. But when they measure something like proton on carbon, that's exactly what we have. We can use their data at 158 GeV incident proton energy, which is very close to the 120 GeV we have in NuMI. And we have all the other data sets from the experiments at Fermilab, CERN, etc. The idea is that we want to incorporate all of the data that we know of in order to correct the hadron production. Recently, we also got some very interesting data from the MIPP experiment, Main Injector Particle Production, at Fermilab. They took a spare target, one of the targets that was built for NuMI, and measured the hadron production off that target. That's very important data that we also have available. Here it is.
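As a minimal sketch of the first quantity, the attenuation, here is the standard survival probability through a slab. The ~230 mb proton-carbon inelastic cross section is a rough textbook-scale number, not the tuned value; the 95 cm length and graphite density are illustrative, chosen to match the "about two interaction lengths" quoted above:

```python
import math

N_A = 6.02214e23   # Avogadro's number [1/mol]

def survival_prob(sigma_mb, rho_g_cm3, A_g_mol, length_cm):
    """P(no interaction) = exp(-n * sigma * L), with n = rho * N_A / A.
    sigma in millibarns (1 mb = 1e-27 cm^2)."""
    n = rho_g_cm3 * N_A / A_g_mol          # nuclei per cm^3
    sigma_cm2 = sigma_mb * 1e-27
    return math.exp(-n * sigma_cm2 * length_cm)

# Illustrative numbers: ~230 mb for p+C inelastic, graphite density
# 1.78 g/cm^3, a ~95 cm low-energy target.
p_surv = survival_prob(230.0, 1.78, 12.01, 95.0)
print(f"P(proton traverses target without interacting) ~ {p_surv:.3f}")

# A mismodeled sigma changes this attenuation weight, which is exactly the
# first correction described in the talk: w = P_data(survive) / P_MC(survive).
```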
Well, in terms of how bad the prediction of the models is with respect to real data, I have three slides. For instance, the inelastic cross section (there are some details about inelastic versus absorption cross sections that I'm not going to go into here): you can see that the models can disagree completely. All of the data that we placed there for the inelastic cross section of protons, and also some data for neutrons interacting in carbon, basically show a flat distribution as a function of energy. The models do not say that. We correct that; that's my point. In terms of the production of pions: NA49 measured pion production from proton-carbon at 158 GeV, as I mentioned before. This is a busy slide, so let me walk through it. The x-axis is the Feynman x variable; as you remember, the region we are interested in is between 0 and 0.1 in Feynman x. The vertical axis is the transverse momentum; our region of interest goes from 0 to 0.6 GeV/c for the peak, but the whole region contributes to the flux. We placed some lines there as a cartoon: the innermost one encloses the region where we have the most pions. We are trying to show on the same plot the information you saw before about which region we are interested in. The peak of the region is at about 0.2 GeV/c in transverse momentum. I forget the exact numbers for the contours, but they increment, the first curve containing something like 20%, then 30%, 50%, etc.; the idea is that you can relate this plot to the previous slide. At the top of the plot I also placed some approximate neutrino energies: pion decay is just a two-body decay, so we can assign an approximate neutrino energy to each pion parent, and you can see how every region contributes to different neutrino energies. The point is that NA49 gives us good data coverage. It's great, and we are using that data to correct our flux. The other data set, which came out recently, in 2014, is MIPP, as I mentioned before. They measured exactly the NuMI target, with coverage around the focusing peak. I put that in the plot you can see there, and it's good. The x-axis is the longitudinal momentum, and the y-axis is the ratio: every line there is the ratio between the data they measured and our Monte Carlo, what our model says the production in the target is, and they add one unit per line in order to separate the lines. You can see that the coverage is good.

What is our strategy to calculate the flux? I'm going to simplify this by saying that the idea is to implement all of this hadron production data. We have even more, because we have more thin target data than I described (I only described NA49), and we have the thick target data, which already contains the particles measured leaving the target, that is, a convolution of every interaction happening inside the target. In a sense, we have more information than we need. The way our correction works is that we correct the neutrino yield that we see in MINERvA or any other experiment. My point here is that we are not trying to correct the model itself.
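Two definitions used on these slides, written out explicitly as a hedged aside (the 0.43 factor follows from the pion and muon masses; the exact form of x_F depends on conventions):

```latex
% Feynman x (longitudinal momentum fraction in the p+C center of mass)
% and the forward two-body decay pi -> mu nu:
\[
x_F \;\equiv\; \frac{p^{*}_{L}}{p^{*}_{L,\max}} \;\approx\; \frac{2\,p^{*}_{L}}{\sqrt{s}},
\qquad
E_\nu \;\approx\; \Bigl(1 - \frac{m_\mu^{2}}{m_\pi^{2}}\Bigr) E_\pi \;\approx\; 0.43\,E_\pi
\quad \text{(decay along the beam axis)}.
\]
```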
We are instead going to correct the neutrino yield that we want to predict at any point along NuMI, at MINERvA for instance. We basically look at the history of each neutrino, in terms of the interactions in the hadronic cascade that produced it, and correct each of those interactions. That's our procedure. We apply two corrections. The first correction is to the beam attenuation: basically, the probability that a particle passes through a material without interacting, or passes through without interacting and then interacts at a particular point. We correct that; it depends on the total cross section, and we have data for that, as I mentioned a few slides before. The other correction we apply, which is also important, is to the production. The thin target experiments release their data as invariant double-differential cross sections, at 158 GeV in the NA49 case. According to Feynman, in his paper, using Feynman x the invariant cross section should scale between different energies, that is, it should be independent of the energy, although he never explained exactly why, and there are violations of that scaling. We account for those violations of Feynman scaling using FLUKA. And we can check it, because we have another experiment that measures exactly the same thing, pion production from proton-carbon, at 31 GeV, and we see that the scaling works: NA49 has 158 GeV incident protons, the other measurement is at 31 GeV, and at 120 GeV we are basically in the middle. From the thick target data, we correct the yields.

But when we don't have data, we have to do something, right? We conducted a long study in my experiment of how to scale these proton-carbon measurements, NA49 for instance, to other materials, looking at how the invariant cross section scales when we change the material. It's not as easy as just counting the number of nucleons in the nucleus. And we assigned an uncertainty for extending the coverage to all the other materials. We use theoretical guidance, we use everything, because in the end we want to minimize the situations in which we don't have any data or any clue what to apply. In the past, we used something that is very risky, but it was the only thing we had: basically, take all of the models that we have, predict the flux at different places in NuMI, and find the spread. But we don't know if the real nature is inside that spread between the models, and it is also hard to find the internal correlations between these models, which can be overestimated or underestimated. The second option, which we are following and adhering to these years, is to use our best guess, but anchored to data if at all possible. That means, looking at the agreement with all the data sets where we do have data, we find that the discrepancy of the model we use is no more than about 40 percent, so where we have no data we apply a conservative 40 percent. That's what we do now.

Results. Oh, about the time, I have to start finishing. Well, as I mentioned before, in our survey we have thin target and thick target data, and this gives us the opportunity to make two flux predictions.
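Here is a minimal Python sketch of the yield-correction idea just described: walk the hadronic ancestry of each simulated neutrino and multiply one data/MC weight per interaction. The lookup functions are hypothetical stand-ins for the NA49/MIPP tables and the Geant4 prediction, not a real API:

```python
# Sketch of the yield correction: one weight per interaction in the chain,
# w = f_data(xF, pT) / f_MC(xF, pT), where f is the invariant cross section
# (or yield) in that kinematic bin.

def interaction_weight(proj, target, xF, pT, lookup_data, lookup_mc):
    f_data = lookup_data(proj, target, xF, pT)
    f_mc = lookup_mc(proj, target, xF, pT)
    if f_data is None:          # no data coverage: leave the yield alone and
        return 1.0              # carry the conservative (e.g. 40%) uncertainty
    return f_data / f_mc

def neutrino_weight(ancestry, lookup_data, lookup_mc):
    """ancestry: list of (projectile, target, xF, pT) steps from the primary
    120 GeV proton down to the hadron whose decay made the neutrino."""
    w = 1.0
    for proj, target, xF, pT in ancestry:
        w *= interaction_weight(proj, target, xF, pT, lookup_data, lookup_mc)
    return w

# Toy usage: one neutrino whose pi+ came directly from the primary p+C hit.
toy_data = lambda proj, tgt, xF, pT: 1.05    # pretend data sits 5% above MC
toy_mc   = lambda proj, tgt, xF, pT: 1.00
print(neutrino_weight([("p", "C", 0.05, 0.2)], toy_data, toy_mc))  # -> 1.05
```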
For instance, what happens if we correct the model using just thin target data? Or we can correct the model using the thick target data, for instance taking the MIPP data and using thin target data in all the places not covered by MIPP. That's what we're doing now, and that has been our work over the last year: trying to understand the two predictions, because if we have two flux predictions, canceling the correlations between them, they should agree, right? They don't agree. I moved the details to the backup slides, but basically we found that the thin target prediction agrees with the other, independent procedure that I am going to mention in a couple of slides, and the thick target prediction does not. That's an interesting topic, because we learned a lot from it about hadron production; it's in the backup slides if you are interested.

In terms of the interactions covered, for thin target data that's shown on slide number 36. I plot there the neutrino energy on the x-axis, and on the y-axis the number of hadronic interactions per muon neutrino that passes through MINERvA. The total is shown in the black line: it's about 2.2 for very low neutrino energies, and about 1.4, almost flat, after that. Every other line represents a different kind of data, or the procedure we would fall back on to handle those interactions. For instance, the solid blue line and the dashed blue line represent the average number of those interactions handled by pion and kaon production data from proton-carbon (basically NA49) and by the data we use to extend it, respectively. Other data sets handle other subdominant components, for instance nucleon production from proton-carbon, for which we also have data; I haven't mentioned every component there. But there are some places where we don't have data, and we either extend the data coverage using theoretical input, or we have no data at all and use, as I mentioned before, the conservative 40 percent. That is the dashed brown line, which we call nucleon-A, and the legend can be misleading: it is any nucleon interacting in any material and producing any particle in the neutrino history that is not covered by data. Some of those are covered by the extension from proton-carbon to protons on other materials; others are not covered by any data at all, and we just use our best guess: conservative, but anchored to data, as I mentioned before. That is where we need data, and where data can help us reduce the uncertainty. When you see the fractional uncertainty associated with every line, you are going to understand why it is so important to have data there, and how this work can help motivate these kinds of measurements in future hadron production experiments.

In terms of the material traversed (traversed, right, not transverse): this is the average material traversed, in moles per square centimeter. That's something we also keep track of. The main contribution, as you can see there, comes from the aluminum, and the reason is the angles: the pions produced in the target carry the boost of the primary proton beam, so they leave the target at very small angles and can traverse up to 10 centimeters of horn material. That's a lot, because aluminum has, I forget exactly, something between 40 and 50 centimeters of interaction length.
That means that how we model the absorption, the inelastic cross section, really matters. The results are on slide number 38. Basically, using all of this data (and I forgot to mention that we also studied how to correlate the data sets: the systematics, the bin-to-bin systematic errors that we have, the statistical errors, etc.), we got these results on slide number 38: for low energy, the muon neutrino spectrum at MINERvA with small error bars, and error bars that are well understood by us. Slide number 39 shows the uncertainties associated with this calculation. The total uncertainty is not bad; it can be better. Around the focusing peak it is about seven percent, between six and eight. That's good for our measurements, and it's going to improve a lot of them, for instance the coherent charged pion production and the other publications that we have in MINERvA (I have the list in the backup slides), because they were published before this work; we need to update them, and the analyzers want to revisit them. That's going to be crucial for our cross-section measurements. You can see there the different contributions, but I want to emphasize here again this brown dashed line, which represents the places where we apply indirect data or where there is no data at all. That is the request from us, and in general from the neutrino community, to the hadron production experiments: measure that kind of data. For instance, protons interacting in aluminum and the production from those interactions; or all the other interactions that we call quasi-elastic, for which we have basically only old references from the sixties, if I remember correctly.

The other thing we are doing is trying to get the flux by other means. There is the low-nu technique, which I'm not going to explain in depth; I'm just going to mention one thing. We can express the differential cross section in terms of nu, a variable which is basically the hadronic recoil energy, the energy transferred in the interaction. That differential cross section can be parametrized (there are some papers about this) in terms of coefficients A, B, C, which basically contain integrals of the structure functions, and powers of nu over E, the hadronic recoil energy divided by the neutrino energy. If nu over E is small, which means small nu, we get a constant differential cross section. If we have a constant cross section and we measure the event rate in MINERvA, then using the formula before we get the shape of the flux. If we anchor this to a high-energy measurement of the cross section, for instance NOMAD, which measured the cross section on carbon at high energy, we can basically normalize it and have another measurement of the flux. This is a kind of in situ measurement. It depends on how well MINERvA reconstructs the recoil energy, so it is not used as an a priori prediction, because it depends on MINERvA itself, but it can be used as a test of whether we are doing something correctly. That's what happens on the next slide: when I take the ratio between our prediction, the one I described before using the hadron production corrections, and the low-nu prediction, they agree very well within the uncertainties.
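Schematically, the low-nu relations look like the following (a reconstruction from the standard low-nu literature; sign and normalization conventions vary between papers):

```latex
% Low-nu method: nu = hadronic recoil energy (energy transfer).
\[
\frac{d\sigma}{d\nu} \;=\; A\left(1 \;+\; \frac{B}{A}\,\frac{\nu}{E}
      \;-\; \frac{C}{2A}\,\frac{\nu^{2}}{E^{2}} \;+\; \cdots\right),
\]
% with A, B, C integrals of structure functions. For nu/E -> 0 the right-hand
% side tends to the constant A, so the event rate below a fixed recoil cut
% traces the flux shape:
\[
\Phi(E) \;\propto\; \frac{N(E;\,\nu < \nu_{0})}{\sigma(\nu < \nu_{0})},
\]
% normalized to a high-energy cross-section measurement (e.g. NOMAD).
```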
I have to mention that the uncertainty on that ratio is overestimated by a couple of percent, because we don't have the machinery to cancel the correlations between the two. These are my conclusions. It is crucial for MINERvA and for all the experiments, as I mentioned before, to have a precise prediction of the flux with small uncertainties. We made a calculation of the NuMI flux with reduced uncertainties in terms of what we had before, and we also improved our error budget accounting: we account for every possible error that we can right now. The important thing is that we did this work intending to share it with other experiments. I have in the backup slides, for instance, a calculation of the NOvA flux with an error of around 8 to 10 percent. That's important because the current error from NOvA is 20 percent, as far as I know. That means we can provide a well-understood flux for NOvA, for instance, with small errors. That's the extent of this work. Our procedure can also be extended to other beam lines, and the future DUNE can take advantage of this work; it can also indicate to the hadron production experiments what sort of data we need. That's something that we achieved. There's a paper in preparation; it will be out really soon. I will not say what "soon" means here, but it's going to be soon, if you are interested in more details about this work. Thank you.

Thank you very much, Leo. Now is the time for the questions. Let me remind everybody that you can ask your questions directly in the Q&A, and if you're following on YouTube, for instance, you can send them via Twitter with the hashtag. I don't know if there are any questions from the audience here.

Yeah, I have a question. Do you know why the cross sections are different between neutrinos and antineutrinos? I mean, why is one two or three times higher than the other? It's at the very beginning of your talk, slide four or five, I guess.

It's on the screen now, sorry. You can show your screen right now.

Yeah, you can see it now, right?

There's another one. Oh, sorry. Now it is. Yeah, sorry. There are many, many measurements there, right? And there's a big debate. It can be many things. One of them is the different definitions that the neutrino community uses to define the channels. It can also be that some experiments don't correctly understand the nuclear effects, for instance. And the other one can be the flux, as I mentioned before. Some experiments may understand the flux differently, or not understand it completely, and at the end they get a disagreement. Any cross section depends strongly on the flux: if the flux is 20% off in some way, the cross section is 20% off. That's the point. The extension of this work is interesting because we understand exactly our flux; we implemented everything. And after my defense, we're interested in taking the time to compare with other experiments, with what NOvA has, MINOS, etc., and see the source of any discrepancy; maybe that can help to solve it.

Okay, thank you. So, I don't know if there's any other question; I have a couple of questions myself. Oh, hang on a second, here's one in the Q&A.
So the question is: what does the new flux calculation mean for cross-section estimates, and how does this compare with the MiniBooNE results?

That's very interesting, and I have a plot here. I'm not an expert on this, but that question means there's a person who knows this work, because it's on slide number 60, right? On the right side we have, I think I have it right, some cross-section results that we published in MINERvA, and on the left side you can see that with these new results the output can change.

Sorry, Leo, we can't hear you properly. Could you please repeat your answer?

Sorry, the answer is the following. On slide number 60, which you can see there, we have, I think I have it correctly, I'm not sure, a measurement of charged-current quasi-elastic scattering in MINERvA, published around 2013, on the right side. There is work in progress trying to see how our previous results have to be updated in order to incorporate the new flux. As you can see there, the difference can be big. I'm not familiar with every one of the models that are listed there, but previously we favored one model, and updating this with more knowledge, we may favor a different model. That's the impact of this work, and not only updating our previous results. There's also something very interesting that I haven't done yet but want to do, because MiniBooNE has an interesting flux result that they published: they mention in their paper that they basically take FLUKA, well, take the result from a procedure like the one I mentioned, and also incorporate some hadron production data, NA49 I think. It would be awesome; I don't know how it's going to impact MiniBooNE, honestly. Work similar to what I'm showing here on slide number 60 could be done to see how their results would change. And it's not just taking the ratio between the old and the new flux and scaling: with a new flux, the amount of background can change, the simulation, etc. That means that with a new flux we need to re-understand how the background works. It's not that easy, just taking the ratio and updating the result. That's something I'll do in MINERvA, but it would be very interesting, I have no idea, to see how it could affect MiniBooNE, for instance some of the discrepancies that they have. Looking forward to that.

Okay, thank you very much, Leo. I don't know if there's any other question. So I have one naive question.

Ah, those are the dangerous ones.

No, it's just, I was wondering about LHCf. Would that data be also important for you at some point?

I'm not familiar with LHCf; it depends on what energy they run at.

Right, because it's the forward detector at the LHC. It's at ATLAS, I think, or CMS, I'm not really sure. In principle, what they're trying to do is to improve the cross sections, with the aim focused on cosmic rays. But I was wondering if that could also be of any use to you. I guess that in your case you're more concerned about interactions with protons and nucleons, right? That one would be proton-proton, I guess. But I was just wondering if it could be of some use.

Yeah, it's always useful. But in this particular case we're interested in hadronic interactions up to 120 GeV, because that's the energy of NuMI.
I'm not familiar with the experiment you mention, but I suppose they go to higher energies.

Yeah, I guess so.

But honestly, you're right: we are interested, for instance, in proton interactions. We have enough data for proton-carbon, but we're interested in proton on aluminum or proton on iron, for instance. And there is also very little data on pion interactions: we have some measurements from the HARP experiment with incident pions at a few GeV, some measurements from NA61, I think, at around 100 GeV, and we have nothing in the middle. There are some people studying now NA61 data at around 30 GeV. But as I mentioned before, we need data there; anything helps.

Okay. I see there's a new question in the Q&A, by Sebastian Sanchez. It's the following: GENIE seems to give a worse fit to the data with the new flux. Do you know, sorry, yes?

The first part, I could not hear.

Oh, sorry. The question is: GENIE seems to give a worse fit to the data with the new flux. Do you know what could be the reason?

You're talking about slide number 60, right?

Sorry?

You're talking about slide number 60, I suppose. No? I don't know; I'm not familiar with what is happening internally there, with what is being modeled. It looks like GENIE uses this model called the relativistic Fermi gas, right? But NuWro agrees better, especially when we add this transverse enhancement model, TEM. That's what I can say, but I cannot say anything else because I am not familiar enough with it.

Okay. Thank you.

Right. So I don't know if the audience has any other questions? I have another question, just out of curiosity. In your talk you mentioned, for example, FLUKA and now GENIE. Do you know which other software or codes people use to simulate the proton-nucleus interactions that produce the neutrinos, and the neutrino interactions themselves?

For the simulation of these conventional neutrino beam lines, people basically use FLUKA. FLUKA has good agreement with many experiments; it's very close to the data. People also use Geant4; we are using Geant4 for our work, and we use FLUKA for other purposes, like the scaling, etc. For the neutrino interactions, which is the other part, people have started using GENIE as a standard simulation. MINERvA uses GENIE, for instance. And I want to take advantage of this, because it's related to your question and to what Sebastian asked about GENIE. I think it's interesting to notice, on the right side of slide number 60, which you can see because I'm sharing the screen: it's a curious coincidence, or it may be more than a coincidence. Before, our prediction agreed better with NuWro using an axial mass of 0.99. When NuWro improved the simulation, the model, including this effect that was not included before, the agreement was not good anymore; that's the green line. But when we update our flux, they move in the right direction, towards agreement. That's what's interesting to mention, and that's what I wanted to add to complement Sebastian's question.

All right. Any other question? I have one last question.
I was wondering if you could give any details regarding the lack of agreement between the thin and the thick target data.

Thank you for that question; I was expecting that one. Let's see. On slide number 44 there is the same kind of plot that I showed before, but now in terms of the thick target data. The red solid and dashed lines represent what is covered by the thick target data, in this case MIPP. As you remember from the blue line before, the coverage is better around the focusing peak: we cover a lot with the thick target. That means we expect the uncertainty to be reduced a lot, right? That was great. The blue line, the thin target data, basically NA49, is reduced because most of those interactions are now covered by the thick target data. That's the result for the thick target: on the left side, the result in terms of the correction on the yield; on the right side, the fractional uncertainty. You see that we reduce it a lot; the uncertainty is around 5%. That by itself proves that it is very important for the neutrino community to have thick target data; it would be great to have more data like that. But when we take the ratio of both predictions, canceling all possible correlations between them, the data sets disagree. And we have studied this a lot. This is the ratio, and there are bin-to-bin correlations that are not taken into account in this ratio; but when we studied that, we saw that it would be hard, assuming any reasonable bin-to-bin correlations, to reconcile this ratio of the flux predicted by thin target data over the flux predicted by thick target data. It should be around one, and it's not. That motivated us to compare with the other measurement in MINERvA, which is not a direct measurement of the flux (it depends on how well we reconstruct the recoil energy, etc.), but at least it can give us a clue about which data set is more physically consistent and which data set needs more study. On slide number 47 you have that: when you divide the prediction using thick target data by the low-nu measurement, we get this disagreement, and there, even with the uncertainty overestimated by about 2%, it is about 2 sigma at some points. That's big. We are also currently studying what we can learn about hadron production by trying to reconcile both predictions, but so far we haven't found a way, and so for the next round of MINERvA papers, which is going to happen soon and will revisit the cross-section data we already published, we are going to use the thin target data as the standard flux. But I won't promise.

Okay, thank you. Thank you very much. I don't know if there's any other last question.

I have, I guess, two questions, Leo. One is just if you can comment more or less on the time scale of the experiment: when is MINERvA going to run and collect enough statistics to really have good measurements of these cross sections in the energy range of the experiment?

Well, as you know, as I mentioned, NuMI has these two modes. The low energy mode, peaked at 3 GeV, started running in 2005; MINERvA started running in 2009, 2010.
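For reference, the reason ignoring correlations inflates the error on such a ratio can be written in one line (a generic propagation formula, not numbers from the talk):

```latex
\[
\left(\frac{\delta R}{R}\right)^{2}
= \left(\frac{\delta \Phi_{\mathrm{thin}}}{\Phi_{\mathrm{thin}}}\right)^{2}
+ \left(\frac{\delta \Phi_{\mathrm{thick}}}{\Phi_{\mathrm{thick}}}\right)^{2}
- 2\,\rho\,\frac{\delta \Phi_{\mathrm{thin}}}{\Phi_{\mathrm{thin}}}\,
          \frac{\delta \Phi_{\mathrm{thick}}}{\Phi_{\mathrm{thick}}},
\qquad
R = \frac{\Phi_{\mathrm{thin}}}{\Phi_{\mathrm{thick}}},
\]
% so any positive correlation rho left out of the ratio inflates the quoted
% uncertainty, consistent with the ~2% overestimate mentioned in the talk.
```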
I may be off a little, but around then. And we already have data published. We are now doing analyses, revisiting the low energy analyses and making new analyses for the medium energy mode, but we already published cross-section data over the last years, for instance the charged-current coherent pion production. I'm trying to find the list of papers that we already released; there is one paper missing there, a pion production one, but basically that's the list of papers already published: charged-current quasi-elastic interactions, for instance. This data has not been incorporated yet in the plots from the PDG that I mentioned before, the page showing all the available cross-section data, because that plot was from 2012. I'm not sure if there's an updated version of that plot, but the point is that the MINERvA data is very recent, and it is currently being compared with other data and incorporated into the generators, like GENIE and NuWro, etc.

Okay. And the last one, just a quick one: what implications does this have for neutrino oscillations? Since you are measuring especially muon neutrinos, and there are analyses looking for, for instance, sterile neutrinos in other experiments at higher energies, like IceCube, I don't know if there is some communication between experiments to figure out what could be happening.

You mean in terms of the flux?

Yeah, or cross sections. I mean, for disappearance or appearance analyses: since MINERvA has an especially good resolution to measure fluxes of muon neutrinos, any kind of deviation with respect to the standard expectation would be visible, no?

Yeah. Definitely it's going to have a big impact, because by measuring the cross sections in the region where many experiments are running their oscillation studies, we are giving results there. I suppose that will help many in understanding the nuclear effects, improving the generators they use to simulate the neutrinos, and testing models. That's the big picture of where our data, MINERvA data, is being used by the neutrino community right now; definitely it's having an impact.

Okay, so I do not see any new question. So that's it for today. Thank you very much, Leo, for your fantastic webinar and for answering all of our questions so patiently. I would like to remind everybody that we're having our next webinar in about two weeks; this time it's Jernej Kamenik, who will be giving us an update on the diphoton resonance at 750 GeV. So don't forget to follow our web page, our WordPress page, and we'll see you soon in our next webinar. Thanks for following.