Thank you, Vladimir. Good afternoon, everybody. So we have about one hour, and the topic I was given was simulation of neutronics for water reactors. In particular, in connection with the exercise, I decided to present a few important concepts of the Monte Carlo method. I will start with an introduction and terminology: the Monte Carlo method, neutron transport, stochastic vs. deterministic, continuous energy vs. multigroup, and so on. Then a little mathematical background, so that nobody gets lost: PDF, CDF, sampling, and so on. I tried to keep everything in one compact document, and in preparing the script I relied among other things on the lectures of Prof. Hébert. So, a few words about the Monte Carlo method. What is stochastic and what is deterministic? The Monte Carlo method is stochastic, in contrast with deterministic methods such as discrete ordinates methods and the method of characteristics.
Here the deterministic methods solve the neutron transport equation, the Boltzmann equation, for the angular flux and k-effective, while the stochastic Monte Carlo method finds the parameters of interest, for example k-effective or reaction rates, by simulating the random walk of individual neutrons. Continuous energy vs. multigroup: what is this? Monte Carlo codes can use two representations of nuclear data. Continuous energy means point-wise data, as given in the evaluated nuclear data (ENDF) files, without condensation; the ACE-format data libraries are prepared using the NJOY code, for example. So in a way it is also multigroup, but with very many groups: every available energy point is a separate group. This is what we call continuous energy, while multigroup means that the nuclear data are condensed in energy using energy group structures, similarly to conventional deterministic codes. Most modern Monte Carlo codes, MCNP and Serpent being two examples, are based on the continuous-energy representation of the nuclear data, but there are still codes around which use either both or the multigroup approach. Our MATLAB exercise is based on the multigroup representation of the nuclear data; I mentioned yesterday this 421-group structure. We continue with terminology: analog versus non-analog. Analog Monte Carlo means explicit, the simulation as is: we take an individual neutron and follow its life from emission to absorption without any simplifications. Non-analog Monte Carlo is everything else: all simulations which use simplifications, tricks, acceleration techniques, we call non-analog, so not explicit. Now a few words about mathematics. The key concept for stochastic simulation is the random variable. This is a variable whose possible values are numerical outcomes of a random process or experiment, for example flipping a coin, rolling a die, or taking a card from the stack.
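Just to illustrate (a Python sketch, not part of the MATLAB exercise): rolling a die is a random process whose outcome is a discrete random variable.

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the illustration is reproducible

# A die roll is a discrete random variable: one of six equally likely values.
rolls = [random.randint(1, 6) for _ in range(6000)]

# every outcome is one of the six faces, and each face appears
# roughly 1000 times out of 6000 rolls
counts = Counter(rolls)
assert set(counts) <= {1, 2, 3, 4, 5, 6}
```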
And x, this random variable, can be discrete, taking one of a specified finite list of values, for example the number of dots on a die face, or continuous, meaning it can take any numerical value in a specified interval; an example is atmospheric pressure. A continuous random variable uniformly distributed between zero and one is denoted ξ (xi). All other random numbers which we will need will be derived from this number between zero and one, for example by multiplying by the factor we need; so the basis is a number generated between zero and one. In the MATLAB exercise we are using the built-in pseudo-random number generator, rand, which is based on the Mersenne Twister algorithm; if you are interested you can look at Wikipedia. The interesting thing is that this algorithm is considered very good, and you cannot see the source: for some built-in MATLAB functions you can open and see the source, but for this function you cannot, it is kind of hidden. The next important concept is the probability density function, pdf, denoted f(x). It describes the relative likelihood of the values of the continuous random variable x. The probability of different values of the random variable need not be the same; it can be distributed somehow, and this distribution is described by the probability density function. For example, a uniformly distributed pdf: it was not so easy to find an example from real life, but I found this one: when you spin a bicycle wheel, the angle between the valve and the horizon follows a uniform pdf. Or the wheel of fortune, whatever. Another example is atmospheric pressure, but this is a normally distributed pdf: there is a most probable value and then some distribution which decreases as we move away from the average value. So f is the pdf, and dp, which is the pdf multiplied by dx, is the probability for x to have a value between x and x + dx.
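Incidentally, Python's standard `random` module is also based on the Mersenne Twister, so the basic ξ uniform on [0, 1) can be sketched like this (an illustration, not the MATLAB `rand` call from the exercise):

```python
import random

# Python's built-in generator is also a Mersenne Twister, like MATLAB's
# default rand, so it illustrates the same basic xi ~ U(0, 1) sampling.
random.seed(42)  # fixed seed for reproducibility
xi = [random.random() for _ in range(5)]

# every generated value lies in the half-open interval [0, 1)
assert all(0.0 <= x < 1.0 for x in xi)
```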
Correspondingly, if we integrate from a to b, we obtain the probability of x being in this interval, as written in the formula. So what is the total area below the pdf curve equal to? Yes, it is 1. It means something will happen: when we integrate from minus infinity to plus infinity, the result must be 1. OK, next is the cumulative distribution function, cdf, which we mentioned: this is the probability that the random variable takes a value less than or equal to x. It is the integral of dp, denoted capital F, so basically the integral of the pdf. As you can see, there are three examples of normal distributions of three random variables, and when we integrate, we obtain the three corresponding curves in matching colors, which change from 0 to 1. The next important concept is sampling. We have a population, for example neutron histories, which is a very big number of events, and we would like to select a more limited part of the total population, but in a clever way, so that it is representative. This process is called sampling: the selection of random values according to the probability distribution (cdf or pdf, depending on the method), with the goal of representing the whole population with these few values. If some values are more probable, we would like to have more of them among our samples; the less probable ones we would like to have less often or not at all. It is like making a survey in some area: we would like to ask people who are representative of the population of that area, something like this. There are many sampling approaches; here I give the simplest one, which is what we use in the exercise. We generate ξ, uniformly distributed between zero and one, and use this value to generate random values of the parameters of interest from the cumulative distribution function by the inverse method. There are many more techniques.
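As a quick numerical check of these two facts, the total area under a pdf is one, and the cdf rises from 0 to 1 (an illustrative Python sketch using a standard normal pdf, not part of the exercise):

```python
import math

# Standard normal pdf, evaluated on a grid wide enough that the tails
# beyond it are negligible.
mu, sigma = 0.0, 1.0
dx = 0.001
n = int(16.0 / dx)
xs = [-8.0 + i * dx for i in range(n + 1)]
pdf = [math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
       for x in xs]

# total area under the pdf: should be 1 ("something will happen")
area = sum(p * dx for p in pdf)

# running integral of the pdf gives the cdf, rising from ~0 to ~1
cdf, acc = [], 0.0
for p in pdf:
    acc += p * dx
    cdf.append(acc)
```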
In a little more detail: sampling by the inverse method is done using the inverse of the cumulative distribution function. It means that we generate ξ, set the cumulative probability of the event equal to ξ, and then find the value x by the inverse function, which is simple: the cdf changes from zero to one, we draw a random value ξ, and then we find x like this. So we know the cumulative distribution function, and in some cases we can find the inverse function analytically, and then it is very simple to solve this problem. But if we don't know the analytical solution, we can still proceed numerically: we take the corresponding pdf and start integrating, and when the accumulated area under the pdf equals ξ, we stop, and this will be our x. This is what we will do in the MATLAB exercise, a kind of numerical integration. So now, neutron tracking. We know a little of what we need from the mathematical background; now we start the simulation of the neutron's walk. Neutron tracking is the simulation of a single neutron's movement through the different material regions of the reactor core. There are two important terms. A neutron track is the length of the path that a neutron makes between two interactions. So we can have fuel, cladding, and coolant here; the neutron is born here, it flies to here, and here we have the next interaction: this is a neutron track. A neutron history is the entire set of tracks made from initial emission to final absorption. If there is scattering here, the neutron can go here, then here, and be absorbed in the cladding; this combination of all neutron tracks is the neutron history. OK? So one of the most important parameters we must decide is the free path length in a homogeneous medium. When a neutron is born here, we somehow decide on the direction in which it flies.
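The inverse method described above can be sketched in a few lines (illustrative Python; the exponential pdf and the rate λ are my own example, not from the exercise): for f(x) = λ e^(−λx) the cdf F(x) = 1 − e^(−λx) inverts analytically, and the same sampling can also be done by numerical integration of the pdf, in the spirit of what the MATLAB exercise does.

```python
import math
import random

random.seed(1)
lam = 2.0  # illustrative rate parameter for f(x) = lam * exp(-lam * x)

def sample_analytic():
    # analytic inverse of the cdf: F(x) = 1 - exp(-lam*x)  =>  x = -ln(1 - xi)/lam
    xi = random.random()
    return -math.log(1.0 - xi) / lam

def sample_numeric(dx=1e-3):
    # numerical variant: accumulate area under the pdf until it reaches xi
    xi = random.random()
    area, x = 0.0, 0.0
    while area < xi:
        area += lam * math.exp(-lam * x) * dx
        x += dx
    return x

# both estimators should approach the true mean 1/lam = 0.5
mean_analytic = sum(sample_analytic() for _ in range(20_000)) / 20_000
mean_numeric = sum(sample_numeric() for _ in range(2_000)) / 2_000
```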
And now, let's say it travels in this direction through a homogeneous medium, the same material. Based on the characteristics of this material, which is the cross-section, we must decide where the point of the next interaction will be: here, or here, or here. The macroscopic cross-section is the interaction probability per path length traveled by the neutron, as written here: the derivative of the interaction probability with respect to the length. Then we can consider the increase of the probability of having the first interaction when moving from x to x + dx. Let's say the neutron starts from point zero, arrives here, and we would like to know the increase of the interaction probability in the interval dx. OK? This will be the product of p0, the probability of not interacting: to even be considered, the neutron must arrive here, meaning it must not interact between this point and this point, which has probability p0. Then we multiply by dp, and we find the increase of the probability of first interaction in this interval, as written here. Taking minus this interaction probability gives the decrease of the probability of not interacting, and inserting the previous equation we arrive at an equation which links dp0 with p0 and dx. Now we can integrate this equation, and we obtain the non-interaction probability, which is an exponential with minus the total cross-section times the path length in the exponent. Good. So we have the non-interaction probability as a function of the path length, with the total macroscopic cross-section as a coefficient. Now we can insert everything into the expression for the probability that the neutron has its first interaction in this interval, do a little bit of math, and finally arrive at the pdf of the free path length, which is the total macroscopic cross-section multiplied by this exponential.
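Written out, the derivation from this slide is:

```latex
\begin{align*}
  dP &= \Sigma_t\,dx
     && \text{interaction probability in } [x,\,x+dx]\\
  dP_0 &= -P_0(x)\,\Sigma_t\,dx
     && \text{decrease of the non-interaction probability}\\
  P_0(x) &= e^{-\Sigma_t x}
     && \text{integrating, with } P_0(0) = 1\\
  f(x)\,dx &= P_0(x)\,\Sigma_t\,dx
            = \Sigma_t\,e^{-\Sigma_t x}\,dx
     && \text{pdf of the free path length}
\end{align*}
```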
From this pdf we can integrate to find the cumulative distribution function of the free path length, then find the inverse function, and by the inverse method we arrive at the rule for sampling free path lengths. So, when we are at this point, we generate a random number between zero and one, take its natural logarithm, divide by the macroscopic cross-section, and take the minus sign, and we obtain an estimate of the free path in a homogeneous material, which is what is important. Here I show how it is coded in our MATLAB exercise, just a repetition of this formula in MATLAB language. OK. We derived this free path formula for homogeneous materials, but in reality we have different materials with different Σ total, different characteristics, different transparency for neutrons, let's say, and we must do something about this. The free path formula is valid in a homogeneous material, where Σ total is independent of the space coordinate; for a heterogeneous material, which is a combination of several materials, the collision probability will change each time the neutron crosses a cell boundary. So, what to do? There are two options I propose. In the first, we simply stop at the boundary and then calculate the free path again: nothing changes, except that the next interaction point is recalculated based on the new material, and we do this at every boundary. This method is called ray tracing. In the second, we do not stop the neutron at the boundary surface; instead we consider for each material a fictitious cross-section which equalizes the total cross-section of all materials. This method is called delta tracking. Let me give a few details on both. For ray tracing, assume the free path length was sampled for material one: this is material one, and we used its Σ total to find the next interaction point.
It turns out to be here, in another material, and we need to adjust the coordinate of the next collision. If we don't want to generate a new random number, we can reuse the previous one and, in a sense, recalibrate, but for me this is too complicated. I prefer that we stop at the boundary: we cut the path, the neutron does not interact here, but we stop here, generate a new random number based on the new Σ total of material two, and find the new position. It's simple and straightforward, but the problem is that in this algorithm we must always find this point on the boundary, so we must calculate the distance from an arbitrary point to the boundary. This can become very expensive for a complicated geometry like a reactor core: when there are many curved boundaries, the mathematics becomes expensive, it is difficult to find all these intersections. The alternative is delta tracking. The goal is to sample the next collision point without handling the surface crossings at all. This is an acceptance-rejection technique proposed by Woodcock in the 1960s. It is used in the Serpent Monte Carlo code as the basic algorithm, and it is optional in other codes as well. It is based on the concept of the virtual collision, which behaves like scattering with no change of energy or direction; I describe it more formally later. The key idea is that we add an appropriate virtual collision cross-section: we artificially add some cross-section for each material in such a way that the modified total cross-section is the same in all materials. This is illustrated here. For example, the red material here is the fuel; it has the highest total cross-section. So in the other two materials we add an artificial virtual cross-section in order to make the total equal over the whole model. After this we have a pseudo-homogeneous material: the total cross-section is the same everywhere, but some component of each total cross-section is virtual, added. OK.
So, this eliminates the need to adjust the free path length each time the neutron enters a new material, and the need to calculate surface distances: we don't care about boundaries anymore, we only need to know, at every interaction point, where we are and in which material. The virtual collision cross-section is given by this formula, where Σm is the majorant. The majorant is the maximum total cross-section in the system, and it is the same for all materials. In the model of our reactor core, for example, there is a material with the biggest total macroscopic cross-section, most likely an absorber, and its Σ total will be the majorant for our model. For every material we calculate the majorant minus the total cross-section of that material, and this is the added virtual cross-section. So, delta tracking starts by sampling the free path using the majorant. The free path is always calculated with the majorant, not the real Σ total of the current material. At the new collision point, the collision type, real or virtual, is sampled by generating a random number and comparing it with the fractional probability of a virtual collision. It is like a normal interaction: there can be scattering, absorption, or a virtual collision. If this probability is bigger than our random number, we consider the collision virtual, otherwise it is real. If the collision is real, then we sample whether it is absorption or scattering; if it is virtual, we don't do anything, we just continue, nothing changes: we checked, it was virtual, we continue.
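To make the algorithm concrete, here is a minimal one-group, one-dimensional delta-tracking sketch (illustrative Python, not the MATLAB exercise code; the geometry and the cross-section values are made up):

```python
import math
import random

random.seed(3)

# Made-up 1-D geometry: "fuel" for x < 2 cm, "coolant" beyond.
def sigma_t(x):
    return 1.0 if x < 2.0 else 0.2   # total macroscopic cross-sections, 1/cm

SIGMA_MAJ = 1.0   # majorant: the maximum total cross-section in the system

def next_real_collision(x):
    """March from x using majorant-based free paths; virtual collisions
    are rejected and the flight simply continues until a real one."""
    while True:
        x += -math.log(1.0 - random.random()) / SIGMA_MAJ
        # accept as real with probability sigma_t(x) / SIGMA_MAJ,
        # otherwise it is a virtual collision and we keep flying
        if random.random() < sigma_t(x) / SIGMA_MAJ:
            return x

points = [next_real_collision(0.0) for _ in range(1000)]
```

Note that no surface-crossing logic appears anywhere: the only geometry query is sigma_t(x) at the tentative collision point, which is exactly the advantage discussed here.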
We have the model of our system, and we move through it in a random walk with small steps, where these small steps, the free paths, are determined by the most opaque material, the material with the biggest macroscopic cross-section. So we go with small steps through our system, and in a material for which this step is unnecessarily small, the collision comes out virtual almost every time, because the step is determined by another material: virtual, virtual, virtual, and at some point it becomes real. So we walk with a free path step which is determined not by the current material but by the most absorbing material, the one with the biggest total macroscopic cross-section. The advantage of delta tracking is that it does not matter whether the neutron crosses one or several material boundaries between two collision points, or where the collision point is; only the total cross-section matters. The disadvantages: since we do not require surface crossings at all, we can use only the collision estimator of the neutron flux. This means we cannot calculate the neutron flux from the number of neutrons crossing a surface; we can only count the scattering events in some volume, divide by the total scattering cross-section, and find the flux that way. It is called the collision estimator because we do not record surface crossings. So we cannot estimate surface flux and current with delta tracking; we can do it, but only at the outer geometry boundary, not inside the system. Another disadvantage: when there is a small volume of heavy absorber in the geometry, for example, it determines the majorant and the efficiency is reduced, so we need some combined, hybrid solution where we mix ray tracing and delta tracking. I think delta tracking is a very efficient algorithm, and we will use it in our exercise. So much for neutron tracking: we simulate the neutron movement in our model, in our system, from birth to disappearance, to death, for many neutrons, and now we think about what to do with
this: we should somehow estimate the results. Yes, the collection of results is the second part of the Monte Carlo game. The recorded events are called scores, and these scores are combined to obtain statistical estimates. For me it was easier to understand when I compared the collection of these results in the Monte Carlo method to measurements in an experiment, based on the evaluation of flux integrals like this one: we integrate some response function multiplied by the flux over time, volume, and energy. It is like a detector. We have a detector in real life when we make an experiment; we measure some events, with a fission chamber for example, during some time, in a defined volume, and in a defined energy interval if we can separate one. The same happens in the Monte Carlo method. Here f is a response function: if it is one, for example, we estimate the flux; if it is a macroscopic cross-section, we estimate a reaction rate. Integration over time is equivalent to averaging over many neutron histories, and we must always apply a normalization. To collect the results we use batches: all scores from one generation of neutrons are grouped into a single batch, so when we say batch, we can think of it as one generation of neutrons. The number of neutrons in one batch, denoted with a capital I, can differ from batch to batch, and we have a number of batches, so the total number of histories is the product of these two. Then, to estimate the reaction rate in a batch (generation), we sum up all reaction rates like this, which is a kind of approximation of the integral, and we obtain the estimate of the reaction rate for that generation. This estimate, which we can denote in general as x, is for every batch a random parameter, not so interesting by itself: it changes from batch to batch, it oscillates somehow. What is more interesting is the statistically averaged mean value, averaged over the batches, over
generations: the statistical average, plus the standard deviation, plus the accuracy, by which we try to predict how accurately we know the results. A few words about statistical accuracy. The mean value is the result; it is calculated as the average. The standard deviation, the statistical accuracy, is calculated like this; we call it one sigma. We also use the term variance, which is the square of sigma, and the relative statistical error, which is the standard deviation divided by the mean value. As already mentioned yesterday: whenever you run a Monte Carlo simulation, please always give the results in this form, average plus or minus one sigma. The law of large numbers is relatively evident: the longer we run the simulation, the closer the mean of the results is to the expected value; the longer we run our exercise, the closer we are to the expected value. The qualitative meaning of statistical accuracy is how much the mean value is likely to deviate from the expected value, or how much the results of two identical but independent simulations may differ. In any case, the statistical accuracy of the simulation is not the same as the physical accuracy of the simulation; also trivial, but don't confuse the two accuracies. Another important theorem is the central limit theorem. To state the statistical accuracy of an estimator we need, in addition to the standard deviation, to know the probability distribution, because to apply our one sigma we must understand what one sigma means. For this we use the central limit theorem, which states that the sum of a large number of arbitrarily distributed random variables is itself a random variable following the normal distribution. When we roll one die, the numbers from 1 to 6 are distributed uniformly; but when we roll, say, five dice, calculate the sum, and repeat the experiment very many times, we will see that the distribution of the sum is normal. You can try this; it is not so evident, but it is true: there is a most probable value for the sum
of five dice. This is the formula for the Gaussian distribution, and the assumptions, when we do this experiment with dice for example, are: the distribution is the same for each term in the sum (all dice are the same); the values are independent (they don't affect each other); and both the mean and the standard deviation exist and are finite. This is the basis for using batches (generations) in the calculation: when we average over generations, the values of interest will be distributed according to the normal, Gaussian distribution. And once we know that the distribution is Gaussian, we know the confidence intervals: we know what one sigma means, what two sigma means, and what the probability of falling in different intervals is. It is a very well-known distribution, so we know very well what our statistical accuracy means. OK, a few words about non-analog Monte Carlo, very quickly. This is statistical trickery: everything that deviates from the honest analog calculation we can call non-analog. We do this to accelerate the calculation, in particular to improve the statistics of reaction rates. We can estimate the flux, which is important when a reaction rate is low: instead of scoring, for example, a reaction with a very small probability, we score scattering, which has a large probability, evaluate the flux by dividing by the scattering cross-section, and then, knowing the flux, multiply it by the small macroscopic cross-section of interest and obtain the reaction rate. If we sampled honestly and scored this rare reaction directly, it would take forever, because we would need very many histories to have enough events. This is one trick. Another trick is to improve the random walk algorithm, in order to score more frequently the neutrons having the largest contribution to the results and to get rid of the neutrons with low importance. For this we introduce the idea of the weight, so that neutrons become different: some of them more
important, others less important. OK, consider the estimation of results. I already mentioned that we can score physical interactions: in the analog method we honestly score all physical interactions for each individual reaction as they are, while in the non-analog method we estimate the flux and multiply it by the response function, such as a macroscopic cross-section. The flux can be found by different estimators, as also mentioned: the collision estimator, the track-length estimator, the surface flux and current estimator. Track length means that we calculate the length of the track and interpret it as flux. Statistical weight is the second part, where we distinguish neutrons by importance. In the analog case there is no weight, or it is one, so a single particle; in the non-analog case the weight can be above one, like several particles, or below one, less than one particle. OK: the k-effective of a cycle is the total weight of neutrons in the system divided by the number of neutrons born. If you think in terms of generations, this is the ratio of the number of neutrons at the end of the generation to the number at the beginning of the generation, the multiplication factor; but here, as a non-analog method, we use the total weight to calculate k-effective. At the beginning of each cycle the total weight of neutrons is normalized to the number of neutrons born, which is a fixed value; we always normalize to it because, as usual in a criticality (eigenvalue) problem, the flux is normalized, and we do the same in Monte Carlo. It is equivalent to dividing the fission source by k-effective in a deterministic method. And here is an extract from the code: when a neutron is too heavy we can split it, and when it is too light we can terminate it, and in this way we accelerate the calculation: we increase the number of important neutrons in important places and remove the neutrons which are not important but take time to calculate. One of the tricks, a non-analog method to terminate unimportant neutrons, is known
as Russian roulette. It is just a formal algorithm for how to terminate them; I will not go into the details here, it is described and coded in our script. We continue with splitting, which is also trivial: when, through fission, the weight of a neutron has increased, we can decide to apply the algorithm and split it, so that instead of one heavy neutron we have, for example, two neutrons. It is described here. Now that we have learned about these non-analog methods, we can briefly recap how we decide about interactions. When we know that at this point there is a real interaction, not a virtual one, we should decide what kind of interaction it is. But first we decide virtual or not, as I already described: we generate a random number and compare it with the fractional probability of a virtual interaction, and then it is either virtual or real. If it is real, we decide whether it is scattering or absorption. Again we do the same, but note that it is no longer the majorant material: we have the real cross-sections, we forget about the majorant, we are now again in the real material. This is the partial probability of scattering, Σs divided by Σ total, the fractional probability, while the total probability of interaction is one. We generate a random number from 0 to 1: if it falls here, it is scattering; if it falls here, it is absorption. For scattering, nothing changes in the non-analog case. And here is an important point: we can assume the scattering is isotropic or anisotropic. If it is anisotropic, OK, it is too complicated, I will not say anything here and refer you, for example, to Jaakko's thesis. If it is isotropic, it means we sample direction and energy independently; why? Because in the anisotropic case they are not independent, they depend on each other. So when there is scattering, we should decide where the neutron goes, and we sample that first; then we should decide what the energy of the secondary neutron is, and
again we do it as a second step. OK, it is described here what we do: we generate two random numbers, because to decide the direction we need two numbers. It is described here how it is done, and this plot shows it: practically, we decide on two angles and then use these two angles to find the direction components x and y. We do not calculate the z direction; we are in a one-dimensional problem, so we need only x and y. Well, it is two-dimensional; all directions should be equally probable. [In response to a question:] I need to think about this; I'm quite sure it's isotropic, but perhaps something is wrong. One angle goes like this and the other goes like this, and then you cover everything with these. OK, it could be, it could be, because we observed some difference with Serpent predictions; maybe we will find the reason for it there. But OK, I need to think about what you said, thank you. Well, OK: directions done, then the energy is sampled by the inverse method, yes, using integration of the cdf, and the cdf is a ratio of integrals of the scattering cross-section. It is written here, and we integrate it numerically: if we go from E′ to infinity we get one, so the cdf goes from zero to one, and at the moment when we reach, as I tried to explain here, this value of ξ, the randomly generated number from zero to one, we accept this energy as the energy of the secondary neutron. Yes, probably I did not explain it properly, but I managed to code this in one line of MATLAB using the group structure, and probably if you understand this line during your group work, you will understand how the inverse method works when we do numerical integration; it is relatively simple. OK, if it is not scattering, then it is absorption: capture plus fission. There are many methods for distinguishing between capture and fission, and we use the simplest one in our exercise. We say the neutron is not terminated, but its weight is changed by the eta value, where eta is the number of neutrons produced per neutron absorbed: it is exactly this ratio, the production cross
section, νΣf, divided by the absorption cross-section, and this ratio is eta. So in fact we do not distinguish between capture and fission, but we change the weight by this ratio. This means that we automatically terminate neutrons in non-multiplying regions, because there the production cross-section is zero, so we clean up and remove these neutrons. As for the energy of the generated neutron: we always consider that the energy of the neutron changes during this event. We think of part of the neutrons as being emitted in fission, and we change the energy according to the fission spectrum. It is similar to what we did for scattering, but probably a little simpler: we have the fission spectrum distribution, whose area under the curve is one, we generate ξ, integrate numerically, and as soon as the accumulated area equals ξ, we accept this as the energy of the fission neutron, the neutron after absorption. That is what we have in our algorithm, and that's it. So, it's almost one hour; I can go very quickly through the exercise. It starts with some initialization of parameters: the number of source neutrons, the number of cycles, active and inactive cycles, the geometry. Then we have the specification of the cross-sections for fuel, cladding, and coolant; this is the place where you change the composition of the unit cell. Then we calculate the majorant as the maximum of Σ total over fuel, cladding, and coolant. We have the number of groups; then we define the detector, and I highlighted this because this is the place where you should initialize your new detector. Then we define four vectors: two coordinates, weight, and group number. We specify the initial neutron source, the distribution of neutrons; I think they are all distributed uniformly over the whole unit cell. We sample the neutron energy group according to the fission spectrum, prepare some vectors, and then we start the main power iteration loop. As I mentioned, we normalize the weight to have at the beginning
the same number of neutrons. Then we have the loop over all neutron histories, and inside it a while loop which runs until the neutron is absorbed, so this while loop in fact goes from emission to absorption: it is the neutron's life. We first sample the free path, and if the collision is not virtual we calculate the direction. Here we should check what we are doing: we move the neutron to the new position, and then, if the neutron goes outside our unit cell, we should bring it back: if we start here and arrive here, then we should find this point here. These are like shift boundary conditions, repetitive boundary conditions; this is done in this block here. Then this block defines the geometry: it specifies the macroscopic cross-sections depending on the position in x and y. Then we find the total cross-section and the virtual cross-section here, and we sample the type of the collision: first whether it is virtual, then, if we decide it is real, we sample whether it is scattering or absorption. Here we already have a detector which calculates the spectrum: we sum up all scattering events and divide by the total scattering cross-section to obtain the flux spectrum, the flux per energy group over the whole system, and here is the place where you can introduce your new detector. Then, for absorption, we multiply the weight by eta, I think here, and then generate a new energy group according to the fission spectrum. Then we use Russian roulette to terminate low-weight neutrons, we clean up the killed neutrons and so on, clean up everything, and we split the too-heavy neutrons here. Finally we calculate k-effective as the ratio of the total weight at the end of the generation, and here we calculate the expected value of k-effective as the average over all cycles from the first one to the current one, and we calculate the standard deviation according to the formula for one sigma I showed you. That's it. Thank you for your attention.